May 15 10:16:23.741182 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 15 10:16:23.741203 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 09:09:56 -00 2025 May 15 10:16:23.741211 kernel: efi: EFI v2.70 by EDK II May 15 10:16:23.741217 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 15 10:16:23.741222 kernel: random: crng init done May 15 10:16:23.741228 kernel: ACPI: Early table checksum verification disabled May 15 10:16:23.741235 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 15 10:16:23.741242 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 15 10:16:23.741247 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:23.741253 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:23.741258 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:23.741264 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:23.741269 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:23.741275 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:23.741283 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:23.741289 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:23.741294 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:23.741303 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 15 10:16:23.741320 kernel: NUMA: Failed to initialise from firmware May 15 10:16:23.741328 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 15 10:16:23.741334 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] May 15 10:16:23.741340 kernel: Zone ranges: May 15 10:16:23.741346 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 15 10:16:23.741354 kernel: DMA32 empty May 15 10:16:23.741359 kernel: Normal empty May 15 10:16:23.741365 kernel: Movable zone start for each node May 15 10:16:23.741371 kernel: Early memory node ranges May 15 10:16:23.741377 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 15 10:16:23.741383 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 15 10:16:23.741388 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 15 10:16:23.741394 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 15 10:16:23.741400 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 15 10:16:23.741406 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 15 10:16:23.741411 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 15 10:16:23.741417 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 15 10:16:23.741424 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 15 10:16:23.741430 kernel: psci: probing for conduit method from ACPI. May 15 10:16:23.741435 kernel: psci: PSCIv1.1 detected in firmware. 
May 15 10:16:23.741441 kernel: psci: Using standard PSCI v0.2 function IDs May 15 10:16:23.741447 kernel: psci: Trusted OS migration not required May 15 10:16:23.741455 kernel: psci: SMC Calling Convention v1.1 May 15 10:16:23.741461 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 15 10:16:23.741469 kernel: ACPI: SRAT not present May 15 10:16:23.741484 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 15 10:16:23.741490 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 15 10:16:23.741497 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 15 10:16:23.741503 kernel: Detected PIPT I-cache on CPU0 May 15 10:16:23.741509 kernel: CPU features: detected: GIC system register CPU interface May 15 10:16:23.741515 kernel: CPU features: detected: Hardware dirty bit management May 15 10:16:23.741521 kernel: CPU features: detected: Spectre-v4 May 15 10:16:23.741527 kernel: CPU features: detected: Spectre-BHB May 15 10:16:23.741535 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 10:16:23.741541 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 10:16:23.741547 kernel: CPU features: detected: ARM erratum 1418040 May 15 10:16:23.741553 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 10:16:23.741559 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 15 10:16:23.741565 kernel: Policy zone: DMA May 15 10:16:23.741573 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7 May 15 10:16:23.741580 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 10:16:23.741586 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 10:16:23.741593 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 10:16:23.741599 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 10:16:23.741607 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved) May 15 10:16:23.741613 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 10:16:23.741619 kernel: trace event string verifier disabled May 15 10:16:23.741626 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 10:16:23.741633 kernel: rcu: RCU event tracing is enabled. May 15 10:16:23.741639 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 10:16:23.741646 kernel: Trampoline variant of Tasks RCU enabled. May 15 10:16:23.741652 kernel: Tracing variant of Tasks RCU enabled. May 15 10:16:23.741658 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 10:16:23.741664 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 10:16:23.741671 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 10:16:23.741678 kernel: GICv3: 256 SPIs implemented May 15 10:16:23.741685 kernel: GICv3: 0 Extended SPIs implemented May 15 10:16:23.741691 kernel: GICv3: Distributor has no Range Selector support May 15 10:16:23.741697 kernel: Root IRQ handler: gic_handle_irq May 15 10:16:23.741703 kernel: GICv3: 16 PPIs implemented May 15 10:16:23.741709 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 15 10:16:23.741715 kernel: ACPI: SRAT not present May 15 10:16:23.741721 kernel: ITS [mem 0x08080000-0x0809ffff] May 15 10:16:23.741728 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 15 10:16:23.741734 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 15 10:16:23.741740 kernel: GICv3: using LPI property table @0x00000000400d0000 May 15 10:16:23.741747 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 15 10:16:23.741754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 10:16:23.741760 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 15 10:16:23.741767 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 10:16:23.741773 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 10:16:23.741779 kernel: arm-pv: using stolen time PV May 15 10:16:23.741786 kernel: Console: colour dummy device 80x25 May 15 10:16:23.741792 kernel: ACPI: Core revision 20210730 May 15 10:16:23.741799 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 10:16:23.741805 kernel: pid_max: default: 32768 minimum: 301 May 15 10:16:23.741812 kernel: LSM: Security Framework initializing May 15 10:16:23.741820 kernel: SELinux: Initializing. May 15 10:16:23.741826 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 10:16:23.741832 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 10:16:23.741839 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 15 10:16:23.741845 kernel: rcu: Hierarchical SRCU implementation. May 15 10:16:23.741851 kernel: Platform MSI: ITS@0x8080000 domain created May 15 10:16:23.741858 kernel: PCI/MSI: ITS@0x8080000 domain created May 15 10:16:23.741864 kernel: Remapping and enabling EFI services. May 15 10:16:23.741870 kernel: smp: Bringing up secondary CPUs ... 
May 15 10:16:23.741878 kernel: Detected PIPT I-cache on CPU1 May 15 10:16:23.741884 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 15 10:16:23.741891 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 15 10:16:23.741897 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 10:16:23.741903 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 15 10:16:23.741909 kernel: Detected PIPT I-cache on CPU2 May 15 10:16:23.741916 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 15 10:16:23.741922 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 15 10:16:23.741929 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 10:16:23.741935 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 15 10:16:23.741942 kernel: Detected PIPT I-cache on CPU3 May 15 10:16:23.741949 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 15 10:16:23.741955 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 15 10:16:23.741962 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 10:16:23.741972 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 15 10:16:23.741980 kernel: smp: Brought up 1 node, 4 CPUs May 15 10:16:23.741986 kernel: SMP: Total of 4 processors activated. May 15 10:16:23.741993 kernel: CPU features: detected: 32-bit EL0 Support May 15 10:16:23.742000 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 10:16:23.742006 kernel: CPU features: detected: Common not Private translations May 15 10:16:23.742013 kernel: CPU features: detected: CRC32 instructions May 15 10:16:23.742020 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 10:16:23.742027 kernel: CPU features: detected: LSE atomic instructions May 15 10:16:23.742034 kernel: CPU features: detected: Privileged Access Never May 15 10:16:23.742041 kernel: CPU features: detected: RAS Extension Support May 15 10:16:23.742048 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 10:16:23.742054 kernel: CPU: All CPU(s) started at EL1 May 15 10:16:23.742062 kernel: alternatives: patching kernel code May 15 10:16:23.742068 kernel: devtmpfs: initialized May 15 10:16:23.742075 kernel: KASLR enabled May 15 10:16:23.742082 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 10:16:23.742089 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 10:16:23.742095 kernel: pinctrl core: initialized pinctrl subsystem May 15 10:16:23.742102 kernel: SMBIOS 3.0.0 present. 
May 15 10:16:23.742109 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 15 10:16:23.742115 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 10:16:23.742123 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 10:16:23.742130 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 10:16:23.742137 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 10:16:23.742144 kernel: audit: initializing netlink subsys (disabled) May 15 10:16:23.742150 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1 May 15 10:16:23.742157 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 10:16:23.742164 kernel: cpuidle: using governor menu May 15 10:16:23.742171 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 15 10:16:23.742177 kernel: ASID allocator initialised with 32768 entries May 15 10:16:23.742185 kernel: ACPI: bus type PCI registered May 15 10:16:23.742192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 10:16:23.742198 kernel: Serial: AMBA PL011 UART driver May 15 10:16:23.742206 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 15 10:16:23.742212 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 15 10:16:23.742219 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 15 10:16:23.742226 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 15 10:16:23.742232 kernel: cryptd: max_cpu_qlen set to 1000 May 15 10:16:23.742239 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 10:16:23.742247 kernel: ACPI: Added _OSI(Module Device) May 15 10:16:23.742254 kernel: ACPI: Added _OSI(Processor Device) May 15 10:16:23.742261 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 10:16:23.742268 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 10:16:23.742274 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 15 10:16:23.742281 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 15 10:16:23.742288 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 15 10:16:23.742295 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 10:16:23.742301 kernel: ACPI: Interpreter enabled May 15 10:16:23.742309 kernel: ACPI: Using GIC for interrupt routing May 15 10:16:23.742320 kernel: ACPI: MCFG table detected, 1 entries May 15 10:16:23.742327 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 15 10:16:23.742334 kernel: printk: console [ttyAMA0] enabled May 15 10:16:23.742341 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 10:16:23.742469 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 10:16:23.742564 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 15 10:16:23.742625 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 15 10:16:23.742687 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 15 10:16:23.742750 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 15 10:16:23.742758 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 15 10:16:23.742765 kernel: PCI host bridge to bus 0000:00 May 15 10:16:23.742836 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 15 10:16:23.742892 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 15 10:16:23.742947 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 15 10:16:23.743003 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 10:16:23.743077 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 15 10:16:23.743149 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 15 10:16:23.743212 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 15 10:16:23.743274 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 15 10:16:23.743352 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 15 10:16:23.743416 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 15 10:16:23.743490 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 15 10:16:23.743556 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 15 10:16:23.743611 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 15 10:16:23.743671 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 15 10:16:23.743726 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 15 10:16:23.743734 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 15 10:16:23.743741 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 15 10:16:23.743750 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 15 10:16:23.743757 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 15 10:16:23.743767 kernel: iommu: Default domain type: Translated May 15 10:16:23.743774 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 10:16:23.743781 kernel: vgaarb: loaded May 15 10:16:23.743787 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 10:16:23.743794 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 10:16:23.743800 kernel: PTP clock support registered May 15 10:16:23.743807 kernel: Registered efivars operations May 15 10:16:23.743815 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 10:16:23.743825 kernel: VFS: Disk quotas dquot_6.6.0 May 15 10:16:23.743832 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 10:16:23.743838 kernel: pnp: PnP ACPI init May 15 10:16:23.743945 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 15 10:16:23.743959 kernel: pnp: PnP ACPI: found 1 devices May 15 10:16:23.743966 kernel: NET: Registered PF_INET protocol family May 15 10:16:23.743973 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 10:16:23.743982 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 10:16:23.743989 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 10:16:23.743996 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 10:16:23.744002 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 15 10:16:23.744009 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 10:16:23.744016 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 10:16:23.744022 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 10:16:23.744032 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 10:16:23.744038 kernel: PCI: CLS 0 bytes, default 64 May 15 10:16:23.744047 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 10:16:23.744053 kernel: kvm [1]: HYP mode not available May 15 10:16:23.744060 kernel: Initialise system trusted keyrings May 15 10:16:23.744067 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 10:16:23.744073 kernel: Key type asymmetric registered May 15 10:16:23.744080 kernel: Asymmetric key parser 'x509' registered May 15 10:16:23.744087 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 15 10:16:23.744094 kernel: io scheduler mq-deadline registered May 15 10:16:23.744100 kernel: io scheduler kyber registered May 15 10:16:23.744108 kernel: io scheduler bfq registered May 15 10:16:23.744115 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 10:16:23.744125 kernel: ACPI: button: Power Button [PWRB] May 15 10:16:23.744132 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 15 10:16:23.744200 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 15 10:16:23.744211 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 10:16:23.744218 kernel: thunder_xcv, ver 1.0 May 15 10:16:23.744224 kernel: thunder_bgx, ver 1.0 May 15 10:16:23.744231 kernel: nicpf, ver 1.0 May 15 10:16:23.744239 kernel: nicvf, ver 1.0 May 15 10:16:23.744309 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 10:16:23.744383 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T10:16:23 UTC (1747304183) May 15 10:16:23.744393 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 10:16:23.744400 kernel: NET: Registered PF_INET6 protocol family May 15 10:16:23.744406 kernel: Segment Routing with IPv6 May 15 10:16:23.744413 kernel: In-situ OAM (IOAM) with IPv6 May 15 10:16:23.744420 kernel: NET: Registered PF_PACKET protocol family May 15 10:16:23.744429 kernel: Key type 
dns_resolver registered May 15 10:16:23.744436 kernel: registered taskstats version 1 May 15 10:16:23.744442 kernel: Loading compiled-in X.509 certificates May 15 10:16:23.744449 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 3679cbfb4d4756a2ddc177f0eaedea33fb5fdf2e' May 15 10:16:23.744456 kernel: Key type .fscrypt registered May 15 10:16:23.744462 kernel: Key type fscrypt-provisioning registered May 15 10:16:23.744469 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 10:16:23.744497 kernel: ima: Allocated hash algorithm: sha1 May 15 10:16:23.744505 kernel: ima: No architecture policies found May 15 10:16:23.744513 kernel: clk: Disabling unused clocks May 15 10:16:23.744520 kernel: Freeing unused kernel memory: 36416K May 15 10:16:23.744526 kernel: Run /init as init process May 15 10:16:23.744533 kernel: with arguments: May 15 10:16:23.744540 kernel: /init May 15 10:16:23.744546 kernel: with environment: May 15 10:16:23.744552 kernel: HOME=/ May 15 10:16:23.744559 kernel: TERM=linux May 15 10:16:23.744565 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 10:16:23.744576 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 10:16:23.744585 systemd[1]: Detected virtualization kvm. May 15 10:16:23.744592 systemd[1]: Detected architecture arm64. May 15 10:16:23.744599 systemd[1]: Running in initrd. May 15 10:16:23.744606 systemd[1]: No hostname configured, using default hostname. May 15 10:16:23.744614 systemd[1]: Hostname set to <localhost>. May 15 10:16:23.744621 systemd[1]: Initializing machine ID from VM UUID. May 15 10:16:23.744629 systemd[1]: Queued start job for default target initrd.target. May 15 10:16:23.744636 systemd[1]: Started systemd-ask-password-console.path. May 15 10:16:23.744643 systemd[1]: Reached target cryptsetup.target. May 15 10:16:23.744650 systemd[1]: Reached target paths.target. May 15 10:16:23.744657 systemd[1]: Reached target slices.target. May 15 10:16:23.744664 systemd[1]: Reached target swap.target. May 15 10:16:23.744671 systemd[1]: Reached target timers.target. May 15 10:16:23.744679 systemd[1]: Listening on iscsid.socket. May 15 10:16:23.744687 systemd[1]: Listening on iscsiuio.socket. May 15 10:16:23.744695 systemd[1]: Listening on systemd-journald-audit.socket. May 15 10:16:23.744702 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 10:16:23.744709 systemd[1]: Listening on systemd-journald.socket. May 15 10:16:23.744716 systemd[1]: Listening on systemd-networkd.socket. May 15 10:16:23.744723 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:16:23.744731 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:16:23.744738 systemd[1]: Reached target sockets.target. May 15 10:16:23.744746 systemd[1]: Starting kmod-static-nodes.service... May 15 10:16:23.744754 systemd[1]: Finished network-cleanup.service. May 15 10:16:23.744761 systemd[1]: Starting systemd-fsck-usr.service... May 15 10:16:23.744768 systemd[1]: Starting systemd-journald.service... May 15 10:16:23.744775 systemd[1]: Starting systemd-modules-load.service... May 15 10:16:23.744782 systemd[1]: Starting systemd-resolved.service... May 15 10:16:23.744789 systemd[1]: Starting systemd-vconsole-setup.service... 
May 15 10:16:23.744797 systemd[1]: Finished kmod-static-nodes.service. May 15 10:16:23.744804 systemd[1]: Finished systemd-fsck-usr.service. May 15 10:16:23.744812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:16:23.744819 systemd[1]: Finished systemd-vconsole-setup.service. May 15 10:16:23.744830 systemd-journald[290]: Journal started May 15 10:16:23.744874 systemd-journald[290]: Runtime Journal (/run/log/journal/3ffb512c55954583ac795fbba90e3e5e) is 6.0M, max 48.7M, 42.6M free. May 15 10:16:23.732758 systemd-modules-load[291]: Inserted module 'overlay' May 15 10:16:23.749810 kernel: audit: type=1130 audit(1747304183.745:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.749830 systemd[1]: Started systemd-journald.service. May 15 10:16:23.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.750967 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:16:23.757514 kernel: audit: type=1130 audit(1747304183.750:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.757532 kernel: audit: type=1130 audit(1747304183.753:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.757206 systemd[1]: Starting dracut-cmdline-ask.service... May 15 10:16:23.763563 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 10:16:23.763823 systemd-resolved[292]: Positive Trust Anchors: May 15 10:16:23.763837 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 10:16:23.763865 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 10:16:23.767943 systemd-resolved[292]: Defaulting to hostname 'linux'. May 15 10:16:23.772507 kernel: Bridge firewalling registered May 15 10:16:23.772425 systemd[1]: Started systemd-resolved.service. May 15 10:16:23.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:23.772497 systemd-modules-load[291]: Inserted module 'br_netfilter' May 15 10:16:23.777309 kernel: audit: type=1130 audit(1747304183.772:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.773304 systemd[1]: Reached target nss-lookup.target. May 15 10:16:23.778868 systemd[1]: Finished dracut-cmdline-ask.service. May 15 10:16:23.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.782914 systemd[1]: Starting dracut-cmdline.service... May 15 10:16:23.784490 kernel: audit: type=1130 audit(1747304183.779:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.790834 kernel: SCSI subsystem initialized May 15 10:16:23.791867 dracut-cmdline[308]: dracut-dracut-053 May 15 10:16:23.794004 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7 May 15 10:16:23.802142 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 10:16:23.802178 kernel: device-mapper: uevent: version 1.0.3 May 15 10:16:23.803500 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 15 10:16:23.805741 systemd-modules-load[291]: Inserted module 'dm_multipath' May 15 10:16:23.806500 systemd[1]: Finished systemd-modules-load.service. May 15 10:16:23.808304 systemd[1]: Starting systemd-sysctl.service... May 15 10:16:23.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.812514 kernel: audit: type=1130 audit(1747304183.807:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.817464 systemd[1]: Finished systemd-sysctl.service. May 15 10:16:23.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.821505 kernel: audit: type=1130 audit(1747304183.817:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.857496 kernel: Loading iSCSI transport class v2.0-870. May 15 10:16:23.869500 kernel: iscsi: registered transport (tcp) May 15 10:16:23.884508 kernel: iscsi: registered transport (qla4xxx) May 15 10:16:23.884564 kernel: QLogic iSCSI HBA Driver May 15 10:16:23.919103 systemd[1]: Finished dracut-cmdline.service. 
May 15 10:16:23.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.920802 systemd[1]: Starting dracut-pre-udev.service... May 15 10:16:23.924160 kernel: audit: type=1130 audit(1747304183.919:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:23.964531 kernel: raid6: neonx8 gen() 13741 MB/s May 15 10:16:23.981497 kernel: raid6: neonx8 xor() 10758 MB/s May 15 10:16:23.998498 kernel: raid6: neonx4 gen() 13510 MB/s May 15 10:16:24.015504 kernel: raid6: neonx4 xor() 11047 MB/s May 15 10:16:24.032497 kernel: raid6: neonx2 gen() 12902 MB/s May 15 10:16:24.049494 kernel: raid6: neonx2 xor() 10423 MB/s May 15 10:16:24.066505 kernel: raid6: neonx1 gen() 10568 MB/s May 15 10:16:24.083496 kernel: raid6: neonx1 xor() 8744 MB/s May 15 10:16:24.100503 kernel: raid6: int64x8 gen() 6268 MB/s May 15 10:16:24.117497 kernel: raid6: int64x8 xor() 3537 MB/s May 15 10:16:24.134503 kernel: raid6: int64x4 gen() 7208 MB/s May 15 10:16:24.151504 kernel: raid6: int64x4 xor() 3849 MB/s May 15 10:16:24.168496 kernel: raid6: int64x2 gen() 6147 MB/s May 15 10:16:24.185504 kernel: raid6: int64x2 xor() 3316 MB/s May 15 10:16:24.202496 kernel: raid6: int64x1 gen() 5040 MB/s May 15 10:16:24.219592 kernel: raid6: int64x1 xor() 2640 MB/s May 15 10:16:24.219603 kernel: raid6: using algorithm neonx8 gen() 13741 MB/s May 15 10:16:24.219611 kernel: raid6: .... xor() 10758 MB/s, rmw enabled May 15 10:16:24.220678 kernel: raid6: using neon recovery algorithm May 15 10:16:24.232998 kernel: xor: measuring software checksum speed May 15 10:16:24.233025 kernel: 8regs : 16647 MB/sec May 15 10:16:24.233662 kernel: 32regs : 20277 MB/sec May 15 10:16:24.234903 kernel: arm64_neon : 27496 MB/sec May 15 10:16:24.234915 kernel: xor: using function: arm64_neon (27496 MB/sec) May 15 10:16:24.291506 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 15 10:16:24.300760 systemd[1]: Finished dracut-pre-udev.service. May 15 10:16:24.304569 kernel: audit: type=1130 audit(1747304184.301:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:24.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:24.304000 audit: BPF prog-id=7 op=LOAD May 15 10:16:24.304000 audit: BPF prog-id=8 op=LOAD May 15 10:16:24.304901 systemd[1]: Starting systemd-udevd.service... May 15 10:16:24.316409 systemd-udevd[491]: Using default interface naming scheme 'v252'. May 15 10:16:24.319718 systemd[1]: Started systemd-udevd.service. May 15 10:16:24.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:24.321175 systemd[1]: Starting dracut-pre-trigger.service... May 15 10:16:24.332655 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation May 15 10:16:24.357556 systemd[1]: Finished dracut-pre-trigger.service. 
May 15 10:16:24.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:24.359056 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:16:24.392470 systemd[1]: Finished systemd-udev-trigger.service. May 15 10:16:24.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:24.418513 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 10:16:24.422489 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 10:16:24.422506 kernel: GPT:9289727 != 19775487 May 15 10:16:24.422515 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 10:16:24.422525 kernel: GPT:9289727 != 19775487 May 15 10:16:24.422537 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 10:16:24.422545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:16:24.437135 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 15 10:16:24.439148 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (549) May 15 10:16:24.440264 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 15 10:16:24.441439 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 15 10:16:24.452360 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:16:24.455952 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 15 10:16:24.459790 systemd[1]: Starting disk-uuid.service... May 15 10:16:24.465649 disk-uuid[563]: Primary Header is updated. May 15 10:16:24.465649 disk-uuid[563]: Secondary Entries is updated. May 15 10:16:24.465649 disk-uuid[563]: Secondary Header is updated. May 15 10:16:24.468755 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:16:25.479937 disk-uuid[564]: The operation has completed successfully. May 15 10:16:25.481055 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:16:25.503303 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 10:16:25.504500 systemd[1]: Finished disk-uuid.service. May 15 10:16:25.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.506824 systemd[1]: Starting verity-setup.service... May 15 10:16:25.518496 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 10:16:25.540645 systemd[1]: Found device dev-mapper-usr.device. May 15 10:16:25.542159 systemd[1]: Mounting sysusr-usr.mount... May 15 10:16:25.542976 systemd[1]: Finished verity-setup.service. May 15 10:16:25.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.590289 systemd[1]: Mounted sysusr-usr.mount. May 15 10:16:25.591632 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
May 15 10:16:25.591161 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 15 10:16:25.591829 systemd[1]: Starting ignition-setup.service... May 15 10:16:25.594120 systemd[1]: Starting parse-ip-for-networkd.service... May 15 10:16:25.600291 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 10:16:25.600332 kernel: BTRFS info (device vda6): using free space tree May 15 10:16:25.600343 kernel: BTRFS info (device vda6): has skinny extents May 15 10:16:25.608547 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 10:16:25.614188 systemd[1]: Finished ignition-setup.service. May 15 10:16:25.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.615774 systemd[1]: Starting ignition-fetch-offline.service... May 15 10:16:25.683139 systemd[1]: Finished parse-ip-for-networkd.service. May 15 10:16:25.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.684000 audit: BPF prog-id=9 op=LOAD May 15 10:16:25.685406 systemd[1]: Starting systemd-networkd.service... May 15 10:16:25.706493 ignition[649]: Ignition 2.14.0 May 15 10:16:25.706515 ignition[649]: Stage: fetch-offline May 15 10:16:25.706554 ignition[649]: no configs at "/usr/lib/ignition/base.d" May 15 10:16:25.706562 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:25.706690 ignition[649]: parsed url from cmdline: "" May 15 10:16:25.706693 ignition[649]: no config URL provided May 15 10:16:25.706698 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" May 15 10:16:25.706705 ignition[649]: no config at "/usr/lib/ignition/user.ign" May 15 10:16:25.706723 ignition[649]: op(1): [started] loading QEMU firmware config module May 15 10:16:25.706728 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 10:16:25.715169 systemd-networkd[741]: lo: Link UP May 15 10:16:25.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.715182 systemd-networkd[741]: lo: Gained carrier May 15 10:16:25.715757 systemd-networkd[741]: Enumeration completed May 15 10:16:25.716029 systemd[1]: Started systemd-networkd.service. May 15 10:16:25.720564 ignition[649]: op(1): [finished] loading QEMU firmware config module May 15 10:16:25.716104 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:16:25.717009 systemd[1]: Reached target network.target. May 15 10:16:25.717694 systemd-networkd[741]: eth0: Link UP May 15 10:16:25.717698 systemd-networkd[741]: eth0: Gained carrier May 15 10:16:25.718807 systemd[1]: Starting iscsiuio.service... May 15 10:16:25.727424 systemd[1]: Started iscsiuio.service. May 15 10:16:25.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.729024 systemd[1]: Starting iscsid.service... 
May 15 10:16:25.732218 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 10:16:25.732218 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 15 10:16:25.732218 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 10:16:25.732218 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored. May 15 10:16:25.732218 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 10:16:25.732218 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 10:16:25.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.733814 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:16:25.735157 systemd[1]: Started iscsid.service. May 15 10:16:25.741495 systemd[1]: Starting dracut-initqueue.service... May 15 10:16:25.751303 systemd[1]: Finished dracut-initqueue.service. May 15 10:16:25.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.752364 systemd[1]: Reached target remote-fs-pre.target. May 15 10:16:25.753816 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:16:25.755341 systemd[1]: Reached target remote-fs.target. May 15 10:16:25.757685 systemd[1]: Starting dracut-pre-mount.service... May 15 10:16:25.764048 ignition[649]: parsing config with SHA512: 60e8fcc43643e1ecaed849de5ebfbb56ef6c7435e365d9f1a9d8d7e0ac78498c6690b18ec73ae00d24935fca8f20215c1c81a179006c1c4dd51f911e98f5c6c6 May 15 10:16:25.765197 systemd[1]: Finished dracut-pre-mount.service. May 15 10:16:25.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.770922 unknown[649]: fetched base config from "system" May 15 10:16:25.770933 unknown[649]: fetched user config from "qemu" May 15 10:16:25.771405 ignition[649]: fetch-offline: fetch-offline passed May 15 10:16:25.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.772434 systemd[1]: Finished ignition-fetch-offline.service. May 15 10:16:25.771461 ignition[649]: Ignition finished successfully May 15 10:16:25.774037 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 10:16:25.774734 systemd[1]: Starting ignition-kargs.service... 
May 15 10:16:25.783088 ignition[762]: Ignition 2.14.0 May 15 10:16:25.783105 ignition[762]: Stage: kargs May 15 10:16:25.783190 ignition[762]: no configs at "/usr/lib/ignition/base.d" May 15 10:16:25.783199 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:25.785381 systemd[1]: Finished ignition-kargs.service. May 15 10:16:25.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.783982 ignition[762]: kargs: kargs passed May 15 10:16:25.784020 ignition[762]: Ignition finished successfully May 15 10:16:25.787763 systemd[1]: Starting ignition-disks.service... May 15 10:16:25.793841 ignition[768]: Ignition 2.14.0 May 15 10:16:25.793857 ignition[768]: Stage: disks May 15 10:16:25.793937 ignition[768]: no configs at "/usr/lib/ignition/base.d" May 15 10:16:25.793946 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:25.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.795270 systemd[1]: Finished ignition-disks.service. May 15 10:16:25.794704 ignition[768]: disks: disks passed May 15 10:16:25.796743 systemd[1]: Reached target initrd-root-device.target. May 15 10:16:25.794741 ignition[768]: Ignition finished successfully May 15 10:16:25.798283 systemd[1]: Reached target local-fs-pre.target. May 15 10:16:25.799663 systemd[1]: Reached target local-fs.target. May 15 10:16:25.800848 systemd[1]: Reached target sysinit.target. May 15 10:16:25.802171 systemd[1]: Reached target basic.target. May 15 10:16:25.804244 systemd[1]: Starting systemd-fsck-root.service... May 15 10:16:25.814819 systemd-fsck[776]: ROOT: clean, 623/553520 files, 56022/553472 blocks May 15 10:16:25.818373 systemd[1]: Finished systemd-fsck-root.service. May 15 10:16:25.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.820010 systemd[1]: Mounting sysroot.mount... May 15 10:16:25.825493 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 10:16:25.825723 systemd[1]: Mounted sysroot.mount. May 15 10:16:25.826455 systemd[1]: Reached target initrd-root-fs.target. May 15 10:16:25.829195 systemd[1]: Mounting sysroot-usr.mount... May 15 10:16:25.830096 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 10:16:25.830136 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 10:16:25.830161 systemd[1]: Reached target ignition-diskful.target. May 15 10:16:25.831944 systemd[1]: Mounted sysroot-usr.mount. May 15 10:16:25.833804 systemd[1]: Starting initrd-setup-root.service... 
May 15 10:16:25.837934 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory May 15 10:16:25.841303 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory May 15 10:16:25.845932 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory May 15 10:16:25.850681 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory May 15 10:16:25.877049 systemd[1]: Finished initrd-setup-root.service. May 15 10:16:25.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.878682 systemd[1]: Starting ignition-mount.service... May 15 10:16:25.879971 systemd[1]: Starting sysroot-boot.service... May 15 10:16:25.884033 bash[827]: umount: /sysroot/usr/share/oem: not mounted. May 15 10:16:25.895198 ignition[829]: INFO : Ignition 2.14.0 May 15 10:16:25.896148 ignition[829]: INFO : Stage: mount May 15 10:16:25.896980 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:16:25.898042 systemd[1]: Finished sysroot-boot.service. May 15 10:16:25.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:25.899509 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:25.900590 ignition[829]: INFO : mount: mount passed May 15 10:16:25.900590 ignition[829]: INFO : Ignition finished successfully May 15 10:16:25.901292 systemd[1]: Finished ignition-mount.service. May 15 10:16:25.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:26.550107 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 15 10:16:26.567269 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838) May 15 10:16:26.567317 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 10:16:26.567328 kernel: BTRFS info (device vda6): using free space tree May 15 10:16:26.567961 kernel: BTRFS info (device vda6): has skinny extents May 15 10:16:26.571543 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 15 10:16:26.573174 systemd[1]: Starting ignition-files.service... 
May 15 10:16:26.589991 ignition[858]: INFO : Ignition 2.14.0 May 15 10:16:26.589991 ignition[858]: INFO : Stage: files May 15 10:16:26.591631 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:16:26.591631 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:26.591631 ignition[858]: DEBUG : files: compiled without relabeling support, skipping May 15 10:16:26.595243 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 10:16:26.595243 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 10:16:26.598188 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 10:16:26.598188 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 10:16:26.598188 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 10:16:26.598188 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 10:16:26.597708 unknown[858]: wrote ssh authorized keys file for user: core May 15 10:16:26.605229 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 15 10:16:26.741690 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 10:16:26.850890 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 10:16:26.850890 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 10:16:26.854706 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 15 10:16:27.144877 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 10:16:27.336639 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 10:16:27.336639 ignition[858]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 15 10:16:27.340583 ignition[858]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 10:16:27.371191 ignition[858]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 10:16:27.372786 ignition[858]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 15 10:16:27.372786 ignition[858]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 10:16:27.372786 ignition[858]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 10:16:27.372786 ignition[858]: INFO : files: files passed May 15 10:16:27.372786 ignition[858]: INFO : Ignition finished successfully May 15 10:16:27.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:27.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.372698 systemd[1]: Finished ignition-files.service. May 15 10:16:27.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.374500 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 15 10:16:27.375790 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 15 10:16:27.387202 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 15 10:16:27.376516 systemd[1]: Starting ignition-quench.service... May 15 10:16:27.389437 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 10:16:27.379949 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 10:16:27.380029 systemd[1]: Finished ignition-quench.service. May 15 10:16:27.381239 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 15 10:16:27.382649 systemd[1]: Reached target ignition-complete.target. May 15 10:16:27.384601 systemd[1]: Starting initrd-parse-etc.service... May 15 10:16:27.396809 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 10:16:27.396900 systemd[1]: Finished initrd-parse-etc.service. May 15 10:16:27.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.398627 systemd[1]: Reached target initrd-fs.target. May 15 10:16:27.399929 systemd[1]: Reached target initrd.target. May 15 10:16:27.401299 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 15 10:16:27.402144 systemd[1]: Starting dracut-pre-pivot.service... May 15 10:16:27.411974 systemd[1]: Finished dracut-pre-pivot.service. May 15 10:16:27.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.413613 systemd[1]: Starting initrd-cleanup.service... May 15 10:16:27.421123 systemd[1]: Stopped target nss-lookup.target. May 15 10:16:27.422190 systemd[1]: Stopped target remote-cryptsetup.target. May 15 10:16:27.423745 systemd[1]: Stopped target timers.target. May 15 10:16:27.425175 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 10:16:27.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.425280 systemd[1]: Stopped dracut-pre-pivot.service. May 15 10:16:27.426926 systemd[1]: Stopped target initrd.target. May 15 10:16:27.428356 systemd[1]: Stopped target basic.target. 
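
The Ignition "files" stage above downloads artifacts, creates the sysext symlink, and writes/presets systemd units before the machine leaves the initrd. A minimal, spec-3-style sketch of a config fragment that would request some of the same operations follows; the real config used on this host is not part of the log, so only the URLs and paths copied from the entries above are factual, and the version string, SSH key placeholder, and unit bodies are assumptions.

```python
import json

# Illustrative sketch only: NOT the config that produced the log above.
# URLs/paths are taken from the Ignition entries; everything else is assumed.
config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder key)"]},
        ]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True},    # unit body elided
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))
```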
May 15 10:16:27.429713 systemd[1]: Stopped target ignition-complete.target. May 15 10:16:27.431162 systemd[1]: Stopped target ignition-diskful.target. May 15 10:16:27.432612 systemd[1]: Stopped target initrd-root-device.target. May 15 10:16:27.434160 systemd[1]: Stopped target remote-fs.target. May 15 10:16:27.435636 systemd[1]: Stopped target remote-fs-pre.target. May 15 10:16:27.437271 systemd[1]: Stopped target sysinit.target. May 15 10:16:27.438630 systemd[1]: Stopped target local-fs.target. May 15 10:16:27.440085 systemd[1]: Stopped target local-fs-pre.target. May 15 10:16:27.440576 systemd-networkd[741]: eth0: Gained IPv6LL May 15 10:16:27.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.441688 systemd[1]: Stopped target swap.target. May 15 10:16:27.442993 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 10:16:27.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.443098 systemd[1]: Stopped dracut-pre-mount.service. May 15 10:16:27.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.444626 systemd[1]: Stopped target cryptsetup.target. May 15 10:16:27.446064 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 10:16:27.446163 systemd[1]: Stopped dracut-initqueue.service. May 15 10:16:27.447501 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 10:16:27.447600 systemd[1]: Stopped ignition-fetch-offline.service. May 15 10:16:27.449207 systemd[1]: Stopped target paths.target. May 15 10:16:27.450465 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 10:16:27.454585 systemd[1]: Stopped systemd-ask-password-console.path. May 15 10:16:27.455538 systemd[1]: Stopped target slices.target. May 15 10:16:27.456764 systemd[1]: Stopped target sockets.target. May 15 10:16:27.458291 systemd[1]: iscsid.socket: Deactivated successfully. May 15 10:16:27.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.458370 systemd[1]: Closed iscsid.socket. May 15 10:16:27.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.459605 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 10:16:27.459669 systemd[1]: Closed iscsiuio.socket. May 15 10:16:27.460810 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 10:16:27.460910 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 15 10:16:27.462279 systemd[1]: ignition-files.service: Deactivated successfully. May 15 10:16:27.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.462385 systemd[1]: Stopped ignition-files.service. 
May 15 10:16:27.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.471946 ignition[899]: INFO : Ignition 2.14.0 May 15 10:16:27.471946 ignition[899]: INFO : Stage: umount May 15 10:16:27.471946 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:16:27.471946 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:27.464401 systemd[1]: Stopping ignition-mount.service... May 15 10:16:27.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.478004 ignition[899]: INFO : umount: umount passed May 15 10:16:27.478004 ignition[899]: INFO : Ignition finished successfully May 15 10:16:27.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.466393 systemd[1]: Stopping sysroot-boot.service... May 15 10:16:27.467873 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 10:16:27.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.468003 systemd[1]: Stopped systemd-udev-trigger.service. May 15 10:16:27.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.469623 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 10:16:27.469721 systemd[1]: Stopped dracut-pre-trigger.service. May 15 10:16:27.475012 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 10:16:27.475390 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 10:16:27.475465 systemd[1]: Finished initrd-cleanup.service. May 15 10:16:27.477496 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 10:16:27.477576 systemd[1]: Stopped ignition-mount.service. May 15 10:16:27.478739 systemd[1]: Stopped target network.target. May 15 10:16:27.481317 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 10:16:27.481375 systemd[1]: Stopped ignition-disks.service. May 15 10:16:27.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.482811 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 10:16:27.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:27.482857 systemd[1]: Stopped ignition-kargs.service. May 15 10:16:27.484134 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 10:16:27.484177 systemd[1]: Stopped ignition-setup.service. May 15 10:16:27.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.504000 audit: BPF prog-id=6 op=UNLOAD May 15 10:16:27.485898 systemd[1]: Stopping systemd-networkd.service... May 15 10:16:27.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.487292 systemd[1]: Stopping systemd-resolved.service... May 15 10:16:27.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.493525 systemd-networkd[741]: eth0: DHCPv6 lease lost May 15 10:16:27.509000 audit: BPF prog-id=9 op=UNLOAD May 15 10:16:27.495582 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 10:16:27.495677 systemd[1]: Stopped systemd-networkd.service. May 15 10:16:27.498411 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 10:16:27.498517 systemd[1]: Stopped systemd-resolved.service. May 15 10:16:27.499832 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 10:16:27.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.499861 systemd[1]: Closed systemd-networkd.socket. May 15 10:16:27.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.502017 systemd[1]: Stopping network-cleanup.service... May 15 10:16:27.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.503434 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 10:16:27.503517 systemd[1]: Stopped parse-ip-for-networkd.service. May 15 10:16:27.504578 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:16:27.504635 systemd[1]: Stopped systemd-sysctl.service. May 15 10:16:27.506714 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 10:16:27.506756 systemd[1]: Stopped systemd-modules-load.service. May 15 10:16:27.507757 systemd[1]: Stopping systemd-udevd.service... May 15 10:16:27.512232 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 10:16:27.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.515937 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 10:16:27.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:27.516043 systemd[1]: Stopped network-cleanup.service. May 15 10:16:27.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.518365 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 10:16:27.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.518499 systemd[1]: Stopped systemd-udevd.service. May 15 10:16:27.519846 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 10:16:27.519923 systemd[1]: Stopped sysroot-boot.service. May 15 10:16:27.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.521770 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 10:16:27.521802 systemd[1]: Closed systemd-udevd-control.socket. May 15 10:16:27.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.523021 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 10:16:27.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.523053 systemd[1]: Closed systemd-udevd-kernel.socket. May 15 10:16:27.525058 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 10:16:27.525100 systemd[1]: Stopped dracut-pre-udev.service. May 15 10:16:27.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.533230 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 10:16:27.533275 systemd[1]: Stopped dracut-cmdline.service. May 15 10:16:27.534983 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 10:16:27.535024 systemd[1]: Stopped dracut-cmdline-ask.service. May 15 10:16:27.536512 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 10:16:27.536551 systemd[1]: Stopped initrd-setup-root.service. May 15 10:16:27.539729 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 15 10:16:27.541992 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 10:16:27.542054 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 15 10:16:27.544554 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 10:16:27.544597 systemd[1]: Stopped kmod-static-nodes.service. May 15 10:16:27.546232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 10:16:27.546275 systemd[1]: Stopped systemd-vconsole-setup.service. 
May 15 10:16:27.548644 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 10:16:27.549299 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 10:16:27.549441 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 15 10:16:27.551783 systemd[1]: Reached target initrd-switch-root.target. May 15 10:16:27.553886 systemd[1]: Starting initrd-switch-root.service... May 15 10:16:27.560324 systemd[1]: Switching root. May 15 10:16:27.581555 iscsid[748]: iscsid shutting down. May 15 10:16:27.582244 systemd-journald[290]: Journal stopped May 15 10:16:29.635110 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). May 15 10:16:29.635165 kernel: SELinux: Class mctp_socket not defined in policy. May 15 10:16:29.635182 kernel: SELinux: Class anon_inode not defined in policy. May 15 10:16:29.635192 kernel: SELinux: the above unknown classes and permissions will be allowed May 15 10:16:29.635202 kernel: SELinux: policy capability network_peer_controls=1 May 15 10:16:29.635216 kernel: SELinux: policy capability open_perms=1 May 15 10:16:29.635228 kernel: SELinux: policy capability extended_socket_class=1 May 15 10:16:29.635242 kernel: SELinux: policy capability always_check_network=0 May 15 10:16:29.635252 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 10:16:29.635261 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 10:16:29.635270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 10:16:29.635279 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 10:16:29.635290 systemd[1]: Successfully loaded SELinux policy in 35.168ms. May 15 10:16:29.635316 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.429ms. May 15 10:16:29.635335 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 10:16:29.635347 systemd[1]: Detected virtualization kvm. May 15 10:16:29.635357 systemd[1]: Detected architecture arm64. May 15 10:16:29.635367 systemd[1]: Detected first boot. May 15 10:16:29.635378 systemd[1]: Initializing machine ID from VM UUID. May 15 10:16:29.635388 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 15 10:16:29.635399 systemd[1]: Populated /etc with preset unit settings. May 15 10:16:29.635410 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:16:29.635423 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:16:29.635434 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
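
The "Journal stopped" entry above and the "Journal started" entry further below (at 10:16:29.636392) bracket the switch-root into the real root filesystem. The per-entry timestamps can be parsed to measure such phase durations; a small sketch, assuming the "Mon DD HH:MM:SS.us" prefix format seen throughout this log and an arbitrary fixed year:

```python
from datetime import datetime

# Parses the "May 15 10:16:27.582244" prefix used by every entry in this log.
# The prefix carries no year, so one is assumed here.
def parse_ts(entry: str, year: int = 2025) -> datetime:
    stamp = " ".join(entry.split()[:3])
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

stopped = parse_ts("May 15 10:16:27.582244 systemd-journald[290]: Journal stopped")
started = parse_ts("May 15 10:16:29.636392 systemd-journald[1006]: Journal started")
gap = (started - stopped).total_seconds()
print(f"journal unavailable for {gap:.3f}s across switch-root")  # about 2.054s here
```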
May 15 10:16:29.635445 kernel: kauditd_printk_skb: 79 callbacks suppressed May 15 10:16:29.635455 kernel: audit: type=1334 audit(1747304189.473:83): prog-id=12 op=LOAD May 15 10:16:29.635464 kernel: audit: type=1334 audit(1747304189.473:84): prog-id=3 op=UNLOAD May 15 10:16:29.635492 kernel: audit: type=1334 audit(1747304189.474:85): prog-id=13 op=LOAD May 15 10:16:29.635503 systemd[1]: iscsiuio.service: Deactivated successfully. May 15 10:16:29.635513 kernel: audit: type=1334 audit(1747304189.475:86): prog-id=14 op=LOAD May 15 10:16:29.635524 systemd[1]: Stopped iscsiuio.service. May 15 10:16:29.635534 kernel: audit: type=1334 audit(1747304189.475:87): prog-id=4 op=UNLOAD May 15 10:16:29.635543 kernel: audit: type=1334 audit(1747304189.475:88): prog-id=5 op=UNLOAD May 15 10:16:29.635553 kernel: audit: type=1131 audit(1747304189.476:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.635563 systemd[1]: iscsid.service: Deactivated successfully. May 15 10:16:29.635574 systemd[1]: Stopped iscsid.service. May 15 10:16:29.635586 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 10:16:29.635597 kernel: audit: type=1131 audit(1747304189.483:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.635607 systemd[1]: Stopped initrd-switch-root.service. May 15 10:16:29.635618 kernel: audit: type=1131 audit(1747304189.486:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.635631 kernel: audit: type=1334 audit(1747304189.491:92): prog-id=12 op=UNLOAD May 15 10:16:29.635642 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 10:16:29.635653 systemd[1]: Created slice system-addon\x2dconfig.slice. May 15 10:16:29.635665 systemd[1]: Created slice system-addon\x2drun.slice. May 15 10:16:29.635676 systemd[1]: Created slice system-getty.slice. May 15 10:16:29.635687 systemd[1]: Created slice system-modprobe.slice. May 15 10:16:29.635698 systemd[1]: Created slice system-serial\x2dgetty.slice. May 15 10:16:29.635709 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 15 10:16:29.635720 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 15 10:16:29.635730 systemd[1]: Created slice user.slice. May 15 10:16:29.635740 systemd[1]: Started systemd-ask-password-console.path. May 15 10:16:29.635751 systemd[1]: Started systemd-ask-password-wall.path. May 15 10:16:29.635762 systemd[1]: Set up automount boot.automount. May 15 10:16:29.635773 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 15 10:16:29.635783 systemd[1]: Stopped target initrd-switch-root.target. May 15 10:16:29.635793 systemd[1]: Stopped target initrd-fs.target. May 15 10:16:29.635804 systemd[1]: Stopped target initrd-root-fs.target. May 15 10:16:29.635814 systemd[1]: Reached target integritysetup.target. May 15 10:16:29.635825 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:16:29.635835 systemd[1]: Reached target remote-fs.target. May 15 10:16:29.635845 systemd[1]: Reached target slices.target. May 15 10:16:29.635857 systemd[1]: Reached target swap.target. 
May 15 10:16:29.635867 systemd[1]: Reached target torcx.target. May 15 10:16:29.635878 systemd[1]: Reached target veritysetup.target. May 15 10:16:29.635889 systemd[1]: Listening on systemd-coredump.socket. May 15 10:16:29.635899 systemd[1]: Listening on systemd-initctl.socket. May 15 10:16:29.635909 systemd[1]: Listening on systemd-networkd.socket. May 15 10:16:29.635919 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:16:29.635930 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:16:29.635940 systemd[1]: Listening on systemd-userdbd.socket. May 15 10:16:29.635952 systemd[1]: Mounting dev-hugepages.mount... May 15 10:16:29.635962 systemd[1]: Mounting dev-mqueue.mount... May 15 10:16:29.635973 systemd[1]: Mounting media.mount... May 15 10:16:29.635983 systemd[1]: Mounting sys-kernel-debug.mount... May 15 10:16:29.635993 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 10:16:29.636003 systemd[1]: Mounting tmp.mount... May 15 10:16:29.636013 systemd[1]: Starting flatcar-tmpfiles.service... May 15 10:16:29.636023 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:29.636034 systemd[1]: Starting kmod-static-nodes.service... May 15 10:16:29.636045 systemd[1]: Starting modprobe@configfs.service... May 15 10:16:29.636056 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:29.636067 systemd[1]: Starting modprobe@drm.service... May 15 10:16:29.636077 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:29.636087 systemd[1]: Starting modprobe@fuse.service... May 15 10:16:29.636098 systemd[1]: Starting modprobe@loop.service... May 15 10:16:29.636108 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 10:16:29.636119 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 10:16:29.636130 systemd[1]: Stopped systemd-fsck-root.service. May 15 10:16:29.636142 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 10:16:29.636152 systemd[1]: Stopped systemd-fsck-usr.service. May 15 10:16:29.636162 systemd[1]: Stopped systemd-journald.service. May 15 10:16:29.636172 kernel: fuse: init (API version 7.34) May 15 10:16:29.636183 systemd[1]: Starting systemd-journald.service... May 15 10:16:29.636193 kernel: loop: module loaded May 15 10:16:29.636202 systemd[1]: Starting systemd-modules-load.service... May 15 10:16:29.636213 systemd[1]: Starting systemd-network-generator.service... May 15 10:16:29.636224 systemd[1]: Starting systemd-remount-fs.service... May 15 10:16:29.636234 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:16:29.636246 systemd[1]: verity-setup.service: Deactivated successfully. May 15 10:16:29.636257 systemd[1]: Stopped verity-setup.service. May 15 10:16:29.636267 systemd[1]: Mounted dev-hugepages.mount. May 15 10:16:29.636277 systemd[1]: Mounted dev-mqueue.mount. May 15 10:16:29.636287 systemd[1]: Mounted media.mount. May 15 10:16:29.636298 systemd[1]: Mounted sys-kernel-debug.mount. May 15 10:16:29.636316 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 10:16:29.636326 systemd[1]: Mounted tmp.mount. May 15 10:16:29.636337 systemd[1]: Finished kmod-static-nodes.service. May 15 10:16:29.636348 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 10:16:29.636359 systemd[1]: Finished modprobe@configfs.service. May 15 10:16:29.636369 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
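
The modprobe@<module>.service instances started above each load one kernel module (configfs, dm_mod, drm, efi_pstore, fuse, loop; the kernel's own "fuse: init" and "loop: module loaded" lines confirm two of them). A quick cross-check sketch against /proc/modules; note that modules built into the kernel will not appear there:

```python
# Compare the modules requested via modprobe@.service with what /proc/modules lists.
wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}
with open("/proc/modules") as fh:
    loaded = {line.split()[0] for line in fh}   # first column is the module name
print(sorted(wanted & loaded))
```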
May 15 10:16:29.636379 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:29.636392 systemd-journald[1006]: Journal started May 15 10:16:29.636434 systemd-journald[1006]: Runtime Journal (/run/log/journal/3ffb512c55954583ac795fbba90e3e5e) is 6.0M, max 48.7M, 42.6M free. May 15 10:16:27.650000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 10:16:27.731000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:16:27.731000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:16:27.731000 audit: BPF prog-id=10 op=LOAD May 15 10:16:27.731000 audit: BPF prog-id=10 op=UNLOAD May 15 10:16:27.731000 audit: BPF prog-id=11 op=LOAD May 15 10:16:27.731000 audit: BPF prog-id=11 op=UNLOAD May 15 10:16:27.772000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 15 10:16:27.772000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd89c a1=400013ede0 a2=4000145040 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:16:27.772000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:16:27.774000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 15 10:16:27.774000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001bd979 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:16:27.774000 audit: CWD cwd="/" May 15 10:16:27.774000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:16:27.774000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:16:27.774000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:16:29.473000 audit: BPF prog-id=12 op=LOAD May 15 10:16:29.473000 audit: BPF prog-id=3 op=UNLOAD May 15 10:16:29.474000 audit: BPF prog-id=13 op=LOAD May 15 
10:16:29.475000 audit: BPF prog-id=14 op=LOAD May 15 10:16:29.475000 audit: BPF prog-id=4 op=UNLOAD May 15 10:16:29.475000 audit: BPF prog-id=5 op=UNLOAD May 15 10:16:29.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.491000 audit: BPF prog-id=12 op=UNLOAD May 15 10:16:29.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.601000 audit: BPF prog-id=15 op=LOAD May 15 10:16:29.601000 audit: BPF prog-id=16 op=LOAD May 15 10:16:29.601000 audit: BPF prog-id=17 op=LOAD May 15 10:16:29.601000 audit: BPF prog-id=13 op=UNLOAD May 15 10:16:29.601000 audit: BPF prog-id=14 op=UNLOAD May 15 10:16:29.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 15 10:16:29.633000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 10:16:29.633000 audit[1006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc84552f0 a2=4000 a3=1 items=0 ppid=1 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:16:29.633000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 10:16:29.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:27.771849 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:16:29.472085 systemd[1]: Queued start job for default target multi-user.target. May 15 10:16:27.772095 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 10:16:29.472097 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 10:16:27.772112 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 10:16:29.476722 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 15 10:16:27.772141 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 15 10:16:27.772151 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="skipped missing lower profile" missing profile=oem May 15 10:16:27.772177 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 15 10:16:27.772194 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 15 10:16:27.772405 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 15 10:16:27.772438 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 10:16:27.772450 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 10:16:27.773063 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 15 10:16:27.773095 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 15 10:16:27.773119 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.100: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.100 May 15 10:16:27.773134 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 15 10:16:27.773150 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.100: no such file or directory" path=/var/lib/torcx/store/3510.3.100 May 15 10:16:27.773163 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 15 10:16:29.212990 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:29Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:16:29.638829 systemd[1]: Started systemd-journald.service. 
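
The torcx-generator entries above walk a fixed list of store paths and log "store skipped" for the ones that do not exist, collecting .torcx.tgz archives from the rest (e.g. "docker:com.coreos.cl.torcx.tgz"). A minimal sketch of that lookup order, illustrative only and not the generator's actual code:

```python
import os

# Store paths exactly as reported in the generator's "common configuration parsed" entry.
STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.100",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.100",
    "/var/lib/torcx/store",
]

def scan_stores(paths):
    archives = []
    for path in paths:
        if not os.path.isdir(path):
            print(f"store skipped: open {path}: no such file or directory")
            continue
        for name in sorted(os.listdir(path)):
            if name.endswith(".torcx.tgz"):
                archives.append(os.path.join(path, name))
    return archives

print(scan_stores(STORE_PATHS))
```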
May 15 10:16:29.213259 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:29Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:16:29.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.213367 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:29Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:16:29.213578 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:29Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:16:29.213627 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:29Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 15 10:16:29.213687 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-15T10:16:29Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 15 10:16:29.639641 systemd[1]: Finished flatcar-tmpfiles.service. May 15 10:16:29.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.640731 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:16:29.640909 systemd[1]: Finished modprobe@drm.service. May 15 10:16:29.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.642063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:29.642222 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:29.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:29.643399 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 10:16:29.643576 systemd[1]: Finished modprobe@fuse.service. May 15 10:16:29.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.644673 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:29.644830 systemd[1]: Finished modprobe@loop.service. May 15 10:16:29.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.646035 systemd[1]: Finished systemd-modules-load.service. May 15 10:16:29.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.647246 systemd[1]: Finished systemd-network-generator.service. May 15 10:16:29.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.648541 systemd[1]: Finished systemd-remount-fs.service. May 15 10:16:29.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.649864 systemd[1]: Reached target network-pre.target. May 15 10:16:29.652008 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 10:16:29.654036 systemd[1]: Mounting sys-kernel-config.mount... May 15 10:16:29.654866 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 10:16:29.656288 systemd[1]: Starting systemd-hwdb-update.service... May 15 10:16:29.658429 systemd[1]: Starting systemd-journal-flush.service... May 15 10:16:29.659458 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:16:29.660414 systemd[1]: Starting systemd-random-seed.service... May 15 10:16:29.661373 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:16:29.662375 systemd[1]: Starting systemd-sysctl.service... May 15 10:16:29.665755 systemd[1]: Starting systemd-sysusers.service... May 15 10:16:29.669040 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 10:16:29.670067 systemd[1]: Mounted sys-kernel-config.mount. May 15 10:16:29.671491 systemd[1]: Finished systemd-udev-trigger.service. 
May 15 10:16:29.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.675972 systemd-journald[1006]: Time spent on flushing to /var/log/journal/3ffb512c55954583ac795fbba90e3e5e is 12.589ms for 994 entries. May 15 10:16:29.675972 systemd-journald[1006]: System Journal (/var/log/journal/3ffb512c55954583ac795fbba90e3e5e) is 8.0M, max 195.6M, 187.6M free. May 15 10:16:29.699660 systemd-journald[1006]: Received client request to flush runtime journal. May 15 10:16:29.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.673615 systemd[1]: Starting systemd-udev-settle.service... May 15 10:16:29.678375 systemd[1]: Finished systemd-random-seed.service. May 15 10:16:29.700122 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 10:16:29.679686 systemd[1]: Reached target first-boot-complete.target. May 15 10:16:29.693575 systemd[1]: Finished systemd-sysctl.service. May 15 10:16:29.698269 systemd[1]: Finished systemd-sysusers.service. May 15 10:16:29.700693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:16:29.703326 systemd[1]: Finished systemd-journal-flush.service. May 15 10:16:29.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:29.717880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:16:29.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.053440 systemd[1]: Finished systemd-hwdb-update.service. May 15 10:16:30.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.054000 audit: BPF prog-id=18 op=LOAD May 15 10:16:30.054000 audit: BPF prog-id=19 op=LOAD May 15 10:16:30.054000 audit: BPF prog-id=7 op=UNLOAD May 15 10:16:30.054000 audit: BPF prog-id=8 op=UNLOAD May 15 10:16:30.055782 systemd[1]: Starting systemd-udevd.service... May 15 10:16:30.072741 systemd-udevd[1038]: Using default interface naming scheme 'v252'. May 15 10:16:30.089208 systemd[1]: Started systemd-udevd.service. 
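
The journald entries above report flushing the runtime journal into /var/log/journal/<machine-id>. A sketch that reads the current boot's journal back and counts entries per syslog identifier, assuming the python-systemd bindings are installed on the host:

```python
from systemd import journal  # python-systemd bindings, assumed to be installed

# Tally persisted journal entries for this boot by their syslog identifier.
reader = journal.Reader()
reader.this_boot()

counts = {}
for entry in reader:
    ident = entry.get("SYSLOG_IDENTIFIER", "unknown")
    counts[ident] = counts.get(ident, 0) + 1

for ident, n in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{ident}: {n}")
```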
May 15 10:16:30.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.091000 audit: BPF prog-id=20 op=LOAD May 15 10:16:30.093106 systemd[1]: Starting systemd-networkd.service... May 15 10:16:30.114156 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 15 10:16:30.131000 audit: BPF prog-id=21 op=LOAD May 15 10:16:30.132000 audit: BPF prog-id=22 op=LOAD May 15 10:16:30.132000 audit: BPF prog-id=23 op=LOAD May 15 10:16:30.133348 systemd[1]: Starting systemd-userdbd.service... May 15 10:16:30.165296 systemd[1]: Started systemd-userdbd.service. May 15 10:16:30.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.178624 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:16:30.204917 systemd[1]: Finished systemd-udev-settle.service. May 15 10:16:30.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.207123 systemd[1]: Starting lvm2-activation-early.service... May 15 10:16:30.207285 systemd-networkd[1046]: lo: Link UP May 15 10:16:30.207289 systemd-networkd[1046]: lo: Gained carrier May 15 10:16:30.207887 systemd-networkd[1046]: Enumeration completed May 15 10:16:30.207991 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:16:30.208144 systemd[1]: Started systemd-networkd.service. May 15 10:16:30.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.209760 systemd-networkd[1046]: eth0: Link UP May 15 10:16:30.209769 systemd-networkd[1046]: eth0: Gained carrier May 15 10:16:30.222634 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:16:30.236632 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:16:30.243430 systemd[1]: Finished lvm2-activation-early.service. May 15 10:16:30.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.244612 systemd[1]: Reached target cryptsetup.target. May 15 10:16:30.246994 systemd[1]: Starting lvm2-activation.service... May 15 10:16:30.250925 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:16:30.276446 systemd[1]: Finished lvm2-activation.service. May 15 10:16:30.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.277504 systemd[1]: Reached target local-fs-pre.target. May 15 10:16:30.278375 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
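
The networkd entries above show eth0 gaining carrier and acquiring 10.0.0.74/16 over DHCPv4 from 10.0.0.1. A small userspace cross-check of that address state, assuming iproute2 with JSON output (`ip -j`) is available on the host:

```python
import json
import subprocess

# List IPv4 addresses currently configured on a device via iproute2's JSON output.
def ipv4_addresses(dev: str = "eth0"):
    out = subprocess.run(["ip", "-j", "addr", "show", "dev", dev],
                         capture_output=True, text=True, check=True).stdout
    return [f"{a['local']}/{a['prefixlen']}"
            for link in json.loads(out)
            for a in link.get("addr_info", [])
            if a.get("family") == "inet"]

print(ipv4_addresses())  # expected to include "10.0.0.74/16" on this machine
```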
May 15 10:16:30.278408 systemd[1]: Reached target local-fs.target. May 15 10:16:30.279246 systemd[1]: Reached target machines.target. May 15 10:16:30.281435 systemd[1]: Starting ldconfig.service... May 15 10:16:30.282528 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:16:30.282598 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:30.283987 systemd[1]: Starting systemd-boot-update.service... May 15 10:16:30.286180 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 10:16:30.288792 systemd[1]: Starting systemd-machine-id-commit.service... May 15 10:16:30.291175 systemd[1]: Starting systemd-sysext.service... May 15 10:16:30.295129 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1074 (bootctl) May 15 10:16:30.301522 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 10:16:30.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.308023 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 10:16:30.321335 systemd[1]: Unmounting usr-share-oem.mount... May 15 10:16:30.362921 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 10:16:30.363109 systemd[1]: Unmounted usr-share-oem.mount. May 15 10:16:30.370443 systemd[1]: Finished systemd-machine-id-commit.service. May 15 10:16:30.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.375502 kernel: loop0: detected capacity change from 0 to 201592 May 15 10:16:30.386508 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 10:16:30.387129 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) May 15 10:16:30.387129 systemd-fsck[1081]: /dev/vda1: 236 files, 117182/258078 clusters May 15 10:16:30.390309 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 10:16:30.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.401606 kernel: loop1: detected capacity change from 0 to 201592 May 15 10:16:30.405844 (sd-sysext)[1087]: Using extensions 'kubernetes'. May 15 10:16:30.406205 (sd-sysext)[1087]: Merged extensions into '/usr'. May 15 10:16:30.430404 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:30.431813 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:30.433769 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:30.435997 systemd[1]: Starting modprobe@loop.service... May 15 10:16:30.436923 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:16:30.437074 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
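
The (sd-sysext) entries above report merging the 'kubernetes' extension into /usr; the extension is staged as a symlink under /etc/extensions, which Ignition created earlier in this log. A quick sketch that lists what is staged and where each link points:

```python
import os

# Show staged sysext images under /etc/extensions and their symlink targets.
ext_dir = "/etc/extensions"
if os.path.isdir(ext_dir):
    for name in sorted(os.listdir(ext_dir)):
        path = os.path.join(ext_dir, name)
        target = os.readlink(path) if os.path.islink(path) else "(regular file)"
        print(f"{name} -> {target}")
```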
May 15 10:16:30.438061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:16:30.438236 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:30.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.439695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:30.439837 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:30.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.441317 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:30.441441 systemd[1]: Finished modprobe@loop.service. May 15 10:16:30.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.443031 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:16:30.443144 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:16:30.508295 ldconfig[1073]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 10:16:30.512176 systemd[1]: Finished ldconfig.service. May 15 10:16:30.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.619873 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 10:16:30.621676 systemd[1]: Mounting boot.mount... May 15 10:16:30.623500 systemd[1]: Mounting usr-share-oem.mount... May 15 10:16:30.629515 systemd[1]: Mounted boot.mount. May 15 10:16:30.631653 systemd[1]: Mounted usr-share-oem.mount. May 15 10:16:30.633632 systemd[1]: Finished systemd-sysext.service. May 15 10:16:30.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.636082 systemd[1]: Starting ensure-sysext.service... May 15 10:16:30.637748 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 10:16:30.638874 systemd[1]: Finished systemd-boot-update.service. 
May 15 10:16:30.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.643049 systemd[1]: Reloading. May 15 10:16:30.647569 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 10:16:30.648612 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 10:16:30.649945 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 10:16:30.681510 /usr/lib/systemd/system-generators/torcx-generator[1118]: time="2025-05-15T10:16:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:16:30.681859 /usr/lib/systemd/system-generators/torcx-generator[1118]: time="2025-05-15T10:16:30Z" level=info msg="torcx already run" May 15 10:16:30.734882 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:16:30.734903 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:16:30.750464 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:16:30.791000 audit: BPF prog-id=24 op=LOAD May 15 10:16:30.791000 audit: BPF prog-id=21 op=UNLOAD May 15 10:16:30.791000 audit: BPF prog-id=25 op=LOAD May 15 10:16:30.791000 audit: BPF prog-id=26 op=LOAD May 15 10:16:30.791000 audit: BPF prog-id=22 op=UNLOAD May 15 10:16:30.791000 audit: BPF prog-id=23 op=UNLOAD May 15 10:16:30.792000 audit: BPF prog-id=27 op=LOAD May 15 10:16:30.792000 audit: BPF prog-id=28 op=LOAD May 15 10:16:30.792000 audit: BPF prog-id=18 op=UNLOAD May 15 10:16:30.792000 audit: BPF prog-id=19 op=UNLOAD May 15 10:16:30.793000 audit: BPF prog-id=29 op=LOAD May 15 10:16:30.793000 audit: BPF prog-id=15 op=UNLOAD May 15 10:16:30.793000 audit: BPF prog-id=30 op=LOAD May 15 10:16:30.793000 audit: BPF prog-id=31 op=LOAD May 15 10:16:30.793000 audit: BPF prog-id=16 op=UNLOAD May 15 10:16:30.793000 audit: BPF prog-id=17 op=UNLOAD May 15 10:16:30.794000 audit: BPF prog-id=32 op=LOAD May 15 10:16:30.794000 audit: BPF prog-id=20 op=UNLOAD May 15 10:16:30.796853 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 10:16:30.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.801193 systemd[1]: Starting audit-rules.service... May 15 10:16:30.802925 systemd[1]: Starting clean-ca-certificates.service... May 15 10:16:30.804977 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 10:16:30.808000 audit: BPF prog-id=33 op=LOAD May 15 10:16:30.809973 systemd[1]: Starting systemd-resolved.service... 
May 15 10:16:30.813000 audit: BPF prog-id=34 op=LOAD May 15 10:16:30.814554 systemd[1]: Starting systemd-timesyncd.service... May 15 10:16:30.816680 systemd[1]: Starting systemd-update-utmp.service... May 15 10:16:30.822243 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:30.822000 audit[1165]: SYSTEM_BOOT pid=1165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 15 10:16:30.823405 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:30.826467 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:30.828317 systemd[1]: Starting modprobe@loop.service... May 15 10:16:30.829163 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:16:30.829283 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:30.830141 systemd[1]: Finished clean-ca-certificates.service. May 15 10:16:30.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.831450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:16:30.831582 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:30.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.832811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:30.832914 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:30.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.834182 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:30.834296 systemd[1]: Finished modprobe@loop.service. May 15 10:16:30.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.838011 systemd[1]: Finished systemd-journal-catalog-update.service. 
May 15 10:16:30.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.839600 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:30.840785 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:30.842613 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:30.844461 systemd[1]: Starting modprobe@loop.service... May 15 10:16:30.845320 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:16:30.845508 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:30.846903 systemd[1]: Starting systemd-update-done.service... May 15 10:16:30.847697 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:16:30.848684 systemd[1]: Finished systemd-update-utmp.service. May 15 10:16:30.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.849907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:16:30.850017 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:30.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.851201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:30.851337 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:30.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.852723 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:30.852837 systemd[1]: Finished modprobe@loop.service. May 15 10:16:30.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.854337 systemd[1]: Finished systemd-update-done.service. 
May 15 10:16:30.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.858330 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:30.859558 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:30.861440 systemd[1]: Starting modprobe@drm.service... May 15 10:16:30.863387 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:30.865469 systemd[1]: Starting modprobe@loop.service... May 15 10:16:30.866313 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:16:30.866508 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:30.867822 systemd[1]: Starting systemd-networkd-wait-online.service... May 15 10:16:30.868823 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:16:30.869848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:16:30.869958 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:30.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:30.871827 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:16:30.871935 systemd[1]: Finished modprobe@drm.service. May 15 10:16:30.873027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:30.873126 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:30.873000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 15 10:16:30.873000 audit[1184]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffff8fa300 a2=420 a3=0 items=0 ppid=1154 pid=1184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:16:30.873000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 15 10:16:30.875603 augenrules[1184]: No rules May 15 10:16:30.874385 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:30.874493 systemd[1]: Finished modprobe@loop.service. May 15 10:16:30.875698 systemd[1]: Finished audit-rules.service. 
May 15 10:16:30.878070 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:16:30.878181 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:16:30.879788 systemd[1]: Finished ensure-sysext.service. May 15 10:16:30.883889 systemd-resolved[1158]: Positive Trust Anchors: May 15 10:16:30.884120 systemd-resolved[1158]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 10:16:30.884199 systemd-resolved[1158]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 10:16:30.886900 systemd[1]: Started systemd-timesyncd.service. May 15 10:16:30.424788 systemd-timesyncd[1162]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 10:16:30.441613 systemd-journald[1006]: Time jumped backwards, rotating. May 15 10:16:30.424836 systemd-timesyncd[1162]: Initial clock synchronization to Thu 2025-05-15 10:16:30.424722 UTC. May 15 10:16:30.426017 systemd[1]: Reached target time-set.target. May 15 10:16:30.441334 systemd-resolved[1158]: Defaulting to hostname 'linux'. May 15 10:16:30.443201 systemd[1]: Started systemd-resolved.service. May 15 10:16:30.444026 systemd[1]: Reached target network.target. May 15 10:16:30.444774 systemd[1]: Reached target nss-lookup.target. May 15 10:16:30.445529 systemd[1]: Reached target sysinit.target. May 15 10:16:30.446376 systemd[1]: Started motdgen.path. May 15 10:16:30.447071 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 15 10:16:30.448247 systemd[1]: Started logrotate.timer. May 15 10:16:30.449038 systemd[1]: Started mdadm.timer. May 15 10:16:30.449678 systemd[1]: Started systemd-tmpfiles-clean.timer. May 15 10:16:30.450484 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 10:16:30.450523 systemd[1]: Reached target paths.target. May 15 10:16:30.451248 systemd[1]: Reached target timers.target. May 15 10:16:30.452263 systemd[1]: Listening on dbus.socket. May 15 10:16:30.453949 systemd[1]: Starting docker.socket... May 15 10:16:30.457298 systemd[1]: Listening on sshd.socket. May 15 10:16:30.458127 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:30.458546 systemd[1]: Listening on docker.socket. May 15 10:16:30.459373 systemd[1]: Reached target sockets.target. May 15 10:16:30.460164 systemd[1]: Reached target basic.target. May 15 10:16:30.460919 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:16:30.460948 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:16:30.461920 systemd[1]: Starting containerd.service... May 15 10:16:30.463607 systemd[1]: Starting dbus.service... May 15 10:16:30.465215 systemd[1]: Starting enable-oem-cloudinit.service... 
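[Editor's note] Right after systemd-timesyncd starts, the journal timestamps jump backwards (10:16:30.886900 to 10:16:30.424788) and journald reports "Time jumped backwards, rotating": the initial synchronization against 10.0.0.1 stepped the wall clock back by roughly half a second. A minimal sketch of that arithmetic, using only the two timestamps from the entries above:

```python
from datetime import datetime

# Timestamps copied from the journal entries above: the wall clock just before
# timesyncd's initial synchronization, and the stepped clock reported
# immediately afterwards ("Time jumped backwards, rotating").
before_step = datetime.strptime("10:16:30.886900", "%H:%M:%S.%f")
after_step = datetime.strptime("10:16:30.424788", "%H:%M:%S.%f")

step = (before_step - after_step).total_seconds()
print(f"clock stepped back by about {step:.3f} s")  # ~0.462 s
```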
May 15 10:16:30.467038 systemd[1]: Starting extend-filesystems.service... May 15 10:16:30.467861 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 15 10:16:30.469054 systemd[1]: Starting motdgen.service... May 15 10:16:30.473430 systemd[1]: Starting prepare-helm.service... May 15 10:16:30.475285 systemd[1]: Starting ssh-key-proc-cmdline.service... May 15 10:16:30.476772 jq[1197]: false May 15 10:16:30.477079 systemd[1]: Starting sshd-keygen.service... May 15 10:16:30.480215 systemd[1]: Starting systemd-logind.service... May 15 10:16:30.481130 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:30.481234 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 10:16:30.483454 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 10:16:30.484747 systemd[1]: Starting update-engine.service... May 15 10:16:30.486944 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 15 10:16:30.489933 jq[1213]: true May 15 10:16:30.489928 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 10:16:30.490095 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 15 10:16:30.491395 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 10:16:30.491577 systemd[1]: Finished ssh-key-proc-cmdline.service. May 15 10:16:30.502884 tar[1218]: linux-arm64/LICENSE May 15 10:16:30.503750 extend-filesystems[1198]: Found loop1 May 15 10:16:30.503750 extend-filesystems[1198]: Found vda May 15 10:16:30.503750 extend-filesystems[1198]: Found vda1 May 15 10:16:30.503750 extend-filesystems[1198]: Found vda2 May 15 10:16:30.503750 extend-filesystems[1198]: Found vda3 May 15 10:16:30.503750 extend-filesystems[1198]: Found usr May 15 10:16:30.503750 extend-filesystems[1198]: Found vda4 May 15 10:16:30.503750 extend-filesystems[1198]: Found vda6 May 15 10:16:30.503750 extend-filesystems[1198]: Found vda7 May 15 10:16:30.503750 extend-filesystems[1198]: Found vda9 May 15 10:16:30.503750 extend-filesystems[1198]: Checking size of /dev/vda9 May 15 10:16:30.522086 jq[1219]: true May 15 10:16:30.521055 systemd[1]: motdgen.service: Deactivated successfully. May 15 10:16:30.522248 tar[1218]: linux-arm64/helm May 15 10:16:30.521235 systemd[1]: Finished motdgen.service. May 15 10:16:30.526016 dbus-daemon[1196]: [system] SELinux support is enabled May 15 10:16:30.526180 systemd[1]: Started dbus.service. May 15 10:16:30.529075 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 10:16:30.529109 systemd[1]: Reached target system-config.target. May 15 10:16:30.530008 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 10:16:30.530032 systemd[1]: Reached target user-config.target. 
May 15 10:16:30.543799 extend-filesystems[1198]: Resized partition /dev/vda9 May 15 10:16:30.547803 extend-filesystems[1246]: resize2fs 1.46.5 (30-Dec-2021) May 15 10:16:30.553950 update_engine[1211]: I0515 10:16:30.553524 1211 main.cc:92] Flatcar Update Engine starting May 15 10:16:30.559723 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 10:16:30.559903 systemd[1]: Started update-engine.service. May 15 10:16:30.559984 update_engine[1211]: I0515 10:16:30.559937 1211 update_check_scheduler.cc:74] Next update check in 7m37s May 15 10:16:30.563756 systemd[1]: Started locksmithd.service. May 15 10:16:30.577672 systemd-logind[1208]: Watching system buttons on /dev/input/event0 (Power Button) May 15 10:16:30.577911 systemd-logind[1208]: New seat seat0. May 15 10:16:30.579180 bash[1243]: Updated "/home/core/.ssh/authorized_keys" May 15 10:16:30.580090 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 15 10:16:30.581480 systemd[1]: Started systemd-logind.service. May 15 10:16:30.583716 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 10:16:30.597069 extend-filesystems[1246]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 10:16:30.597069 extend-filesystems[1246]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 10:16:30.597069 extend-filesystems[1246]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 10:16:30.601938 extend-filesystems[1198]: Resized filesystem in /dev/vda9 May 15 10:16:30.599607 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 10:16:30.599786 systemd[1]: Finished extend-filesystems.service. May 15 10:16:30.604813 env[1220]: time="2025-05-15T10:16:30.604737438Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 15 10:16:30.622278 env[1220]: time="2025-05-15T10:16:30.622239038Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 10:16:30.622645 env[1220]: time="2025-05-15T10:16:30.622623158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 10:16:30.628801 env[1220]: time="2025-05-15T10:16:30.628721918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 10:16:30.628801 env[1220]: time="2025-05-15T10:16:30.628751878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 10:16:30.629000 env[1220]: time="2025-05-15T10:16:30.628976998Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:16:30.629000 env[1220]: time="2025-05-15T10:16:30.628998478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 15 10:16:30.629067 env[1220]: time="2025-05-15T10:16:30.629013878Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 15 10:16:30.629067 env[1220]: time="2025-05-15T10:16:30.629023518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 10:16:30.629111 env[1220]: time="2025-05-15T10:16:30.629090518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 10:16:30.629384 env[1220]: time="2025-05-15T10:16:30.629350198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 10:16:30.629504 env[1220]: time="2025-05-15T10:16:30.629475758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:16:30.629504 env[1220]: time="2025-05-15T10:16:30.629494558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 10:16:30.629570 env[1220]: time="2025-05-15T10:16:30.629557118Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 15 10:16:30.629607 env[1220]: time="2025-05-15T10:16:30.629569998Z" level=info msg="metadata content store policy set" policy=shared May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635312358Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635347438Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635361878Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635391678Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635405278Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635419278Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635431958Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635788398Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635809638Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635822438Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635839958Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635853318Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.635965758Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 10:16:30.637720 env[1220]: time="2025-05-15T10:16:30.636036718Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636243158Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636265078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636281318Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636378198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636391638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636402918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636413358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636426238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636437678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636447598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636461438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636474198Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636601638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636620318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 10:16:30.637989 env[1220]: time="2025-05-15T10:16:30.636631718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 10:16:30.638320 env[1220]: time="2025-05-15T10:16:30.636643198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 10:16:30.638320 env[1220]: time="2025-05-15T10:16:30.636655758Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 15 10:16:30.638320 env[1220]: time="2025-05-15T10:16:30.636665998Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 10:16:30.638320 env[1220]: time="2025-05-15T10:16:30.636682238Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 15 10:16:30.638320 env[1220]: time="2025-05-15T10:16:30.636731238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 10:16:30.638418 env[1220]: time="2025-05-15T10:16:30.636917678Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 10:16:30.638418 env[1220]: time="2025-05-15T10:16:30.636970958Z" level=info msg="Connect containerd service" May 15 10:16:30.638418 env[1220]: time="2025-05-15T10:16:30.637003798Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 10:16:30.640648 env[1220]: time="2025-05-15T10:16:30.638890278Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 10:16:30.640648 env[1220]: time="2025-05-15T10:16:30.639374918Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 15 10:16:30.640648 env[1220]: time="2025-05-15T10:16:30.639410878Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 10:16:30.640648 env[1220]: time="2025-05-15T10:16:30.639459478Z" level=info msg="containerd successfully booted in 0.035788s" May 15 10:16:30.639550 systemd[1]: Started containerd.service. May 15 10:16:30.641276 env[1220]: time="2025-05-15T10:16:30.641239438Z" level=info msg="Start subscribing containerd event" May 15 10:16:30.641377 env[1220]: time="2025-05-15T10:16:30.641362318Z" level=info msg="Start recovering state" May 15 10:16:30.641491 env[1220]: time="2025-05-15T10:16:30.641477918Z" level=info msg="Start event monitor" May 15 10:16:30.641569 env[1220]: time="2025-05-15T10:16:30.641555038Z" level=info msg="Start snapshots syncer" May 15 10:16:30.641627 env[1220]: time="2025-05-15T10:16:30.641611558Z" level=info msg="Start cni network conf syncer for default" May 15 10:16:30.641677 env[1220]: time="2025-05-15T10:16:30.641664398Z" level=info msg="Start streaming server" May 15 10:16:30.657169 locksmithd[1248]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 10:16:30.934019 tar[1218]: linux-arm64/README.md May 15 10:16:30.938352 systemd[1]: Finished prepare-helm.service. May 15 10:16:31.264898 systemd-networkd[1046]: eth0: Gained IPv6LL May 15 10:16:31.266542 systemd[1]: Finished systemd-networkd-wait-online.service. May 15 10:16:31.267943 systemd[1]: Reached target network-online.target. May 15 10:16:31.270323 systemd[1]: Starting kubelet.service... May 15 10:16:31.704883 sshd_keygen[1212]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 10:16:31.724196 systemd[1]: Finished sshd-keygen.service. May 15 10:16:31.726574 systemd[1]: Starting issuegen.service... May 15 10:16:31.731377 systemd[1]: issuegen.service: Deactivated successfully. May 15 10:16:31.731553 systemd[1]: Finished issuegen.service. May 15 10:16:31.733890 systemd[1]: Starting systemd-user-sessions.service... May 15 10:16:31.740412 systemd[1]: Finished systemd-user-sessions.service. May 15 10:16:31.742778 systemd[1]: Started getty@tty1.service. May 15 10:16:31.744905 systemd[1]: Started serial-getty@ttyAMA0.service. May 15 10:16:31.746150 systemd[1]: Reached target getty.target. May 15 10:16:31.840184 systemd[1]: Started kubelet.service. May 15 10:16:31.841573 systemd[1]: Reached target multi-user.target. May 15 10:16:31.843940 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 15 10:16:31.851275 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 15 10:16:31.851437 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 15 10:16:31.852581 systemd[1]: Startup finished in 580ms (kernel) + 4.039s (initrd) + 4.702s (userspace) = 9.322s. May 15 10:16:32.312560 kubelet[1277]: E0515 10:16:32.312495 1277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:16:32.314438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:16:32.314570 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:16:36.244581 systemd[1]: Created slice system-sshd.slice. 
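[Editor's note] For scale, the EXT4 online resize logged earlier ("EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks", with resize2fs reporting 4k blocks) grows the root filesystem from roughly 2.1 GiB to roughly 7.1 GiB. A small sketch of the arithmetic, using only the block counts from those entries:

```python
BLOCK_SIZE = 4096       # resize2fs reports the filesystem uses 4k blocks
OLD_BLOCKS = 553_472    # block count before the online resize
NEW_BLOCKS = 1_864_699  # block count after "resized filesystem to 1864699"

def gib(blocks: int) -> float:
    """Convert a count of 4k blocks to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")  # ~2.11 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")  # ~7.11 GiB
```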
May 15 10:16:36.245630 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:53714.service. May 15 10:16:36.293453 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 53714 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:36.295488 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:36.307353 systemd[1]: Created slice user-500.slice. May 15 10:16:36.308328 systemd[1]: Starting user-runtime-dir@500.service... May 15 10:16:36.311742 systemd-logind[1208]: New session 1 of user core. May 15 10:16:36.316309 systemd[1]: Finished user-runtime-dir@500.service. May 15 10:16:36.317532 systemd[1]: Starting user@500.service... May 15 10:16:36.320414 (systemd)[1289]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:36.384527 systemd[1289]: Queued start job for default target default.target. May 15 10:16:36.385020 systemd[1289]: Reached target paths.target. May 15 10:16:36.385052 systemd[1289]: Reached target sockets.target. May 15 10:16:36.385063 systemd[1289]: Reached target timers.target. May 15 10:16:36.385073 systemd[1289]: Reached target basic.target. May 15 10:16:36.385110 systemd[1289]: Reached target default.target. May 15 10:16:36.385134 systemd[1289]: Startup finished in 58ms. May 15 10:16:36.385207 systemd[1]: Started user@500.service. May 15 10:16:36.386156 systemd[1]: Started session-1.scope. May 15 10:16:36.436045 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:53722.service. May 15 10:16:36.477700 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 53722 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:36.479171 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:36.482648 systemd-logind[1208]: New session 2 of user core. May 15 10:16:36.483759 systemd[1]: Started session-2.scope. May 15 10:16:36.536009 sshd[1298]: pam_unix(sshd:session): session closed for user core May 15 10:16:36.539707 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:53728.service. May 15 10:16:36.540183 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:53722.service: Deactivated successfully. May 15 10:16:36.540858 systemd[1]: session-2.scope: Deactivated successfully. May 15 10:16:36.541304 systemd-logind[1208]: Session 2 logged out. Waiting for processes to exit. May 15 10:16:36.541972 systemd-logind[1208]: Removed session 2. May 15 10:16:36.580875 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 53728 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:36.581960 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:36.585102 systemd-logind[1208]: New session 3 of user core. May 15 10:16:36.585869 systemd[1]: Started session-3.scope. May 15 10:16:36.633958 sshd[1303]: pam_unix(sshd:session): session closed for user core May 15 10:16:36.637291 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:53728.service: Deactivated successfully. May 15 10:16:36.637843 systemd[1]: session-3.scope: Deactivated successfully. May 15 10:16:36.638290 systemd-logind[1208]: Session 3 logged out. Waiting for processes to exit. May 15 10:16:36.639252 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:53736.service. May 15 10:16:36.639871 systemd-logind[1208]: Removed session 3. 
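[Editor's note] The sshd entries above identify the client key only by its fingerprint ("RSA SHA256:I/C30/..."). OpenSSH derives that string by SHA-256-hashing the raw public-key blob and base64-encoding the digest with the padding stripped. A minimal sketch, where the key bytes are a hypothetical placeholder since the log never contains the key itself:

```python
import base64
import hashlib

def openssh_fingerprint(pubkey_blob: bytes) -> str:
    """Return an OpenSSH-style SHA256 fingerprint for a raw public-key blob
    (the base64-decoded key field of an authorized_keys entry)."""
    digest = hashlib.sha256(pubkey_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical example input only; not the key from the log above.
example_blob = b"ssh-rsa-example-public-key-bytes"
print(openssh_fingerprint(example_blob))
```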
May 15 10:16:36.679802 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 53736 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:36.680903 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:36.684297 systemd-logind[1208]: New session 4 of user core. May 15 10:16:36.685060 systemd[1]: Started session-4.scope. May 15 10:16:36.737192 sshd[1311]: pam_unix(sshd:session): session closed for user core May 15 10:16:36.740347 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:53736.service: Deactivated successfully. May 15 10:16:36.740935 systemd[1]: session-4.scope: Deactivated successfully. May 15 10:16:36.741398 systemd-logind[1208]: Session 4 logged out. Waiting for processes to exit. May 15 10:16:36.742422 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:53744.service. May 15 10:16:36.743069 systemd-logind[1208]: Removed session 4. May 15 10:16:36.783203 sshd[1317]: Accepted publickey for core from 10.0.0.1 port 53744 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:36.784335 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:36.787710 systemd-logind[1208]: New session 5 of user core. May 15 10:16:36.789089 systemd[1]: Started session-5.scope. May 15 10:16:36.845644 sudo[1320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 10:16:36.845910 sudo[1320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 10:16:36.904582 systemd[1]: Starting docker.service... May 15 10:16:37.021978 env[1332]: time="2025-05-15T10:16:37.021827398Z" level=info msg="Starting up" May 15 10:16:37.023397 env[1332]: time="2025-05-15T10:16:37.023331158Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 10:16:37.023397 env[1332]: time="2025-05-15T10:16:37.023353638Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 10:16:37.023397 env[1332]: time="2025-05-15T10:16:37.023372878Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 10:16:37.023397 env[1332]: time="2025-05-15T10:16:37.023383878Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 10:16:37.025524 env[1332]: time="2025-05-15T10:16:37.025491078Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 10:16:37.025524 env[1332]: time="2025-05-15T10:16:37.025519198Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 10:16:37.025599 env[1332]: time="2025-05-15T10:16:37.025533118Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 10:16:37.025599 env[1332]: time="2025-05-15T10:16:37.025544718Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 10:16:37.125417 env[1332]: time="2025-05-15T10:16:37.125312838Z" level=info msg="Loading containers: start." May 15 10:16:37.240721 kernel: Initializing XFRM netlink socket May 15 10:16:37.264381 env[1332]: time="2025-05-15T10:16:37.264334678Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 15 10:16:37.317782 systemd-networkd[1046]: docker0: Link UP May 15 10:16:37.337931 env[1332]: time="2025-05-15T10:16:37.337889878Z" level=info msg="Loading containers: done." 
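[Editor's note] dockerd's note above that the default bridge is assigned 172.17.0.0/16 (overridable with --bip) can be sanity-checked with the standard library. A short sketch; 172.17.0.2 is just a hypothetical container address used for the membership test:

```python
import ipaddress

# The default bridge network reported by dockerd in the log above.
bridge = ipaddress.ip_network("172.17.0.0/16")

print(bridge.netmask)            # 255.255.0.0
print(bridge.num_addresses - 2)  # usable host addresses: 65534

# Hypothetical container address; containers on the default bridge are
# assigned out of this range (the gateway is typically 172.17.0.1).
print(ipaddress.ip_address("172.17.0.2") in bridge)  # True
```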
May 15 10:16:37.357089 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2818667113-merged.mount: Deactivated successfully. May 15 10:16:37.358195 env[1332]: time="2025-05-15T10:16:37.358146758Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 10:16:37.358329 env[1332]: time="2025-05-15T10:16:37.358312838Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 15 10:16:37.358424 env[1332]: time="2025-05-15T10:16:37.358410238Z" level=info msg="Daemon has completed initialization" May 15 10:16:37.371533 systemd[1]: Started docker.service. May 15 10:16:37.377460 env[1332]: time="2025-05-15T10:16:37.377370398Z" level=info msg="API listen on /run/docker.sock" May 15 10:16:38.066990 env[1220]: time="2025-05-15T10:16:38.066929758Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 10:16:38.920346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462270547.mount: Deactivated successfully. May 15 10:16:40.243815 env[1220]: time="2025-05-15T10:16:40.243762518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:40.245177 env[1220]: time="2025-05-15T10:16:40.245151598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:40.246816 env[1220]: time="2025-05-15T10:16:40.246790798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:40.249213 env[1220]: time="2025-05-15T10:16:40.249177198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:40.249969 env[1220]: time="2025-05-15T10:16:40.249938198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 15 10:16:40.251152 env[1220]: time="2025-05-15T10:16:40.251119078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 10:16:41.924203 env[1220]: time="2025-05-15T10:16:41.924141878Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:41.925452 env[1220]: time="2025-05-15T10:16:41.925422718Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:41.927545 env[1220]: time="2025-05-15T10:16:41.927503358Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:41.929002 env[1220]: time="2025-05-15T10:16:41.928974158Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:41.929703 env[1220]: time="2025-05-15T10:16:41.929657038Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 15 10:16:41.930158 env[1220]: time="2025-05-15T10:16:41.930124598Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 10:16:42.526189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 10:16:42.526373 systemd[1]: Stopped kubelet.service. May 15 10:16:42.527808 systemd[1]: Starting kubelet.service... May 15 10:16:42.628460 systemd[1]: Started kubelet.service. May 15 10:16:42.659974 kubelet[1466]: E0515 10:16:42.659925 1466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:16:42.662587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:16:42.662729 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:16:43.346674 env[1220]: time="2025-05-15T10:16:43.346590598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:43.348404 env[1220]: time="2025-05-15T10:16:43.348369318Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:43.350716 env[1220]: time="2025-05-15T10:16:43.350676518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:43.355534 env[1220]: time="2025-05-15T10:16:43.355500198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:43.356317 env[1220]: time="2025-05-15T10:16:43.356272278Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 15 10:16:43.356825 env[1220]: time="2025-05-15T10:16:43.356783958Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 10:16:44.428292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822477472.mount: Deactivated successfully. 
May 15 10:16:44.885217 env[1220]: time="2025-05-15T10:16:44.885140038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:44.886513 env[1220]: time="2025-05-15T10:16:44.886474558Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:44.887737 env[1220]: time="2025-05-15T10:16:44.887711878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:44.888984 env[1220]: time="2025-05-15T10:16:44.888947278Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:44.889496 env[1220]: time="2025-05-15T10:16:44.889455438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 15 10:16:44.889983 env[1220]: time="2025-05-15T10:16:44.889958598Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 10:16:45.441299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653775363.mount: Deactivated successfully. May 15 10:16:46.481652 env[1220]: time="2025-05-15T10:16:46.481560158Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:46.483386 env[1220]: time="2025-05-15T10:16:46.483343878Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:46.485884 env[1220]: time="2025-05-15T10:16:46.485855438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:46.488202 env[1220]: time="2025-05-15T10:16:46.488159518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:46.489134 env[1220]: time="2025-05-15T10:16:46.489096838Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 15 10:16:46.490755 env[1220]: time="2025-05-15T10:16:46.490717598Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 10:16:47.070300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221873881.mount: Deactivated successfully. 
May 15 10:16:47.076323 env[1220]: time="2025-05-15T10:16:47.076284678Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:47.078654 env[1220]: time="2025-05-15T10:16:47.078626678Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:47.079816 env[1220]: time="2025-05-15T10:16:47.079791038Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:47.082000 env[1220]: time="2025-05-15T10:16:47.081972718Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:47.082707 env[1220]: time="2025-05-15T10:16:47.082667318Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 10:16:47.084175 env[1220]: time="2025-05-15T10:16:47.084153078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 10:16:47.609004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094013406.mount: Deactivated successfully. May 15 10:16:50.182918 env[1220]: time="2025-05-15T10:16:50.182858318Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:50.184374 env[1220]: time="2025-05-15T10:16:50.184342238Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:50.186450 env[1220]: time="2025-05-15T10:16:50.186422318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:50.190747 env[1220]: time="2025-05-15T10:16:50.190713598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:50.191836 env[1220]: time="2025-05-15T10:16:50.191805918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 15 10:16:52.776201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 10:16:52.776372 systemd[1]: Stopped kubelet.service. May 15 10:16:52.777792 systemd[1]: Starting kubelet.service... May 15 10:16:52.869452 systemd[1]: Started kubelet.service. 
May 15 10:16:52.902035 kubelet[1499]: E0515 10:16:52.901985 1499 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:16:52.904215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:16:52.904338 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:16:54.766423 systemd[1]: Stopped kubelet.service. May 15 10:16:54.768454 systemd[1]: Starting kubelet.service... May 15 10:16:54.788492 systemd[1]: Reloading. May 15 10:16:54.841098 /usr/lib/systemd/system-generators/torcx-generator[1535]: time="2025-05-15T10:16:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:16:54.841132 /usr/lib/systemd/system-generators/torcx-generator[1535]: time="2025-05-15T10:16:54Z" level=info msg="torcx already run" May 15 10:16:54.991186 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:16:54.991208 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:16:55.007124 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:16:55.072527 systemd[1]: Started kubelet.service. May 15 10:16:55.073822 systemd[1]: Stopping kubelet.service... May 15 10:16:55.074076 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:16:55.074254 systemd[1]: Stopped kubelet.service. May 15 10:16:55.075895 systemd[1]: Starting kubelet.service... May 15 10:16:55.161429 systemd[1]: Started kubelet.service. May 15 10:16:55.199373 kubelet[1578]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:16:55.199760 kubelet[1578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 10:16:55.200149 kubelet[1578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 10:16:55.200149 kubelet[1578]: I0515 10:16:55.199881 1578 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:16:56.622644 kubelet[1578]: I0515 10:16:56.622598 1578 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 10:16:56.622988 kubelet[1578]: I0515 10:16:56.622973 1578 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:16:56.623511 kubelet[1578]: I0515 10:16:56.623488 1578 server.go:954] "Client rotation is on, will bootstrap in background" May 15 10:16:56.658479 kubelet[1578]: E0515 10:16:56.658434 1578 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" May 15 10:16:56.658614 kubelet[1578]: I0515 10:16:56.658550 1578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:16:56.666273 kubelet[1578]: E0515 10:16:56.666235 1578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 10:16:56.666492 kubelet[1578]: I0515 10:16:56.666477 1578 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 10:16:56.669345 kubelet[1578]: I0515 10:16:56.669319 1578 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 10:16:56.670097 kubelet[1578]: I0515 10:16:56.670058 1578 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:16:56.670306 kubelet[1578]: I0515 10:16:56.670104 1578 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 10:16:56.670397 kubelet[1578]: I0515 10:16:56.670383 1578 topology_manager.go:138] "Creating topology manager with none policy" May 15 10:16:56.670397 kubelet[1578]: I0515 10:16:56.670393 1578 container_manager_linux.go:304] "Creating device plugin manager" May 15 10:16:56.670611 kubelet[1578]: I0515 10:16:56.670596 1578 state_mem.go:36] "Initialized new in-memory state store" May 15 10:16:56.675286 kubelet[1578]: I0515 10:16:56.675260 1578 kubelet.go:446] "Attempting to sync node with API server" May 15 10:16:56.675389 kubelet[1578]: I0515 10:16:56.675374 1578 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 10:16:56.675432 kubelet[1578]: I0515 10:16:56.675400 1578 kubelet.go:352] "Adding apiserver pod source" May 15 10:16:56.675432 kubelet[1578]: I0515 10:16:56.675412 1578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:16:56.694894 kubelet[1578]: W0515 10:16:56.694830 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 15 10:16:56.695029 kubelet[1578]: E0515 10:16:56.694910 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" May 15 10:16:56.695244 kubelet[1578]: W0515 10:16:56.695195 1578 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 15 10:16:56.695289 kubelet[1578]: E0515 10:16:56.695250 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" May 15 10:16:56.698685 kubelet[1578]: I0515 10:16:56.698657 1578 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:16:56.699503 kubelet[1578]: I0515 10:16:56.699453 1578 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:16:56.699663 kubelet[1578]: W0515 10:16:56.699643 1578 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 10:16:56.700614 kubelet[1578]: I0515 10:16:56.700588 1578 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 10:16:56.700836 kubelet[1578]: I0515 10:16:56.700637 1578 server.go:1287] "Started kubelet" May 15 10:16:56.703201 kubelet[1578]: I0515 10:16:56.703157 1578 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 10:16:56.707097 kubelet[1578]: I0515 10:16:56.707067 1578 server.go:490] "Adding debug handlers to kubelet server" May 15 10:16:56.709329 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 15 10:16:56.710805 kubelet[1578]: I0515 10:16:56.710678 1578 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 10:16:56.710952 kubelet[1578]: I0515 10:16:56.710927 1578 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 10:16:56.711178 kubelet[1578]: I0515 10:16:56.711076 1578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 10:16:56.715567 kubelet[1578]: I0515 10:16:56.714249 1578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 10:16:56.715893 kubelet[1578]: E0515 10:16:56.715857 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:16:56.715965 kubelet[1578]: I0515 10:16:56.715905 1578 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 10:16:56.716669 kubelet[1578]: E0515 10:16:56.715923 1578 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fabed6efb5ab6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:16:56.700607158 +0000 UTC m=+1.535198401,LastTimestamp:2025-05-15 10:16:56.700607158 +0000 UTC m=+1.535198401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 10:16:56.716802 kubelet[1578]: I0515 10:16:56.716765 1578 factory.go:221] Registration of the systemd container factory successfully May 15 10:16:56.716802 kubelet[1578]: W0515 10:16:56.716775 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 15 10:16:56.716859 kubelet[1578]: E0515 10:16:56.716820 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" May 15 10:16:56.716885 kubelet[1578]: I0515 10:16:56.716868 1578 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 10:16:56.716913 kubelet[1578]: I0515 10:16:56.716899 1578 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 10:16:56.716936 kubelet[1578]: E0515 10:16:56.716894 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" May 15 10:16:56.717265 kubelet[1578]: I0515 10:16:56.717250 1578 reconciler.go:26] "Reconciler: start to sync state" May 15 10:16:56.718126 kubelet[1578]: E0515 10:16:56.717917 1578 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 10:16:56.718277 kubelet[1578]: I0515 10:16:56.718223 1578 factory.go:221] Registration of the containerd container factory successfully May 15 10:16:56.730713 kubelet[1578]: I0515 10:16:56.730658 1578 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 10:16:56.730713 kubelet[1578]: I0515 10:16:56.730684 1578 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 10:16:56.730713 kubelet[1578]: I0515 10:16:56.730722 1578 state_mem.go:36] "Initialized new in-memory state store" May 15 10:16:56.732144 kubelet[1578]: I0515 10:16:56.732113 1578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 10:16:56.732433 kubelet[1578]: I0515 10:16:56.732408 1578 policy_none.go:49] "None policy: Start" May 15 10:16:56.732433 kubelet[1578]: I0515 10:16:56.732428 1578 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 10:16:56.732523 kubelet[1578]: I0515 10:16:56.732439 1578 state_mem.go:35] "Initializing new in-memory state store" May 15 10:16:56.733412 kubelet[1578]: I0515 10:16:56.733383 1578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 10:16:56.733412 kubelet[1578]: I0515 10:16:56.733414 1578 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 10:16:56.733526 kubelet[1578]: I0515 10:16:56.733434 1578 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
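The repeated connection-refused errors from the certificate manager and the client-go reflectors above all target https://10.0.0.74:6443, where nothing is listening yet: the kube-apiserver meant to answer there is itself one of the static pods created further down. An ad-hoc reachability probe of that endpoint would fail the same way until that container starts; a sketch (address taken from the log, TLS verification skipped because this probe has no cluster CA and only checks reachability):

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout: 2 * time.Second,
        Transport: &http.Transport{
            // Reachability check only; no cluster CA available to this probe.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    for attempt := 1; attempt <= 5; attempt++ {
        resp, err := client.Get("https://10.0.0.74:6443/healthz")
        if err != nil {
            // Until the kube-apiserver static pod is running this fails with
            // "connect: connection refused", exactly like the errors above.
            fmt.Printf("attempt %d: %v\n", attempt, err)
            time.Sleep(2 * time.Second)
            continue
        }
        fmt.Println("apiserver reachable, status:", resp.Status)
        resp.Body.Close()
        return
    }
}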
May 15 10:16:56.733526 kubelet[1578]: I0515 10:16:56.733445 1578 kubelet.go:2388] "Starting kubelet main sync loop" May 15 10:16:56.733526 kubelet[1578]: E0515 10:16:56.733498 1578 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 10:16:56.734999 kubelet[1578]: W0515 10:16:56.734942 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 15 10:16:56.735071 kubelet[1578]: E0515 10:16:56.735001 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" May 15 10:16:56.737840 systemd[1]: Created slice kubepods.slice. May 15 10:16:56.742529 systemd[1]: Created slice kubepods-burstable.slice. May 15 10:16:56.745527 systemd[1]: Created slice kubepods-besteffort.slice. May 15 10:16:56.763806 kubelet[1578]: I0515 10:16:56.763769 1578 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 10:16:56.764061 kubelet[1578]: I0515 10:16:56.763935 1578 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 10:16:56.764061 kubelet[1578]: I0515 10:16:56.763952 1578 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 10:16:56.764514 kubelet[1578]: I0515 10:16:56.764231 1578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 10:16:56.765363 kubelet[1578]: E0515 10:16:56.765281 1578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 10:16:56.765363 kubelet[1578]: E0515 10:16:56.765322 1578 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 10:16:56.841159 systemd[1]: Created slice kubepods-burstable-podd0434f84b6f6acba10ae2e06ae256115.slice. May 15 10:16:56.849978 kubelet[1578]: E0515 10:16:56.849930 1578 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 10:16:56.850582 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 15 10:16:56.864674 kubelet[1578]: E0515 10:16:56.864636 1578 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 10:16:56.864958 kubelet[1578]: I0515 10:16:56.864932 1578 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 10:16:56.865388 kubelet[1578]: E0515 10:16:56.865359 1578 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" May 15 10:16:56.866941 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
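The "Created slice kubepods-burstable-pod<uid>.slice" lines above reflect the systemd cgroup driver reported in the nodeConfig dump earlier ("CgroupDriver":"systemd"): each pod is grouped under a slice named from its QoS class and UID. A small sketch of that naming, derived only from the names visible in this log (the dash-to-underscore step is an assumption for dashed UIDs, which these static-pod UIDs do not have):

package main

import (
    "fmt"
    "strings"
)

// podSlice rebuilds the slice names visible above: with the systemd cgroup
// driver the kubelet groups each pod under kubepods-<qos>-pod<uid>.slice.
// Replacing dashes with underscores is an assumption for UIDs that contain
// them; the static-pod UIDs in this log do not.
func podSlice(qosClass, podUID string) string {
    uid := strings.ReplaceAll(podUID, "-", "_")
    return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), uid)
}

func main() {
    fmt.Println(podSlice("Burstable", "d0434f84b6f6acba10ae2e06ae256115"))
    // Output: kubepods-burstable-podd0434f84b6f6acba10ae2e06ae256115.slice
}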
May 15 10:16:56.868273 kubelet[1578]: E0515 10:16:56.868253 1578 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 10:16:56.917958 kubelet[1578]: E0515 10:16:56.917862 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" May 15 10:16:56.919101 kubelet[1578]: I0515 10:16:56.919069 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0434f84b6f6acba10ae2e06ae256115-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0434f84b6f6acba10ae2e06ae256115\") " pod="kube-system/kube-apiserver-localhost" May 15 10:16:56.919233 kubelet[1578]: I0515 10:16:56.919217 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0434f84b6f6acba10ae2e06ae256115-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0434f84b6f6acba10ae2e06ae256115\") " pod="kube-system/kube-apiserver-localhost" May 15 10:16:56.919320 kubelet[1578]: I0515 10:16:56.919305 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0434f84b6f6acba10ae2e06ae256115-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d0434f84b6f6acba10ae2e06ae256115\") " pod="kube-system/kube-apiserver-localhost" May 15 10:16:56.919409 kubelet[1578]: I0515 10:16:56.919396 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:16:56.919503 kubelet[1578]: I0515 10:16:56.919489 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:16:56.919592 kubelet[1578]: I0515 10:16:56.919578 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:16:56.919683 kubelet[1578]: I0515 10:16:56.919671 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:16:56.919789 kubelet[1578]: I0515 10:16:56.919775 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:16:56.919890 kubelet[1578]: I0515 10:16:56.919877 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 15 10:16:57.066721 kubelet[1578]: I0515 10:16:57.066676 1578 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 10:16:57.067309 kubelet[1578]: E0515 10:16:57.067273 1578 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" May 15 10:16:57.152806 kubelet[1578]: E0515 10:16:57.152779 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:57.153731 env[1220]: time="2025-05-15T10:16:57.153609318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d0434f84b6f6acba10ae2e06ae256115,Namespace:kube-system,Attempt:0,}" May 15 10:16:57.165380 kubelet[1578]: E0515 10:16:57.165352 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:57.165828 env[1220]: time="2025-05-15T10:16:57.165787278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 15 10:16:57.169368 kubelet[1578]: E0515 10:16:57.169306 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:57.169793 env[1220]: time="2025-05-15T10:16:57.169749038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 15 10:16:57.318517 kubelet[1578]: E0515 10:16:57.318472 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" May 15 10:16:57.468575 kubelet[1578]: I0515 10:16:57.468478 1578 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 10:16:57.468927 kubelet[1578]: E0515 10:16:57.468891 1578 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" May 15 10:16:57.640301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864162488.mount: Deactivated successfully. 
May 15 10:16:57.645543 env[1220]: time="2025-05-15T10:16:57.645494998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.648764 env[1220]: time="2025-05-15T10:16:57.648724598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.650061 env[1220]: time="2025-05-15T10:16:57.650027718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.650825 env[1220]: time="2025-05-15T10:16:57.650774038Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.652044 env[1220]: time="2025-05-15T10:16:57.652009518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.653389 env[1220]: time="2025-05-15T10:16:57.653358438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.654151 env[1220]: time="2025-05-15T10:16:57.654124838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.654967 env[1220]: time="2025-05-15T10:16:57.654936798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.658102 env[1220]: time="2025-05-15T10:16:57.658072038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.660168 env[1220]: time="2025-05-15T10:16:57.660140518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.661593 env[1220]: time="2025-05-15T10:16:57.661564958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.663091 env[1220]: time="2025-05-15T10:16:57.663067198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:57.692946 kubelet[1578]: W0515 10:16:57.692858 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 15 10:16:57.692946 kubelet[1578]: E0515 10:16:57.692899 1578 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" May 15 10:16:57.696983 env[1220]: time="2025-05-15T10:16:57.696871998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:16:57.696983 env[1220]: time="2025-05-15T10:16:57.696915678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:16:57.696983 env[1220]: time="2025-05-15T10:16:57.696925918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:16:57.696983 env[1220]: time="2025-05-15T10:16:57.696891758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:16:57.696983 env[1220]: time="2025-05-15T10:16:57.696923958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:16:57.696983 env[1220]: time="2025-05-15T10:16:57.696960398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:16:57.697524 env[1220]: time="2025-05-15T10:16:57.697344558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:16:57.697524 env[1220]: time="2025-05-15T10:16:57.697378318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:16:57.697524 env[1220]: time="2025-05-15T10:16:57.697388798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:16:57.697524 env[1220]: time="2025-05-15T10:16:57.697336878Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2354fef5679804f97a8364ba8c379c61444fda11650513fd363b3d559117a86 pid=1637 runtime=io.containerd.runc.v2 May 15 10:16:57.698436 env[1220]: time="2025-05-15T10:16:57.697823318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5cadce5899eeba0367e3bdd4d11a83869bbc2c94b4f9f5d4b10bccabfe7ca06e pid=1635 runtime=io.containerd.runc.v2 May 15 10:16:57.698436 env[1220]: time="2025-05-15T10:16:57.697600838Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8f855bd58307accf9dd1790d7790622a2b13ca306852af3bd4c40cd85b3cbf6 pid=1636 runtime=io.containerd.runc.v2 May 15 10:16:57.709315 systemd[1]: Started cri-containerd-d8f855bd58307accf9dd1790d7790622a2b13ca306852af3bd4c40cd85b3cbf6.scope. 
May 15 10:16:57.719292 kubelet[1578]: W0515 10:16:57.719141 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 15 10:16:57.719292 kubelet[1578]: E0515 10:16:57.719213 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" May 15 10:16:57.720551 systemd[1]: Started cri-containerd-5cadce5899eeba0367e3bdd4d11a83869bbc2c94b4f9f5d4b10bccabfe7ca06e.scope. May 15 10:16:57.726768 systemd[1]: Started cri-containerd-c2354fef5679804f97a8364ba8c379c61444fda11650513fd363b3d559117a86.scope. May 15 10:16:57.776179 env[1220]: time="2025-05-15T10:16:57.776136598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8f855bd58307accf9dd1790d7790622a2b13ca306852af3bd4c40cd85b3cbf6\"" May 15 10:16:57.777339 kubelet[1578]: E0515 10:16:57.777077 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:57.778857 env[1220]: time="2025-05-15T10:16:57.778818918Z" level=info msg="CreateContainer within sandbox \"d8f855bd58307accf9dd1790d7790622a2b13ca306852af3bd4c40cd85b3cbf6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 10:16:57.782725 env[1220]: time="2025-05-15T10:16:57.782315198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d0434f84b6f6acba10ae2e06ae256115,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2354fef5679804f97a8364ba8c379c61444fda11650513fd363b3d559117a86\"" May 15 10:16:57.782897 kubelet[1578]: E0515 10:16:57.782871 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:57.784945 env[1220]: time="2025-05-15T10:16:57.784897918Z" level=info msg="CreateContainer within sandbox \"c2354fef5679804f97a8364ba8c379c61444fda11650513fd363b3d559117a86\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 10:16:57.787253 env[1220]: time="2025-05-15T10:16:57.787215198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cadce5899eeba0367e3bdd4d11a83869bbc2c94b4f9f5d4b10bccabfe7ca06e\"" May 15 10:16:57.788326 kubelet[1578]: E0515 10:16:57.788170 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:57.789960 env[1220]: time="2025-05-15T10:16:57.789924118Z" level=info msg="CreateContainer within sandbox \"5cadce5899eeba0367e3bdd4d11a83869bbc2c94b4f9f5d4b10bccabfe7ca06e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 10:16:57.796029 env[1220]: time="2025-05-15T10:16:57.795979998Z" level=info msg="CreateContainer within sandbox 
\"d8f855bd58307accf9dd1790d7790622a2b13ca306852af3bd4c40cd85b3cbf6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d0e5f99553e87f0a41b9d5a21960ccfecc0fb1c1d339258de1f92e1ddd6856a1\"" May 15 10:16:57.796613 env[1220]: time="2025-05-15T10:16:57.796581878Z" level=info msg="StartContainer for \"d0e5f99553e87f0a41b9d5a21960ccfecc0fb1c1d339258de1f92e1ddd6856a1\"" May 15 10:16:57.799047 env[1220]: time="2025-05-15T10:16:57.799007758Z" level=info msg="CreateContainer within sandbox \"c2354fef5679804f97a8364ba8c379c61444fda11650513fd363b3d559117a86\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e0a8acec57a9885ad33740326d3de5cc28db88587a49948a95364b4693231a31\"" May 15 10:16:57.799660 env[1220]: time="2025-05-15T10:16:57.799636278Z" level=info msg="StartContainer for \"e0a8acec57a9885ad33740326d3de5cc28db88587a49948a95364b4693231a31\"" May 15 10:16:57.804341 env[1220]: time="2025-05-15T10:16:57.804287638Z" level=info msg="CreateContainer within sandbox \"5cadce5899eeba0367e3bdd4d11a83869bbc2c94b4f9f5d4b10bccabfe7ca06e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a1e615b1e388f4b24b23f2f9c983a61fa91380a4f93248b19b175a9358a6e172\"" May 15 10:16:57.804762 env[1220]: time="2025-05-15T10:16:57.804730478Z" level=info msg="StartContainer for \"a1e615b1e388f4b24b23f2f9c983a61fa91380a4f93248b19b175a9358a6e172\"" May 15 10:16:57.813529 systemd[1]: Started cri-containerd-d0e5f99553e87f0a41b9d5a21960ccfecc0fb1c1d339258de1f92e1ddd6856a1.scope. May 15 10:16:57.823116 systemd[1]: Started cri-containerd-e0a8acec57a9885ad33740326d3de5cc28db88587a49948a95364b4693231a31.scope. May 15 10:16:57.827497 systemd[1]: Started cri-containerd-a1e615b1e388f4b24b23f2f9c983a61fa91380a4f93248b19b175a9358a6e172.scope. 
May 15 10:16:57.829634 kubelet[1578]: W0515 10:16:57.829526 1578 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 15 10:16:57.829634 kubelet[1578]: E0515 10:16:57.829605 1578 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" May 15 10:16:57.882122 env[1220]: time="2025-05-15T10:16:57.880643158Z" level=info msg="StartContainer for \"e0a8acec57a9885ad33740326d3de5cc28db88587a49948a95364b4693231a31\" returns successfully" May 15 10:16:57.895781 env[1220]: time="2025-05-15T10:16:57.894138838Z" level=info msg="StartContainer for \"a1e615b1e388f4b24b23f2f9c983a61fa91380a4f93248b19b175a9358a6e172\" returns successfully" May 15 10:16:57.911254 env[1220]: time="2025-05-15T10:16:57.911209718Z" level=info msg="StartContainer for \"d0e5f99553e87f0a41b9d5a21960ccfecc0fb1c1d339258de1f92e1ddd6856a1\" returns successfully" May 15 10:16:58.270948 kubelet[1578]: I0515 10:16:58.270617 1578 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 10:16:58.742089 kubelet[1578]: E0515 10:16:58.742058 1578 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 10:16:58.742398 kubelet[1578]: E0515 10:16:58.742188 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:58.743388 kubelet[1578]: E0515 10:16:58.743369 1578 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 10:16:58.743610 kubelet[1578]: E0515 10:16:58.743597 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:58.744815 kubelet[1578]: E0515 10:16:58.744793 1578 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 10:16:58.744904 kubelet[1578]: E0515 10:16:58.744889 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:59.468448 kubelet[1578]: E0515 10:16:59.465742 1578 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 10:16:59.546154 kubelet[1578]: I0515 10:16:59.546114 1578 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 10:16:59.546154 kubelet[1578]: E0515 10:16:59.546154 1578 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 10:16:59.550910 kubelet[1578]: E0515 10:16:59.550874 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:16:59.651647 kubelet[1578]: E0515 
10:16:59.651607 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:16:59.746250 kubelet[1578]: E0515 10:16:59.746155 1578 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 10:16:59.746560 kubelet[1578]: E0515 10:16:59.746318 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:59.746759 kubelet[1578]: E0515 10:16:59.746734 1578 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 10:16:59.746962 kubelet[1578]: E0515 10:16:59.746948 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:59.752536 kubelet[1578]: E0515 10:16:59.752504 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:16:59.852607 kubelet[1578]: E0515 10:16:59.852564 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:16:59.953150 kubelet[1578]: E0515 10:16:59.953116 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:17:00.053656 kubelet[1578]: E0515 10:17:00.053614 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:17:00.154208 kubelet[1578]: E0515 10:17:00.154174 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:17:00.254438 kubelet[1578]: E0515 10:17:00.254396 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:17:00.355133 kubelet[1578]: E0515 10:17:00.355033 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:17:00.455785 kubelet[1578]: E0515 10:17:00.455752 1578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:17:00.617094 kubelet[1578]: I0515 10:17:00.617002 1578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 10:17:00.627176 kubelet[1578]: I0515 10:17:00.627148 1578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 10:17:00.630743 kubelet[1578]: I0515 10:17:00.630719 1578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 10:17:00.678646 kubelet[1578]: I0515 10:17:00.678614 1578 apiserver.go:52] "Watching apiserver" May 15 10:17:00.681598 kubelet[1578]: E0515 10:17:00.681570 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:00.717434 kubelet[1578]: I0515 10:17:00.717384 1578 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:17:00.747146 kubelet[1578]: E0515 10:17:00.747109 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:00.747624 kubelet[1578]: I0515 10:17:00.747600 1578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 10:17:00.752032 kubelet[1578]: E0515 10:17:00.751995 1578 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 10:17:00.752239 kubelet[1578]: E0515 10:17:00.752219 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:01.705242 systemd[1]: Reloading. May 15 10:17:01.750894 kubelet[1578]: E0515 10:17:01.750864 1578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:01.762634 /usr/lib/systemd/system-generators/torcx-generator[1878]: time="2025-05-15T10:17:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:17:01.764746 /usr/lib/systemd/system-generators/torcx-generator[1878]: time="2025-05-15T10:17:01Z" level=info msg="torcx already run" May 15 10:17:01.824184 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:17:01.824203 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:17:01.840887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:17:01.936852 systemd[1]: Stopping kubelet.service... May 15 10:17:01.958080 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:17:01.958263 systemd[1]: Stopped kubelet.service. May 15 10:17:01.958305 systemd[1]: kubelet.service: Consumed 1.919s CPU time. May 15 10:17:01.959813 systemd[1]: Starting kubelet.service... May 15 10:17:02.059223 systemd[1]: Started kubelet.service. May 15 10:17:02.099650 kubelet[1921]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:17:02.099978 kubelet[1921]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 10:17:02.100023 kubelet[1921]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
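After the reload above, the replacement kubelet (pid 1921) no longer needs to bootstrap: just below it reports loading its client credentials from /var/lib/kubelet/pki/kubelet-client-current.pem, the pair produced by the earlier certificate bootstrap and rotation. For reference, a small illustrative program that prints the subject and expiry of that certificate when run on the node (path taken from the log):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    // The file holds both the client certificate and its private key, so
    // non-certificate PEM blocks are skipped.
    data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
        if block.Type != "CERTIFICATE" {
            continue
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("subject=%s issuer=%s notAfter=%s\n", cert.Subject, cert.Issuer, cert.NotAfter)
    }
}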
May 15 10:17:02.100394 kubelet[1921]: I0515 10:17:02.100351 1921 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:17:02.109707 kubelet[1921]: I0515 10:17:02.109670 1921 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 10:17:02.109707 kubelet[1921]: I0515 10:17:02.109702 1921 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:17:02.110404 kubelet[1921]: I0515 10:17:02.110097 1921 server.go:954] "Client rotation is on, will bootstrap in background" May 15 10:17:02.112970 kubelet[1921]: I0515 10:17:02.112951 1921 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 10:17:02.116564 kubelet[1921]: I0515 10:17:02.115998 1921 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:17:02.119503 kubelet[1921]: E0515 10:17:02.119442 1921 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 10:17:02.119615 kubelet[1921]: I0515 10:17:02.119600 1921 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 10:17:02.123478 kubelet[1921]: I0515 10:17:02.123437 1921 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 10:17:02.123671 kubelet[1921]: I0515 10:17:02.123643 1921 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:17:02.123912 kubelet[1921]: I0515 10:17:02.123668 1921 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 10:17:02.123912 kubelet[1921]: I0515 10:17:02.123914 1921 topology_manager.go:138] "Creating 
topology manager with none policy" May 15 10:17:02.124046 kubelet[1921]: I0515 10:17:02.123924 1921 container_manager_linux.go:304] "Creating device plugin manager" May 15 10:17:02.124046 kubelet[1921]: I0515 10:17:02.123970 1921 state_mem.go:36] "Initialized new in-memory state store" May 15 10:17:02.124116 kubelet[1921]: I0515 10:17:02.124098 1921 kubelet.go:446] "Attempting to sync node with API server" May 15 10:17:02.124185 kubelet[1921]: I0515 10:17:02.124171 1921 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 10:17:02.124224 kubelet[1921]: I0515 10:17:02.124199 1921 kubelet.go:352] "Adding apiserver pod source" May 15 10:17:02.124224 kubelet[1921]: I0515 10:17:02.124209 1921 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:17:02.128350 kubelet[1921]: I0515 10:17:02.128326 1921 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:17:02.128884 kubelet[1921]: I0515 10:17:02.128859 1921 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:17:02.129511 kubelet[1921]: I0515 10:17:02.129476 1921 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 10:17:02.129511 kubelet[1921]: I0515 10:17:02.129512 1921 server.go:1287] "Started kubelet" May 15 10:17:02.131423 kubelet[1921]: I0515 10:17:02.131398 1921 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 10:17:02.132666 kubelet[1921]: I0515 10:17:02.132635 1921 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 10:17:02.133570 kubelet[1921]: I0515 10:17:02.133545 1921 server.go:490] "Adding debug handlers to kubelet server" May 15 10:17:02.134588 kubelet[1921]: I0515 10:17:02.134544 1921 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 10:17:02.134895 kubelet[1921]: I0515 10:17:02.134877 1921 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 10:17:02.136378 kubelet[1921]: I0515 10:17:02.136340 1921 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 10:17:02.139808 kubelet[1921]: I0515 10:17:02.139782 1921 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 10:17:02.140855 kubelet[1921]: I0515 10:17:02.139928 1921 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 10:17:02.141053 kubelet[1921]: I0515 10:17:02.141041 1921 reconciler.go:26] "Reconciler: start to sync state" May 15 10:17:02.141463 kubelet[1921]: I0515 10:17:02.141428 1921 factory.go:221] Registration of the systemd container factory successfully May 15 10:17:02.142229 kubelet[1921]: I0515 10:17:02.142188 1921 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 10:17:02.144194 kubelet[1921]: E0515 10:17:02.144170 1921 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 10:17:02.144975 kubelet[1921]: I0515 10:17:02.144945 1921 factory.go:221] Registration of the containerd container factory successfully May 15 10:17:02.155148 kubelet[1921]: I0515 10:17:02.155099 1921 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 10:17:02.156112 kubelet[1921]: I0515 10:17:02.156075 1921 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 10:17:02.156112 kubelet[1921]: I0515 10:17:02.156105 1921 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 10:17:02.156221 kubelet[1921]: I0515 10:17:02.156125 1921 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 10:17:02.156221 kubelet[1921]: I0515 10:17:02.156131 1921 kubelet.go:2388] "Starting kubelet main sync loop" May 15 10:17:02.156221 kubelet[1921]: E0515 10:17:02.156179 1921 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 10:17:02.176841 kubelet[1921]: I0515 10:17:02.176806 1921 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 10:17:02.176841 kubelet[1921]: I0515 10:17:02.176825 1921 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 10:17:02.176841 kubelet[1921]: I0515 10:17:02.176843 1921 state_mem.go:36] "Initialized new in-memory state store" May 15 10:17:02.177004 kubelet[1921]: I0515 10:17:02.176986 1921 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 10:17:02.177033 kubelet[1921]: I0515 10:17:02.177003 1921 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 10:17:02.177033 kubelet[1921]: I0515 10:17:02.177020 1921 policy_none.go:49] "None policy: Start" May 15 10:17:02.177033 kubelet[1921]: I0515 10:17:02.177028 1921 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 10:17:02.177096 kubelet[1921]: I0515 10:17:02.177038 1921 state_mem.go:35] "Initializing new in-memory state store" May 15 10:17:02.177139 kubelet[1921]: I0515 10:17:02.177128 1921 state_mem.go:75] "Updated machine memory state" May 15 10:17:02.180332 kubelet[1921]: I0515 10:17:02.180310 1921 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 10:17:02.180829 kubelet[1921]: I0515 10:17:02.180793 1921 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 10:17:02.181013 kubelet[1921]: I0515 10:17:02.180981 1921 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 10:17:02.181174 kubelet[1921]: I0515 10:17:02.181157 1921 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 10:17:02.181870 kubelet[1921]: E0515 10:17:02.181850 1921 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 10:17:02.258678 kubelet[1921]: I0515 10:17:02.257381 1921 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 10:17:02.258678 kubelet[1921]: I0515 10:17:02.258281 1921 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 10:17:02.259939 kubelet[1921]: I0515 10:17:02.259923 1921 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 10:17:02.262467 kubelet[1921]: E0515 10:17:02.262433 1921 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 10:17:02.263936 kubelet[1921]: E0515 10:17:02.263871 1921 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 10:17:02.264068 kubelet[1921]: E0515 10:17:02.264050 1921 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 10:17:02.287943 kubelet[1921]: I0515 10:17:02.287900 1921 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 10:17:02.293525 kubelet[1921]: I0515 10:17:02.293500 1921 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 15 10:17:02.293662 kubelet[1921]: I0515 10:17:02.293650 1921 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 10:17:02.342228 kubelet[1921]: I0515 10:17:02.342178 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0434f84b6f6acba10ae2e06ae256115-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0434f84b6f6acba10ae2e06ae256115\") " pod="kube-system/kube-apiserver-localhost" May 15 10:17:02.342228 kubelet[1921]: I0515 10:17:02.342214 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:17:02.342228 kubelet[1921]: I0515 10:17:02.342244 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:17:02.342616 kubelet[1921]: I0515 10:17:02.342271 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:17:02.342656 kubelet[1921]: I0515 10:17:02.342635 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0434f84b6f6acba10ae2e06ae256115-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"d0434f84b6f6acba10ae2e06ae256115\") " pod="kube-system/kube-apiserver-localhost" May 15 10:17:02.342684 kubelet[1921]: I0515 10:17:02.342660 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:17:02.342684 kubelet[1921]: I0515 10:17:02.342675 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:17:02.342752 kubelet[1921]: I0515 10:17:02.342710 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 15 10:17:02.342752 kubelet[1921]: I0515 10:17:02.342730 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0434f84b6f6acba10ae2e06ae256115-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d0434f84b6f6acba10ae2e06ae256115\") " pod="kube-system/kube-apiserver-localhost" May 15 10:17:02.563467 kubelet[1921]: E0515 10:17:02.563423 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:02.564716 kubelet[1921]: E0515 10:17:02.564675 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:02.564716 kubelet[1921]: E0515 10:17:02.564709 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:03.125667 kubelet[1921]: I0515 10:17:03.125629 1921 apiserver.go:52] "Watching apiserver" May 15 10:17:03.141549 kubelet[1921]: I0515 10:17:03.141499 1921 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:17:03.168236 kubelet[1921]: E0515 10:17:03.168198 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:03.168704 kubelet[1921]: E0515 10:17:03.168653 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:03.168926 kubelet[1921]: I0515 10:17:03.168913 1921 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 10:17:03.186679 kubelet[1921]: E0515 10:17:03.186641 1921 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 10:17:03.186874 kubelet[1921]: E0515 
10:17:03.186853 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:03.197081 kubelet[1921]: I0515 10:17:03.197022 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.197006203 podStartE2EDuration="3.197006203s" podCreationTimestamp="2025-05-15 10:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:17:03.189022477 +0000 UTC m=+1.126028280" watchObservedRunningTime="2025-05-15 10:17:03.197006203 +0000 UTC m=+1.134011966" May 15 10:17:03.197200 kubelet[1921]: I0515 10:17:03.197124 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.197119523 podStartE2EDuration="3.197119523s" podCreationTimestamp="2025-05-15 10:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:17:03.196613803 +0000 UTC m=+1.133619606" watchObservedRunningTime="2025-05-15 10:17:03.197119523 +0000 UTC m=+1.134125326" May 15 10:17:03.210325 kubelet[1921]: I0515 10:17:03.210274 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.210260695 podStartE2EDuration="3.210260695s" podCreationTimestamp="2025-05-15 10:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:17:03.203151929 +0000 UTC m=+1.140157732" watchObservedRunningTime="2025-05-15 10:17:03.210260695 +0000 UTC m=+1.147266498" May 15 10:17:03.687056 sudo[1320]: pam_unix(sudo:session): session closed for user root May 15 10:17:03.689159 sshd[1317]: pam_unix(sshd:session): session closed for user core May 15 10:17:03.691578 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:53744.service: Deactivated successfully. May 15 10:17:03.692287 systemd[1]: session-5.scope: Deactivated successfully. May 15 10:17:03.692436 systemd[1]: session-5.scope: Consumed 5.605s CPU time. May 15 10:17:03.692828 systemd-logind[1208]: Session 5 logged out. Waiting for processes to exit. May 15 10:17:03.693514 systemd-logind[1208]: Removed session 5. May 15 10:17:04.169520 kubelet[1921]: E0515 10:17:04.169486 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:04.170662 kubelet[1921]: E0515 10:17:04.169562 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:05.111068 kubelet[1921]: E0515 10:17:05.110641 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:05.671697 kubelet[1921]: I0515 10:17:05.671651 1921 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 10:17:05.672062 env[1220]: time="2025-05-15T10:17:05.672024063Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
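
The repeated dns.go:153 "Nameserver limits exceeded" warnings above come from the kubelet capping the nameservers it copies from the node's resolv.conf into each pod's resolv.conf at three; the applied line "1.1.1.1 1.0.0.1 8.8.8.8" implies the node listed at least one more. A minimal sketch of that clamping, with a hypothetical fourth server standing in for whatever the node actually had:

    # Sketch: mimic the kubelet's nameserver clamping when building a pod resolv.conf.
    # The fourth server (9.9.9.9) is a hypothetical stand-in; the log only shows the
    # three that survived the clamp.
    MAX_NAMESERVERS = 3  # limit enforced by the kubelet (and historically by glibc)

    resolv_conf = """\
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9
    """

    nameservers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.strip().startswith("nameserver")
    ]

    applied = nameservers[:MAX_NAMESERVERS]
    if len(nameservers) > MAX_NAMESERVERS:
        print(f'Nameserver limits exceeded, applied line: {" ".join(applied)}')
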
May 15 10:17:05.672267 kubelet[1921]: I0515 10:17:05.672232 1921 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 10:17:06.657233 systemd[1]: Created slice kubepods-besteffort-poda5b95e35_73a4_4182_a0f9_debeacc78619.slice. May 15 10:17:06.666298 systemd[1]: Created slice kubepods-burstable-pod8523d53b_9263_4582_94e6_60cc1f721638.slice. May 15 10:17:06.673834 kubelet[1921]: I0515 10:17:06.673792 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/8523d53b-9263-4582-94e6-60cc1f721638-cni-plugin\") pod \"kube-flannel-ds-zmdnh\" (UID: \"8523d53b-9263-4582-94e6-60cc1f721638\") " pod="kube-flannel/kube-flannel-ds-zmdnh" May 15 10:17:06.673834 kubelet[1921]: I0515 10:17:06.673831 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/8523d53b-9263-4582-94e6-60cc1f721638-cni\") pod \"kube-flannel-ds-zmdnh\" (UID: \"8523d53b-9263-4582-94e6-60cc1f721638\") " pod="kube-flannel/kube-flannel-ds-zmdnh" May 15 10:17:06.674159 kubelet[1921]: I0515 10:17:06.673852 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5b95e35-73a4-4182-a0f9-debeacc78619-kube-proxy\") pod \"kube-proxy-bcn7w\" (UID: \"a5b95e35-73a4-4182-a0f9-debeacc78619\") " pod="kube-system/kube-proxy-bcn7w" May 15 10:17:06.674159 kubelet[1921]: I0515 10:17:06.673868 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6544\" (UniqueName: \"kubernetes.io/projected/a5b95e35-73a4-4182-a0f9-debeacc78619-kube-api-access-f6544\") pod \"kube-proxy-bcn7w\" (UID: \"a5b95e35-73a4-4182-a0f9-debeacc78619\") " pod="kube-system/kube-proxy-bcn7w" May 15 10:17:06.674159 kubelet[1921]: I0515 10:17:06.673886 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8523d53b-9263-4582-94e6-60cc1f721638-run\") pod \"kube-flannel-ds-zmdnh\" (UID: \"8523d53b-9263-4582-94e6-60cc1f721638\") " pod="kube-flannel/kube-flannel-ds-zmdnh" May 15 10:17:06.674159 kubelet[1921]: I0515 10:17:06.673900 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/8523d53b-9263-4582-94e6-60cc1f721638-flannel-cfg\") pod \"kube-flannel-ds-zmdnh\" (UID: \"8523d53b-9263-4582-94e6-60cc1f721638\") " pod="kube-flannel/kube-flannel-ds-zmdnh" May 15 10:17:06.674159 kubelet[1921]: I0515 10:17:06.673915 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5b95e35-73a4-4182-a0f9-debeacc78619-xtables-lock\") pod \"kube-proxy-bcn7w\" (UID: \"a5b95e35-73a4-4182-a0f9-debeacc78619\") " pod="kube-system/kube-proxy-bcn7w" May 15 10:17:06.674432 kubelet[1921]: I0515 10:17:06.673931 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5b95e35-73a4-4182-a0f9-debeacc78619-lib-modules\") pod \"kube-proxy-bcn7w\" (UID: \"a5b95e35-73a4-4182-a0f9-debeacc78619\") " pod="kube-system/kube-proxy-bcn7w" May 15 10:17:06.674541 kubelet[1921]: I0515 10:17:06.674518 1921 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8523d53b-9263-4582-94e6-60cc1f721638-xtables-lock\") pod \"kube-flannel-ds-zmdnh\" (UID: \"8523d53b-9263-4582-94e6-60cc1f721638\") " pod="kube-flannel/kube-flannel-ds-zmdnh" May 15 10:17:06.674580 kubelet[1921]: I0515 10:17:06.674556 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx7ln\" (UniqueName: \"kubernetes.io/projected/8523d53b-9263-4582-94e6-60cc1f721638-kube-api-access-jx7ln\") pod \"kube-flannel-ds-zmdnh\" (UID: \"8523d53b-9263-4582-94e6-60cc1f721638\") " pod="kube-flannel/kube-flannel-ds-zmdnh" May 15 10:17:06.783066 kubelet[1921]: I0515 10:17:06.783032 1921 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 15 10:17:06.875934 kubelet[1921]: E0515 10:17:06.875897 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:06.963336 kubelet[1921]: E0515 10:17:06.962786 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:06.963580 env[1220]: time="2025-05-15T10:17:06.963518102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bcn7w,Uid:a5b95e35-73a4-4182-a0f9-debeacc78619,Namespace:kube-system,Attempt:0,}" May 15 10:17:06.968220 kubelet[1921]: E0515 10:17:06.968194 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:06.968757 env[1220]: time="2025-05-15T10:17:06.968663346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zmdnh,Uid:8523d53b-9263-4582-94e6-60cc1f721638,Namespace:kube-flannel,Attempt:0,}" May 15 10:17:06.983006 env[1220]: time="2025-05-15T10:17:06.982933996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:17:06.983096 env[1220]: time="2025-05-15T10:17:06.983017916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:17:06.983096 env[1220]: time="2025-05-15T10:17:06.983046076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:17:06.983261 env[1220]: time="2025-05-15T10:17:06.983221876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2128175d789b708dddb4bc03b435d3432b3648b731ce57fd02b195f8b7592235 pid=1993 runtime=io.containerd.runc.v2 May 15 10:17:06.993161 env[1220]: time="2025-05-15T10:17:06.992156283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:17:06.993161 env[1220]: time="2025-05-15T10:17:06.992193803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:17:06.993161 env[1220]: time="2025-05-15T10:17:06.992204203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:17:06.993161 env[1220]: time="2025-05-15T10:17:06.992351683Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b6dc98820b2adf5c4deacbfa17c752e41e94f583692903c52b916b85a5cc7d1 pid=2012 runtime=io.containerd.runc.v2 May 15 10:17:06.999126 systemd[1]: Started cri-containerd-2128175d789b708dddb4bc03b435d3432b3648b731ce57fd02b195f8b7592235.scope. May 15 10:17:07.006852 systemd[1]: Started cri-containerd-6b6dc98820b2adf5c4deacbfa17c752e41e94f583692903c52b916b85a5cc7d1.scope. May 15 10:17:07.039465 env[1220]: time="2025-05-15T10:17:07.039367594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bcn7w,Uid:a5b95e35-73a4-4182-a0f9-debeacc78619,Namespace:kube-system,Attempt:0,} returns sandbox id \"2128175d789b708dddb4bc03b435d3432b3648b731ce57fd02b195f8b7592235\"" May 15 10:17:07.040103 kubelet[1921]: E0515 10:17:07.040071 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:07.042808 env[1220]: time="2025-05-15T10:17:07.042413116Z" level=info msg="CreateContainer within sandbox \"2128175d789b708dddb4bc03b435d3432b3648b731ce57fd02b195f8b7592235\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 10:17:07.056374 env[1220]: time="2025-05-15T10:17:07.056329765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zmdnh,Uid:8523d53b-9263-4582-94e6-60cc1f721638,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"6b6dc98820b2adf5c4deacbfa17c752e41e94f583692903c52b916b85a5cc7d1\"" May 15 10:17:07.056956 kubelet[1921]: E0515 10:17:07.056932 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:07.059396 env[1220]: time="2025-05-15T10:17:07.058527766Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 15 10:17:07.063605 env[1220]: time="2025-05-15T10:17:07.063555050Z" level=info msg="CreateContainer within sandbox \"2128175d789b708dddb4bc03b435d3432b3648b731ce57fd02b195f8b7592235\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70ff7835faee953201b20fa808078953ef1ff681b6797c829351fa424fe10760\"" May 15 10:17:07.064096 env[1220]: time="2025-05-15T10:17:07.064061410Z" level=info msg="StartContainer for \"70ff7835faee953201b20fa808078953ef1ff681b6797c829351fa424fe10760\"" May 15 10:17:07.088430 systemd[1]: Started cri-containerd-70ff7835faee953201b20fa808078953ef1ff681b6797c829351fa424fe10760.scope. 
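
The containerd entries above show the full CRI round trip for the kube-proxy and kube-flannel pods: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and each sandbox's shim keeps its state under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox-id>. A small parsing sketch for reading a log like this one; the regular expression is hand-written against the message format shown above, not a containerd API:

    import re

    # Matches lines such as:
    #   RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bcn7w,...}
    #   returns sandbox id \"2128175d...\"
    PATTERN = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),.*?'
        r'returns sandbox id \\"(?P<sandbox>[0-9a-f]+)\\"'
    )

    def sandbox_ids(log_text: str) -> dict[str, str]:
        """Map pod name -> sandbox id from containerd 'returns sandbox id' lines."""
        return {m["pod"]: m["sandbox"] for m in PATTERN.finditer(log_text)}

    def shim_task_dir(sandbox_id: str, namespace: str = "k8s.io") -> str:
        """Runtime v2 shim directory, as seen in the 'starting signal loop' lines."""
        return f"/run/containerd/io.containerd.runtime.v2.task/{namespace}/{sandbox_id}"
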
May 15 10:17:07.134197 env[1220]: time="2025-05-15T10:17:07.134136696Z" level=info msg="StartContainer for \"70ff7835faee953201b20fa808078953ef1ff681b6797c829351fa424fe10760\" returns successfully" May 15 10:17:07.178766 kubelet[1921]: E0515 10:17:07.177356 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:07.189737 kubelet[1921]: I0515 10:17:07.187190 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bcn7w" podStartSLOduration=1.187175251 podStartE2EDuration="1.187175251s" podCreationTimestamp="2025-05-15 10:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:17:07.187159891 +0000 UTC m=+5.124165694" watchObservedRunningTime="2025-05-15 10:17:07.187175251 +0000 UTC m=+5.124181054" May 15 10:17:07.192568 kubelet[1921]: E0515 10:17:07.192545 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:08.178997 kubelet[1921]: E0515 10:17:08.178666 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:08.203990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72280524.mount: Deactivated successfully. May 15 10:17:08.242231 env[1220]: time="2025-05-15T10:17:08.242179013Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:08.243480 env[1220]: time="2025-05-15T10:17:08.243447894Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:08.245009 env[1220]: time="2025-05-15T10:17:08.244972535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:08.246412 env[1220]: time="2025-05-15T10:17:08.246384096Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:08.246907 env[1220]: time="2025-05-15T10:17:08.246881856Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 15 10:17:08.249405 env[1220]: time="2025-05-15T10:17:08.249369738Z" level=info msg="CreateContainer within sandbox \"6b6dc98820b2adf5c4deacbfa17c752e41e94f583692903c52b916b85a5cc7d1\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 15 10:17:08.258145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2315626637.mount: Deactivated successfully. 
May 15 10:17:08.261884 env[1220]: time="2025-05-15T10:17:08.261838265Z" level=info msg="CreateContainer within sandbox \"6b6dc98820b2adf5c4deacbfa17c752e41e94f583692903c52b916b85a5cc7d1\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"5f8d9106ba7a566d282a397bf4e6e09d645496cc98358857d1fc7c443b688276\"" May 15 10:17:08.263841 env[1220]: time="2025-05-15T10:17:08.263804826Z" level=info msg="StartContainer for \"5f8d9106ba7a566d282a397bf4e6e09d645496cc98358857d1fc7c443b688276\"" May 15 10:17:08.277575 systemd[1]: Started cri-containerd-5f8d9106ba7a566d282a397bf4e6e09d645496cc98358857d1fc7c443b688276.scope. May 15 10:17:08.311379 systemd[1]: cri-containerd-5f8d9106ba7a566d282a397bf4e6e09d645496cc98358857d1fc7c443b688276.scope: Deactivated successfully. May 15 10:17:08.313318 env[1220]: time="2025-05-15T10:17:08.313272417Z" level=info msg="StartContainer for \"5f8d9106ba7a566d282a397bf4e6e09d645496cc98358857d1fc7c443b688276\" returns successfully" May 15 10:17:08.346666 env[1220]: time="2025-05-15T10:17:08.346620677Z" level=info msg="shim disconnected" id=5f8d9106ba7a566d282a397bf4e6e09d645496cc98358857d1fc7c443b688276 May 15 10:17:08.346666 env[1220]: time="2025-05-15T10:17:08.346663317Z" level=warning msg="cleaning up after shim disconnected" id=5f8d9106ba7a566d282a397bf4e6e09d645496cc98358857d1fc7c443b688276 namespace=k8s.io May 15 10:17:08.346666 env[1220]: time="2025-05-15T10:17:08.346672157Z" level=info msg="cleaning up dead shim" May 15 10:17:08.353695 env[1220]: time="2025-05-15T10:17:08.353642202Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:17:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2277 runtime=io.containerd.runc.v2\n" May 15 10:17:09.184853 kubelet[1921]: E0515 10:17:09.184523 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:09.185195 kubelet[1921]: E0515 10:17:09.185130 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:09.186188 env[1220]: time="2025-05-15T10:17:09.186154627Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 15 10:17:10.319500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount643284978.mount: Deactivated successfully. 
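
The short-lived install-cni-plugin container whose start, exit and shim cleanup are logged above is, in the upstream kube-flannel DaemonSet, an init container that only copies the bundled flannel CNI binary onto the host and then exits. That behaviour and the paths below are assumptions based on the stock manifest, not something this log records; a sketch of the equivalent step:

    import shutil
    from pathlib import Path

    # Assumed behaviour of the upstream "install-cni-plugin" init container
    # (not recorded in this log): copy the flannel CNI plugin binary from the
    # flannel-cni-plugin image into the host's CNI plugin directory, then exit.
    SRC = Path("/flannel")              # path inside the image (assumed)
    DST = Path("/opt/cni/bin/flannel")  # conventional host CNI plugin directory

    def install_cni_plugin(src: Path = SRC, dst: Path = DST) -> None:
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
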
May 15 10:17:11.002360 env[1220]: time="2025-05-15T10:17:11.002315958Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:11.003916 env[1220]: time="2025-05-15T10:17:11.003889279Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:11.006457 env[1220]: time="2025-05-15T10:17:11.006419360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:11.008297 env[1220]: time="2025-05-15T10:17:11.008270241Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:11.009972 env[1220]: time="2025-05-15T10:17:11.009942842Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 15 10:17:11.013000 env[1220]: time="2025-05-15T10:17:11.012970443Z" level=info msg="CreateContainer within sandbox \"6b6dc98820b2adf5c4deacbfa17c752e41e94f583692903c52b916b85a5cc7d1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 10:17:11.021100 env[1220]: time="2025-05-15T10:17:11.021064407Z" level=info msg="CreateContainer within sandbox \"6b6dc98820b2adf5c4deacbfa17c752e41e94f583692903c52b916b85a5cc7d1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"751cf268af0fd7d7e4e40f41a1fb82d306c14ea1f12ace3d9c9fa5b85522e054\"" May 15 10:17:11.021520 env[1220]: time="2025-05-15T10:17:11.021496448Z" level=info msg="StartContainer for \"751cf268af0fd7d7e4e40f41a1fb82d306c14ea1f12ace3d9c9fa5b85522e054\"" May 15 10:17:11.037658 systemd[1]: Started cri-containerd-751cf268af0fd7d7e4e40f41a1fb82d306c14ea1f12ace3d9c9fa5b85522e054.scope. May 15 10:17:11.073758 systemd[1]: cri-containerd-751cf268af0fd7d7e4e40f41a1fb82d306c14ea1f12ace3d9c9fa5b85522e054.scope: Deactivated successfully. May 15 10:17:11.074679 env[1220]: time="2025-05-15T10:17:11.074598554Z" level=info msg="StartContainer for \"751cf268af0fd7d7e4e40f41a1fb82d306c14ea1f12ace3d9c9fa5b85522e054\" returns successfully" May 15 10:17:11.084621 kubelet[1921]: I0515 10:17:11.084596 1921 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 10:17:11.113935 systemd[1]: Created slice kubepods-burstable-podece574a3_1978_42e0_9f5b_276189ec71d1.slice. May 15 10:17:11.119307 systemd[1]: Created slice kubepods-burstable-podeaa4073a_8fab_47aa_8bc1_f799922869af.slice. May 15 10:17:11.174763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-751cf268af0fd7d7e4e40f41a1fb82d306c14ea1f12ace3d9c9fa5b85522e054-rootfs.mount: Deactivated successfully. 
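
The equally short-lived install-cni container above plays the same role for configuration: in the stock kube-flannel manifest it drops a conflist into /etc/cni/net.d so the container runtime can delegate pod networking to the flannel plugin. The content below is the usual upstream cni-conf.json, included as an assumption; only the "cbr0" name, hairpinMode and isDefaultGateway fields are corroborated by the delegate config containerd logs further down:

    import json

    # Assumed: the stock kube-flannel cni-conf.json. Only "cbr0", hairpinMode and
    # isDefaultGateway are corroborated by this node's logs.
    FLANNEL_CONFLIST = {
        "name": "cbr0",
        "cniVersion": "0.3.1",
        "plugins": [
            {
                "type": "flannel",
                "delegate": {"hairpinMode": True, "isDefaultGateway": True},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    def write_conflist(path: str = "/etc/cni/net.d/10-flannel.conflist") -> None:
        with open(path, "w") as f:
            json.dump(FLANNEL_CONFLIST, f, indent=2)

    if __name__ == "__main__":
        print(json.dumps(FLANNEL_CONFLIST, indent=2))
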
May 15 10:17:11.190985 kubelet[1921]: E0515 10:17:11.190582 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:11.198008 env[1220]: time="2025-05-15T10:17:11.197967777Z" level=info msg="shim disconnected" id=751cf268af0fd7d7e4e40f41a1fb82d306c14ea1f12ace3d9c9fa5b85522e054 May 15 10:17:11.198173 env[1220]: time="2025-05-15T10:17:11.198154937Z" level=warning msg="cleaning up after shim disconnected" id=751cf268af0fd7d7e4e40f41a1fb82d306c14ea1f12ace3d9c9fa5b85522e054 namespace=k8s.io May 15 10:17:11.198230 env[1220]: time="2025-05-15T10:17:11.198218137Z" level=info msg="cleaning up dead shim" May 15 10:17:11.206046 env[1220]: time="2025-05-15T10:17:11.206009861Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:17:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2333 runtime=io.containerd.runc.v2\n" May 15 10:17:11.209125 kubelet[1921]: I0515 10:17:11.209092 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ece574a3-1978-42e0-9f5b-276189ec71d1-config-volume\") pod \"coredns-668d6bf9bc-4qpzr\" (UID: \"ece574a3-1978-42e0-9f5b-276189ec71d1\") " pod="kube-system/coredns-668d6bf9bc-4qpzr" May 15 10:17:11.209216 kubelet[1921]: I0515 10:17:11.209130 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxsg\" (UniqueName: \"kubernetes.io/projected/eaa4073a-8fab-47aa-8bc1-f799922869af-kube-api-access-vhxsg\") pod \"coredns-668d6bf9bc-9gfwc\" (UID: \"eaa4073a-8fab-47aa-8bc1-f799922869af\") " pod="kube-system/coredns-668d6bf9bc-9gfwc" May 15 10:17:11.209216 kubelet[1921]: I0515 10:17:11.209162 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txg6q\" (UniqueName: \"kubernetes.io/projected/ece574a3-1978-42e0-9f5b-276189ec71d1-kube-api-access-txg6q\") pod \"coredns-668d6bf9bc-4qpzr\" (UID: \"ece574a3-1978-42e0-9f5b-276189ec71d1\") " pod="kube-system/coredns-668d6bf9bc-4qpzr" May 15 10:17:11.209269 kubelet[1921]: I0515 10:17:11.209228 1921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaa4073a-8fab-47aa-8bc1-f799922869af-config-volume\") pod \"coredns-668d6bf9bc-9gfwc\" (UID: \"eaa4073a-8fab-47aa-8bc1-f799922869af\") " pod="kube-system/coredns-668d6bf9bc-9gfwc" May 15 10:17:11.417943 kubelet[1921]: E0515 10:17:11.417903 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:11.419330 env[1220]: time="2025-05-15T10:17:11.419245329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4qpzr,Uid:ece574a3-1978-42e0-9f5b-276189ec71d1,Namespace:kube-system,Attempt:0,}" May 15 10:17:11.423266 kubelet[1921]: E0515 10:17:11.423176 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:11.425139 env[1220]: time="2025-05-15T10:17:11.425101772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gfwc,Uid:eaa4073a-8fab-47aa-8bc1-f799922869af,Namespace:kube-system,Attempt:0,}" May 15 10:17:11.458803 
env[1220]: time="2025-05-15T10:17:11.458721549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gfwc,Uid:eaa4073a-8fab-47aa-8bc1-f799922869af,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c7384a9bd9eed4c9feb21e6784aed8e1c52ab33ba2872242d36291a8d8edf36\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 10:17:11.459567 kubelet[1921]: E0515 10:17:11.459093 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7384a9bd9eed4c9feb21e6784aed8e1c52ab33ba2872242d36291a8d8edf36\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 10:17:11.459567 kubelet[1921]: E0515 10:17:11.459182 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7384a9bd9eed4c9feb21e6784aed8e1c52ab33ba2872242d36291a8d8edf36\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9gfwc" May 15 10:17:11.459567 kubelet[1921]: E0515 10:17:11.459203 1921 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7384a9bd9eed4c9feb21e6784aed8e1c52ab33ba2872242d36291a8d8edf36\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9gfwc" May 15 10:17:11.459567 kubelet[1921]: E0515 10:17:11.459255 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9gfwc_kube-system(eaa4073a-8fab-47aa-8bc1-f799922869af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9gfwc_kube-system(eaa4073a-8fab-47aa-8bc1-f799922869af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c7384a9bd9eed4c9feb21e6784aed8e1c52ab33ba2872242d36291a8d8edf36\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-9gfwc" podUID="eaa4073a-8fab-47aa-8bc1-f799922869af" May 15 10:17:11.460962 env[1220]: time="2025-05-15T10:17:11.460776750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4qpzr,Uid:ece574a3-1978-42e0-9f5b-276189ec71d1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cc1447d25eb20064e01df3f9910edd548c3d32ab19136bc12ed41ed26c4c262\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 10:17:11.461102 kubelet[1921]: E0515 10:17:11.461074 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc1447d25eb20064e01df3f9910edd548c3d32ab19136bc12ed41ed26c4c262\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 10:17:11.461158 kubelet[1921]: E0515 10:17:11.461118 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4cc1447d25eb20064e01df3f9910edd548c3d32ab19136bc12ed41ed26c4c262\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4qpzr" May 15 10:17:11.461158 kubelet[1921]: E0515 10:17:11.461134 1921 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc1447d25eb20064e01df3f9910edd548c3d32ab19136bc12ed41ed26c4c262\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4qpzr" May 15 10:17:11.461213 kubelet[1921]: E0515 10:17:11.461166 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4qpzr_kube-system(ece574a3-1978-42e0-9f5b-276189ec71d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4qpzr_kube-system(ece574a3-1978-42e0-9f5b-276189ec71d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cc1447d25eb20064e01df3f9910edd548c3d32ab19136bc12ed41ed26c4c262\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-4qpzr" podUID="ece574a3-1978-42e0-9f5b-276189ec71d1" May 15 10:17:12.175090 systemd[1]: run-netns-cni\x2d1209c3af\x2d9ac2\x2d3929\x2dbbf0\x2d3e2613d71344.mount: Deactivated successfully. May 15 10:17:12.175179 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4cc1447d25eb20064e01df3f9910edd548c3d32ab19136bc12ed41ed26c4c262-shm.mount: Deactivated successfully. May 15 10:17:12.194682 kubelet[1921]: E0515 10:17:12.194652 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:12.201512 env[1220]: time="2025-05-15T10:17:12.201453519Z" level=info msg="CreateContainer within sandbox \"6b6dc98820b2adf5c4deacbfa17c752e41e94f583692903c52b916b85a5cc7d1\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 15 10:17:12.218619 env[1220]: time="2025-05-15T10:17:12.218578407Z" level=info msg="CreateContainer within sandbox \"6b6dc98820b2adf5c4deacbfa17c752e41e94f583692903c52b916b85a5cc7d1\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c0af99f2fb6a42a6df3eec6830fb7fdc3132447d1a6e033b0c683c1ad5ff303e\"" May 15 10:17:12.219253 env[1220]: time="2025-05-15T10:17:12.219197048Z" level=info msg="StartContainer for \"c0af99f2fb6a42a6df3eec6830fb7fdc3132447d1a6e033b0c683c1ad5ff303e\"" May 15 10:17:12.237900 systemd[1]: Started cri-containerd-c0af99f2fb6a42a6df3eec6830fb7fdc3132447d1a6e033b0c683c1ad5ff303e.scope. 
May 15 10:17:12.290780 env[1220]: time="2025-05-15T10:17:12.290727522Z" level=info msg="StartContainer for \"c0af99f2fb6a42a6df3eec6830fb7fdc3132447d1a6e033b0c683c1ad5ff303e\" returns successfully" May 15 10:17:13.198399 kubelet[1921]: E0515 10:17:13.198361 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:13.209528 kubelet[1921]: I0515 10:17:13.209477 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-zmdnh" podStartSLOduration=3.256967876 podStartE2EDuration="7.209460712s" podCreationTimestamp="2025-05-15 10:17:06 +0000 UTC" firstStartedPulling="2025-05-15 10:17:07.058095966 +0000 UTC m=+4.995101769" lastFinishedPulling="2025-05-15 10:17:11.010588842 +0000 UTC m=+8.947594605" observedRunningTime="2025-05-15 10:17:13.209118432 +0000 UTC m=+11.146124235" watchObservedRunningTime="2025-05-15 10:17:13.209460712 +0000 UTC m=+11.146466515" May 15 10:17:13.368217 systemd-networkd[1046]: flannel.1: Link UP May 15 10:17:13.368223 systemd-networkd[1046]: flannel.1: Gained carrier May 15 10:17:14.199835 kubelet[1921]: E0515 10:17:14.199799 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:15.118662 kubelet[1921]: E0515 10:17:15.118634 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:15.232896 systemd-networkd[1046]: flannel.1: Gained IPv6LL May 15 10:17:15.533275 update_engine[1211]: I0515 10:17:15.533234 1211 update_attempter.cc:509] Updating boot flags... 
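
The pod_startup_latency_tracker entry above for kube-flannel-ds-zmdnh decomposes cleanly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image pull window (lastFinishedPulling minus firstStartedPulling). A quick check against the logged values; the decomposition is inferred from the numbers themselves, and the last digits differ only because the printed wall-clock timestamps are rounded:

    from datetime import datetime

    def ts(s: str) -> datetime:
        # Accepts the format printed in the log, e.g.
        # "2025-05-15 10:17:13.209460712 +0000 UTC"; nanoseconds are truncated to µs.
        date, clock, tz = s.split()[:3]
        if "." in clock:
            head, frac = clock.split(".")
            clock = f"{head}.{frac[:6]}"
        else:
            clock += ".000000"
        return datetime.strptime(f"{date} {clock} {tz}", "%Y-%m-%d %H:%M:%S.%f %z")

    created  = ts("2025-05-15 10:17:06 +0000 UTC")
    pull_beg = ts("2025-05-15 10:17:07.058095966 +0000 UTC")
    pull_end = ts("2025-05-15 10:17:11.010588842 +0000 UTC")
    running  = ts("2025-05-15 10:17:13.209460712 +0000 UTC")

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_end - pull_beg).total_seconds()
    print(f"podStartE2EDuration ~= {e2e:.6f}s")  # logged: 7.209460712s
    print(f"podStartSLOduration ~= {slo:.6f}s")  # logged: 3.256967876s
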
May 15 10:17:16.883611 kubelet[1921]: E0515 10:17:16.883539 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:17.204110 kubelet[1921]: E0515 10:17:17.204008 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:25.157334 kubelet[1921]: E0515 10:17:25.157275 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:25.158917 env[1220]: time="2025-05-15T10:17:25.158866894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4qpzr,Uid:ece574a3-1978-42e0-9f5b-276189ec71d1,Namespace:kube-system,Attempt:0,}" May 15 10:17:25.190033 systemd-networkd[1046]: cni0: Link UP May 15 10:17:25.190040 systemd-networkd[1046]: cni0: Gained carrier May 15 10:17:25.191031 systemd-networkd[1046]: cni0: Lost carrier May 15 10:17:25.198225 systemd-networkd[1046]: veth74c7a01c: Link UP May 15 10:17:25.201158 kernel: cni0: port 1(veth74c7a01c) entered blocking state May 15 10:17:25.201243 kernel: cni0: port 1(veth74c7a01c) entered disabled state May 15 10:17:25.203180 kernel: device veth74c7a01c entered promiscuous mode May 15 10:17:25.203238 kernel: cni0: port 1(veth74c7a01c) entered blocking state May 15 10:17:25.203258 kernel: cni0: port 1(veth74c7a01c) entered forwarding state May 15 10:17:25.204733 kernel: cni0: port 1(veth74c7a01c) entered disabled state May 15 10:17:25.219097 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth74c7a01c: link becomes ready May 15 10:17:25.219170 kernel: cni0: port 1(veth74c7a01c) entered blocking state May 15 10:17:25.219186 kernel: cni0: port 1(veth74c7a01c) entered forwarding state May 15 10:17:25.219819 systemd-networkd[1046]: veth74c7a01c: Gained carrier May 15 10:17:25.220023 systemd-networkd[1046]: cni0: Gained carrier May 15 10:17:25.221505 env[1220]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} May 15 10:17:25.221505 env[1220]: delegateAdd: netconf sent to delegate plugin: May 15 10:17:25.234227 env[1220]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-15T10:17:25.234154309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:17:25.234227 env[1220]: time="2025-05-15T10:17:25.234192309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:17:25.234227 env[1220]: time="2025-05-15T10:17:25.234203149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:17:25.234384 env[1220]: time="2025-05-15T10:17:25.234336389Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83efdddae327e6099891ad1141d1ea6baa20da4eb0337b9f1235e8ef323b14c9 pid=2638 runtime=io.containerd.runc.v2 May 15 10:17:25.246474 systemd[1]: Started cri-containerd-83efdddae327e6099891ad1141d1ea6baa20da4eb0337b9f1235e8ef323b14c9.scope. May 15 10:17:25.268764 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:17:25.285043 env[1220]: time="2025-05-15T10:17:25.285002480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4qpzr,Uid:ece574a3-1978-42e0-9f5b-276189ec71d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"83efdddae327e6099891ad1141d1ea6baa20da4eb0337b9f1235e8ef323b14c9\"" May 15 10:17:25.285785 kubelet[1921]: E0515 10:17:25.285755 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:25.288175 env[1220]: time="2025-05-15T10:17:25.288119520Z" level=info msg="CreateContainer within sandbox \"83efdddae327e6099891ad1141d1ea6baa20da4eb0337b9f1235e8ef323b14c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:17:25.299435 env[1220]: time="2025-05-15T10:17:25.299389243Z" level=info msg="CreateContainer within sandbox \"83efdddae327e6099891ad1141d1ea6baa20da4eb0337b9f1235e8ef323b14c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69ca4d036ce8d22580354af374cb68f4f8ff4820af57a682d4a3ce1a61e67182\"" May 15 10:17:25.299969 env[1220]: time="2025-05-15T10:17:25.299930643Z" level=info msg="StartContainer for \"69ca4d036ce8d22580354af374cb68f4f8ff4820af57a682d4a3ce1a61e67182\"" May 15 10:17:25.313147 systemd[1]: Started cri-containerd-69ca4d036ce8d22580354af374cb68f4f8ff4820af57a682d4a3ce1a61e67182.scope. May 15 10:17:25.343128 env[1220]: time="2025-05-15T10:17:25.343082332Z" level=info msg="StartContainer for \"69ca4d036ce8d22580354af374cb68f4f8ff4820af57a682d4a3ce1a61e67182\" returns successfully" May 15 10:17:25.770550 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:55468.service. May 15 10:17:25.811870 sshd[2711]: Accepted publickey for core from 10.0.0.1 port 55468 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:17:25.813291 sshd[2711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:17:25.817007 systemd-logind[1208]: New session 6 of user core. May 15 10:17:25.817797 systemd[1]: Started session-6.scope. May 15 10:17:25.932041 sshd[2711]: pam_unix(sshd:session): session closed for user core May 15 10:17:25.934594 systemd-logind[1208]: Session 6 logged out. Waiting for processes to exit. May 15 10:17:25.934818 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:55468.service: Deactivated successfully. May 15 10:17:25.935459 systemd[1]: session-6.scope: Deactivated successfully. May 15 10:17:25.936034 systemd-logind[1208]: Removed session 6. 
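
With subnet.env in place, the flannel plugin logs the netconf it delegates to the bridge plugin for the first CoreDNS sandbox: a cbr0 bridge, host-local IPAM handing out addresses from this node's 192.168.0.0/24 pod subnet, a route to the wider 192.168.0.0/17 flannel network, and MTU 1450 to leave room for the VXLAN header. The same fields, pretty-printed purely as a reading aid:

    import json

    # Fields taken from the "delegateAdd: netconf sent to delegate plugin" line above.
    delegate_netconf = {
        "cniVersion": "0.3.1",
        "name": "cbr0",
        "type": "bridge",
        "hairpinMode": True,
        "ipMasq": False,
        "isGateway": True,
        "isDefaultGateway": True,
        "mtu": 1450,  # 1500 minus the roughly 50-byte VXLAN encapsulation overhead
        "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "192.168.0.0/24"}]],  # this node's pod subnet
            "routes": [{"dst": "192.168.0.0/17"}],       # the whole flannel network
        },
    }

    print(json.dumps(delegate_netconf, indent=2))
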
May 15 10:17:26.159768 kubelet[1921]: E0515 10:17:26.159740 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:26.160312 env[1220]: time="2025-05-15T10:17:26.160273937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gfwc,Uid:eaa4073a-8fab-47aa-8bc1-f799922869af,Namespace:kube-system,Attempt:0,}" May 15 10:17:26.177244 systemd-networkd[1046]: vethbba974ab: Link UP May 15 10:17:26.178967 kernel: cni0: port 2(vethbba974ab) entered blocking state May 15 10:17:26.179148 kernel: cni0: port 2(vethbba974ab) entered disabled state May 15 10:17:26.179189 kernel: device vethbba974ab entered promiscuous mode May 15 10:17:26.182717 kernel: cni0: port 2(vethbba974ab) entered blocking state May 15 10:17:26.182782 kernel: cni0: port 2(vethbba974ab) entered forwarding state May 15 10:17:26.189420 systemd-networkd[1046]: vethbba974ab: Gained carrier May 15 10:17:26.189741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethbba974ab: link becomes ready May 15 10:17:26.191143 env[1220]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} May 15 10:17:26.191143 env[1220]: delegateAdd: netconf sent to delegate plugin: May 15 10:17:26.202395 env[1220]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-15T10:17:26.202328026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:17:26.202395 env[1220]: time="2025-05-15T10:17:26.202373866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:17:26.202556 env[1220]: time="2025-05-15T10:17:26.202524026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:17:26.202756 env[1220]: time="2025-05-15T10:17:26.202723786Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/124425d2d6a98fc0a2213b1dd9776200f6d0923e2146530cba5fa7631461a8f7 pid=2770 runtime=io.containerd.runc.v2 May 15 10:17:26.216267 systemd[1]: Started cri-containerd-124425d2d6a98fc0a2213b1dd9776200f6d0923e2146530cba5fa7631461a8f7.scope. 
May 15 10:17:26.219046 kubelet[1921]: E0515 10:17:26.218727 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:26.232416 kubelet[1921]: I0515 10:17:26.232348 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4qpzr" podStartSLOduration=20.232334191 podStartE2EDuration="20.232334191s" podCreationTimestamp="2025-05-15 10:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:17:26.230189391 +0000 UTC m=+24.167195194" watchObservedRunningTime="2025-05-15 10:17:26.232334191 +0000 UTC m=+24.169339994" May 15 10:17:26.236217 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:17:26.241802 systemd-networkd[1046]: cni0: Gained IPv6LL May 15 10:17:26.257790 env[1220]: time="2025-05-15T10:17:26.257750356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gfwc,Uid:eaa4073a-8fab-47aa-8bc1-f799922869af,Namespace:kube-system,Attempt:0,} returns sandbox id \"124425d2d6a98fc0a2213b1dd9776200f6d0923e2146530cba5fa7631461a8f7\"" May 15 10:17:26.259063 kubelet[1921]: E0515 10:17:26.258741 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:26.262736 env[1220]: time="2025-05-15T10:17:26.262558957Z" level=info msg="CreateContainer within sandbox \"124425d2d6a98fc0a2213b1dd9776200f6d0923e2146530cba5fa7631461a8f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:17:26.273004 env[1220]: time="2025-05-15T10:17:26.272968599Z" level=info msg="CreateContainer within sandbox \"124425d2d6a98fc0a2213b1dd9776200f6d0923e2146530cba5fa7631461a8f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4cc569d21397a13555a0134aad4a0d8c3240583a2f9f6721b6c6748cbb2a00f0\"" May 15 10:17:26.273328 env[1220]: time="2025-05-15T10:17:26.273308759Z" level=info msg="StartContainer for \"4cc569d21397a13555a0134aad4a0d8c3240583a2f9f6721b6c6748cbb2a00f0\"" May 15 10:17:26.287146 systemd[1]: Started cri-containerd-4cc569d21397a13555a0134aad4a0d8c3240583a2f9f6721b6c6748cbb2a00f0.scope. 
May 15 10:17:26.316856 env[1220]: time="2025-05-15T10:17:26.316811968Z" level=info msg="StartContainer for \"4cc569d21397a13555a0134aad4a0d8c3240583a2f9f6721b6c6748cbb2a00f0\" returns successfully" May 15 10:17:26.560862 systemd-networkd[1046]: veth74c7a01c: Gained IPv6LL May 15 10:17:27.222250 kubelet[1921]: E0515 10:17:27.222210 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:27.222635 kubelet[1921]: E0515 10:17:27.222615 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:27.776927 systemd-networkd[1046]: vethbba974ab: Gained IPv6LL May 15 10:17:28.223981 kubelet[1921]: E0515 10:17:28.223710 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:30.936330 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:55470.service. May 15 10:17:30.976472 sshd[2869]: Accepted publickey for core from 10.0.0.1 port 55470 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:17:30.977860 sshd[2869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:17:30.980852 systemd-logind[1208]: New session 7 of user core. May 15 10:17:30.981653 systemd[1]: Started session-7.scope. May 15 10:17:31.089028 sshd[2869]: pam_unix(sshd:session): session closed for user core May 15 10:17:31.091442 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:55470.service: Deactivated successfully. May 15 10:17:31.092204 systemd[1]: session-7.scope: Deactivated successfully. May 15 10:17:31.092682 systemd-logind[1208]: Session 7 logged out. Waiting for processes to exit. May 15 10:17:31.093378 systemd-logind[1208]: Removed session 7. May 15 10:17:36.098954 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:33958.service. May 15 10:17:36.145175 sshd[2905]: Accepted publickey for core from 10.0.0.1 port 33958 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:17:36.146485 sshd[2905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:17:36.151214 systemd-logind[1208]: New session 8 of user core. May 15 10:17:36.153203 systemd[1]: Started session-8.scope. May 15 10:17:36.276973 sshd[2905]: pam_unix(sshd:session): session closed for user core May 15 10:17:36.281956 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:33968.service. May 15 10:17:36.282711 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:33958.service: Deactivated successfully. May 15 10:17:36.283622 systemd[1]: session-8.scope: Deactivated successfully. May 15 10:17:36.284460 systemd-logind[1208]: Session 8 logged out. Waiting for processes to exit. May 15 10:17:36.285749 systemd-logind[1208]: Removed session 8. May 15 10:17:36.335929 sshd[2919]: Accepted publickey for core from 10.0.0.1 port 33968 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:17:36.337291 sshd[2919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:17:36.340785 systemd-logind[1208]: New session 9 of user core. May 15 10:17:36.341641 systemd[1]: Started session-9.scope. May 15 10:17:36.503173 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:33984.service. 
May 15 10:17:36.509339 sshd[2919]: pam_unix(sshd:session): session closed for user core
May 15 10:17:36.514647 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:33968.service: Deactivated successfully.
May 15 10:17:36.515470 systemd[1]: session-9.scope: Deactivated successfully.
May 15 10:17:36.516108 systemd-logind[1208]: Session 9 logged out. Waiting for processes to exit.
May 15 10:17:36.519028 systemd-logind[1208]: Removed session 9.
May 15 10:17:36.550570 sshd[2931]: Accepted publickey for core from 10.0.0.1 port 33984 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:17:36.551916 sshd[2931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:17:36.555254 systemd-logind[1208]: New session 10 of user core.
May 15 10:17:36.556106 systemd[1]: Started session-10.scope.
May 15 10:17:36.688912 sshd[2931]: pam_unix(sshd:session): session closed for user core
May 15 10:17:36.691586 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:33984.service: Deactivated successfully.
May 15 10:17:36.692283 systemd[1]: session-10.scope: Deactivated successfully.
May 15 10:17:36.692889 systemd-logind[1208]: Session 10 logged out. Waiting for processes to exit.
May 15 10:17:36.693787 systemd-logind[1208]: Removed session 10.
May 15 10:17:37.222575 kubelet[1921]: E0515 10:17:37.222542 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:17:37.234698 kubelet[1921]: I0515 10:17:37.234635 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9gfwc" podStartSLOduration=31.234621295 podStartE2EDuration="31.234621295s" podCreationTimestamp="2025-05-15 10:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:17:27.234263541 +0000 UTC m=+25.171269384" watchObservedRunningTime="2025-05-15 10:17:37.234621295 +0000 UTC m=+35.171627098"
May 15 10:17:37.240873 kubelet[1921]: E0515 10:17:37.240843 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:17:41.692183 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:34000.service.
May 15 10:17:41.734703 sshd[2973]: Accepted publickey for core from 10.0.0.1 port 34000 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:17:41.735128 sshd[2973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:17:41.739028 systemd-logind[1208]: New session 11 of user core.
May 15 10:17:41.740220 systemd[1]: Started session-11.scope.
May 15 10:17:41.870462 sshd[2973]: pam_unix(sshd:session): session closed for user core
May 15 10:17:41.881045 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:34008.service.
May 15 10:17:41.881417 systemd[1]: session-11.scope: Deactivated successfully.
May 15 10:17:41.882031 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:34000.service: Deactivated successfully.
May 15 10:17:41.882908 systemd-logind[1208]: Session 11 logged out. Waiting for processes to exit.
May 15 10:17:41.884042 systemd-logind[1208]: Removed session 11.
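Editor's note: the podStartSLOduration values in the two pod_startup_latency_tracker entries can be checked by hand; since both pull timestamps are at their zero value (no image pull recorded), each reported duration is exactly the watch-observed running time minus the pod creation timestamp. A quick sketch of that arithmetic, with the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entries above.
	created, _ := time.Parse(time.RFC3339, "2025-05-15T10:17:06Z")
	running4qpzr, _ := time.Parse(time.RFC3339Nano, "2025-05-15T10:17:26.232334191Z")
	running9gfwc, _ := time.Parse(time.RFC3339Nano, "2025-05-15T10:17:37.234621295Z")

	// With no image pull recorded, the reported SLO duration reduces to
	// watch-observed running time minus pod creation time.
	fmt.Println(running4qpzr.Sub(created)) // 20.232334191s
	fmt.Println(running9gfwc.Sub(created)) // 31.234621295s
}
```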
May 15 10:17:41.930119 sshd[2985]: Accepted publickey for core from 10.0.0.1 port 34008 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:17:41.931747 sshd[2985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:17:41.935190 systemd-logind[1208]: New session 12 of user core.
May 15 10:17:41.936107 systemd[1]: Started session-12.scope.
May 15 10:17:42.132774 sshd[2985]: pam_unix(sshd:session): session closed for user core
May 15 10:17:42.138895 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:34024.service.
May 15 10:17:42.142949 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:34008.service: Deactivated successfully.
May 15 10:17:42.143759 systemd[1]: session-12.scope: Deactivated successfully.
May 15 10:17:42.147285 systemd-logind[1208]: Session 12 logged out. Waiting for processes to exit.
May 15 10:17:42.148389 systemd-logind[1208]: Removed session 12.
May 15 10:17:42.193914 sshd[2996]: Accepted publickey for core from 10.0.0.1 port 34024 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:17:42.195287 sshd[2996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:17:42.199607 systemd[1]: Started session-13.scope.
May 15 10:17:42.200189 systemd-logind[1208]: New session 13 of user core.
May 15 10:17:42.954982 sshd[2996]: pam_unix(sshd:session): session closed for user core
May 15 10:17:42.958861 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:57026.service.
May 15 10:17:42.959341 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:34024.service: Deactivated successfully.
May 15 10:17:42.960661 systemd[1]: session-13.scope: Deactivated successfully.
May 15 10:17:42.961606 systemd-logind[1208]: Session 13 logged out. Waiting for processes to exit.
May 15 10:17:42.962728 systemd-logind[1208]: Removed session 13.
May 15 10:17:43.006856 sshd[3014]: Accepted publickey for core from 10.0.0.1 port 57026 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:17:43.011011 sshd[3014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:17:43.014754 systemd-logind[1208]: New session 14 of user core.
May 15 10:17:43.015619 systemd[1]: Started session-14.scope.
May 15 10:17:43.223656 sshd[3014]: pam_unix(sshd:session): session closed for user core
May 15 10:17:43.227530 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:57028.service.
May 15 10:17:43.227974 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:57026.service: Deactivated successfully.
May 15 10:17:43.228955 systemd[1]: session-14.scope: Deactivated successfully.
May 15 10:17:43.228986 systemd-logind[1208]: Session 14 logged out. Waiting for processes to exit.
May 15 10:17:43.231199 systemd-logind[1208]: Removed session 14.
May 15 10:17:43.268775 sshd[3027]: Accepted publickey for core from 10.0.0.1 port 57028 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:17:43.270122 sshd[3027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:17:43.272952 systemd-logind[1208]: New session 15 of user core.
May 15 10:17:43.273734 systemd[1]: Started session-15.scope.
May 15 10:17:43.387073 sshd[3027]: pam_unix(sshd:session): session closed for user core
May 15 10:17:43.389519 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:57028.service: Deactivated successfully.
May 15 10:17:43.390203 systemd[1]: session-15.scope: Deactivated successfully.
May 15 10:17:43.390761 systemd-logind[1208]: Session 15 logged out. Waiting for processes to exit.
May 15 10:17:43.391469 systemd-logind[1208]: Removed session 15.
May 15 10:17:48.392033 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:57038.service.
May 15 10:17:48.432970 sshd[3064]: Accepted publickey for core from 10.0.0.1 port 57038 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:17:48.434482 sshd[3064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:17:48.437745 systemd-logind[1208]: New session 16 of user core.
May 15 10:17:48.438651 systemd[1]: Started session-16.scope.
May 15 10:17:48.561068 sshd[3064]: pam_unix(sshd:session): session closed for user core
May 15 10:17:48.564516 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:57038.service: Deactivated successfully.
May 15 10:17:48.565598 systemd[1]: session-16.scope: Deactivated successfully.
May 15 10:17:48.566148 systemd-logind[1208]: Session 16 logged out. Waiting for processes to exit.
May 15 10:17:48.566835 systemd-logind[1208]: Removed session 16.
May 15 10:17:53.565003 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:54546.service.
May 15 10:17:53.615679 sshd[3120]: Accepted publickey for core from 10.0.0.1 port 54546 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:17:53.617324 sshd[3120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:17:53.620777 systemd-logind[1208]: New session 17 of user core.
May 15 10:17:53.621639 systemd[1]: Started session-17.scope.
May 15 10:17:53.737989 sshd[3120]: pam_unix(sshd:session): session closed for user core
May 15 10:17:53.741074 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:54546.service: Deactivated successfully.
May 15 10:17:53.741823 systemd[1]: session-17.scope: Deactivated successfully.
May 15 10:17:53.742326 systemd-logind[1208]: Session 17 logged out. Waiting for processes to exit.
May 15 10:17:53.743099 systemd-logind[1208]: Removed session 17.
May 15 10:17:58.742573 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:54548.service.
May 15 10:17:58.784910 sshd[3155]: Accepted publickey for core from 10.0.0.1 port 54548 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:17:58.786498 sshd[3155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:17:58.790144 systemd-logind[1208]: New session 18 of user core.
May 15 10:17:58.790624 systemd[1]: Started session-18.scope.
May 15 10:17:58.903768 sshd[3155]: pam_unix(sshd:session): session closed for user core
May 15 10:17:58.906384 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:54548.service: Deactivated successfully.
May 15 10:17:58.907113 systemd[1]: session-18.scope: Deactivated successfully.
May 15 10:17:58.907603 systemd-logind[1208]: Session 18 logged out. Waiting for processes to exit.
May 15 10:17:58.908373 systemd-logind[1208]: Removed session 18.