May 13 00:17:32.768106 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 00:17:32.768127 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon May 12 23:22:00 -00 2025 May 13 00:17:32.768135 kernel: efi: EFI v2.70 by EDK II May 13 00:17:32.768141 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 13 00:17:32.768147 kernel: random: crng init done May 13 00:17:32.768152 kernel: ACPI: Early table checksum verification disabled May 13 00:17:32.768159 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 13 00:17:32.768166 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 13 00:17:32.768172 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:17:32.768178 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:17:32.768184 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:17:32.768190 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:17:32.768195 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:17:32.768201 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:17:32.768209 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:17:32.768215 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:17:32.768222 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:17:32.768227 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 13 00:17:32.768233 kernel: NUMA: Failed to initialise from firmware May 13 00:17:32.768239 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:17:32.768245 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] May 13 00:17:32.768251 kernel: Zone ranges: May 13 00:17:32.768292 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:17:32.768301 kernel: DMA32 empty May 13 00:17:32.768307 kernel: Normal empty May 13 00:17:32.768316 kernel: Movable zone start for each node May 13 00:17:32.768323 kernel: Early memory node ranges May 13 00:17:32.768328 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 13 00:17:32.768335 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 13 00:17:32.768341 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 13 00:17:32.768346 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 13 00:17:32.768353 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 13 00:17:32.768359 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 13 00:17:32.768365 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 13 00:17:32.768371 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:17:32.768378 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 13 00:17:32.768384 kernel: psci: probing for conduit method from ACPI. May 13 00:17:32.768390 kernel: psci: PSCIv1.1 detected in firmware. 
May 13 00:17:32.768396 kernel: psci: Using standard PSCI v0.2 function IDs May 13 00:17:32.768402 kernel: psci: Trusted OS migration not required May 13 00:17:32.768411 kernel: psci: SMC Calling Convention v1.1 May 13 00:17:32.768417 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 13 00:17:32.768425 kernel: ACPI: SRAT not present May 13 00:17:32.768432 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 13 00:17:32.768438 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 13 00:17:32.768444 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 13 00:17:32.768450 kernel: Detected PIPT I-cache on CPU0 May 13 00:17:32.768457 kernel: CPU features: detected: GIC system register CPU interface May 13 00:17:32.768463 kernel: CPU features: detected: Hardware dirty bit management May 13 00:17:32.768469 kernel: CPU features: detected: Spectre-v4 May 13 00:17:32.768475 kernel: CPU features: detected: Spectre-BHB May 13 00:17:32.768483 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 00:17:32.768489 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 00:17:32.768495 kernel: CPU features: detected: ARM erratum 1418040 May 13 00:17:32.768501 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 00:17:32.768508 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 13 00:17:32.768514 kernel: Policy zone: DMA May 13 00:17:32.768522 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d May 13 00:17:32.768528 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:17:32.768535 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 00:17:32.768542 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:17:32.768548 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:17:32.768557 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved) May 13 00:17:32.768563 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 00:17:32.768570 kernel: trace event string verifier disabled May 13 00:17:32.768576 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 00:17:32.768583 kernel: rcu: RCU event tracing is enabled. May 13 00:17:32.768589 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 00:17:32.768596 kernel: Trampoline variant of Tasks RCU enabled. May 13 00:17:32.768602 kernel: Tracing variant of Tasks RCU enabled. May 13 00:17:32.768608 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 00:17:32.768615 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 00:17:32.768621 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 00:17:32.768628 kernel: GICv3: 256 SPIs implemented May 13 00:17:32.768634 kernel: GICv3: 0 Extended SPIs implemented May 13 00:17:32.768640 kernel: GICv3: Distributor has no Range Selector support May 13 00:17:32.768646 kernel: Root IRQ handler: gic_handle_irq May 13 00:17:32.768652 kernel: GICv3: 16 PPIs implemented May 13 00:17:32.768659 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 13 00:17:32.768665 kernel: ACPI: SRAT not present May 13 00:17:32.768671 kernel: ITS [mem 0x08080000-0x0809ffff] May 13 00:17:32.768677 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 13 00:17:32.768684 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 13 00:17:32.768691 kernel: GICv3: using LPI property table @0x00000000400d0000 May 13 00:17:32.768697 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 13 00:17:32.768705 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:17:32.768711 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 00:17:32.768718 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 00:17:32.768724 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 00:17:32.768730 kernel: arm-pv: using stolen time PV May 13 00:17:32.768737 kernel: Console: colour dummy device 80x25 May 13 00:17:32.768750 kernel: ACPI: Core revision 20210730 May 13 00:17:32.768762 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 13 00:17:32.768769 kernel: pid_max: default: 32768 minimum: 301 May 13 00:17:32.768776 kernel: LSM: Security Framework initializing May 13 00:17:32.768784 kernel: SELinux: Initializing. May 13 00:17:32.768791 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:17:32.768798 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:17:32.768804 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 13 00:17:32.768811 kernel: rcu: Hierarchical SRCU implementation. May 13 00:17:32.768817 kernel: Platform MSI: ITS@0x8080000 domain created May 13 00:17:32.768824 kernel: PCI/MSI: ITS@0x8080000 domain created May 13 00:17:32.768831 kernel: Remapping and enabling EFI services. May 13 00:17:32.768837 kernel: smp: Bringing up secondary CPUs ... 
May 13 00:17:32.768845 kernel: Detected PIPT I-cache on CPU1 May 13 00:17:32.768852 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 13 00:17:32.768859 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 13 00:17:32.768865 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:17:32.768872 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 00:17:32.768878 kernel: Detected PIPT I-cache on CPU2 May 13 00:17:32.768885 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 13 00:17:32.768892 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 13 00:17:32.768899 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:17:32.768906 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 13 00:17:32.768918 kernel: Detected PIPT I-cache on CPU3 May 13 00:17:32.768925 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 13 00:17:32.768931 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 13 00:17:32.768938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:17:32.768949 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 13 00:17:32.768957 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:17:32.768964 kernel: SMP: Total of 4 processors activated. May 13 00:17:32.768971 kernel: CPU features: detected: 32-bit EL0 Support May 13 00:17:32.768979 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 00:17:32.768985 kernel: CPU features: detected: Common not Private translations May 13 00:17:32.768992 kernel: CPU features: detected: CRC32 instructions May 13 00:17:32.768999 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 00:17:32.769007 kernel: CPU features: detected: LSE atomic instructions May 13 00:17:32.769020 kernel: CPU features: detected: Privileged Access Never May 13 00:17:32.769028 kernel: CPU features: detected: RAS Extension Support May 13 00:17:32.769034 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 00:17:32.769041 kernel: CPU: All CPU(s) started at EL1 May 13 00:17:32.769049 kernel: alternatives: patching kernel code May 13 00:17:32.769056 kernel: devtmpfs: initialized May 13 00:17:32.769063 kernel: KASLR enabled May 13 00:17:32.769070 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:17:32.769076 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:17:32.769083 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:17:32.769090 kernel: SMBIOS 3.0.0 present. 
May 13 00:17:32.769097 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 13 00:17:32.769104 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:17:32.769112 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 00:17:32.769119 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 00:17:32.769125 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 00:17:32.769133 kernel: audit: initializing netlink subsys (disabled) May 13 00:17:32.769140 kernel: audit: type=2000 audit(0.070:1): state=initialized audit_enabled=0 res=1 May 13 00:17:32.769147 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:17:32.769154 kernel: cpuidle: using governor menu May 13 00:17:32.769161 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 00:17:32.769168 kernel: ASID allocator initialised with 32768 entries May 13 00:17:32.769176 kernel: ACPI: bus type PCI registered May 13 00:17:32.769183 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:17:32.769190 kernel: Serial: AMBA PL011 UART driver May 13 00:17:32.769198 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:17:32.769204 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 13 00:17:32.769212 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:17:32.769219 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 13 00:17:32.769226 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:17:32.769233 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 00:17:32.769241 kernel: ACPI: Added _OSI(Module Device) May 13 00:17:32.769248 kernel: ACPI: Added _OSI(Processor Device) May 13 00:17:32.769267 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:17:32.769274 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:17:32.769281 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 00:17:32.769288 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 00:17:32.769298 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 00:17:32.769305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:17:32.769313 kernel: ACPI: Interpreter enabled May 13 00:17:32.769322 kernel: ACPI: Using GIC for interrupt routing May 13 00:17:32.769329 kernel: ACPI: MCFG table detected, 1 entries May 13 00:17:32.769336 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 13 00:17:32.769343 kernel: printk: console [ttyAMA0] enabled May 13 00:17:32.769350 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 00:17:32.769504 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:17:32.769573 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 13 00:17:32.769638 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 13 00:17:32.769704 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 13 00:17:32.769763 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 13 00:17:32.769772 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 13 00:17:32.769779 kernel: PCI host bridge to bus 0000:00 May 13 00:17:32.769852 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 13 00:17:32.769922 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 13 00:17:32.769981 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 13 00:17:32.770046 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 00:17:32.770123 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 13 00:17:32.770205 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 13 00:17:32.770327 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 13 00:17:32.770397 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 13 00:17:32.770460 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 13 00:17:32.770527 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 13 00:17:32.770587 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 13 00:17:32.770651 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 13 00:17:32.770707 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 13 00:17:32.770762 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 13 00:17:32.770817 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 13 00:17:32.770827 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 13 00:17:32.770834 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 13 00:17:32.770842 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 13 00:17:32.770850 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 13 00:17:32.770857 kernel: iommu: Default domain type: Translated May 13 00:17:32.770863 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 00:17:32.770870 kernel: vgaarb: loaded May 13 00:17:32.770877 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 00:17:32.770887 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 00:17:32.770896 kernel: PTP clock support registered May 13 00:17:32.770903 kernel: Registered efivars operations May 13 00:17:32.770912 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 00:17:32.770919 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:17:32.770926 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:17:32.770933 kernel: pnp: PnP ACPI init May 13 00:17:32.771003 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 13 00:17:32.771013 kernel: pnp: PnP ACPI: found 1 devices May 13 00:17:32.771028 kernel: NET: Registered PF_INET protocol family May 13 00:17:32.771035 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 00:17:32.771045 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 00:17:32.771052 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:17:32.771059 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:17:32.771066 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 13 00:17:32.771073 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 00:17:32.771080 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:17:32.771086 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:17:32.771093 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:17:32.771100 kernel: PCI: CLS 0 bytes, default 64 May 13 00:17:32.771108 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 00:17:32.771115 kernel: kvm [1]: HYP mode not available May 13 00:17:32.771122 kernel: Initialise system trusted keyrings May 13 00:17:32.771129 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 00:17:32.771136 kernel: Key type asymmetric registered May 13 00:17:32.771142 kernel: Asymmetric key parser 'x509' registered May 13 00:17:32.771150 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 00:17:32.771157 kernel: io scheduler mq-deadline registered May 13 00:17:32.771164 kernel: io scheduler kyber registered May 13 00:17:32.771172 kernel: io scheduler bfq registered May 13 00:17:32.771179 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 00:17:32.771186 kernel: ACPI: button: Power Button [PWRB] May 13 00:17:32.771193 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 00:17:32.771296 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 13 00:17:32.771308 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:17:32.771315 kernel: thunder_xcv, ver 1.0 May 13 00:17:32.771323 kernel: thunder_bgx, ver 1.0 May 13 00:17:32.771331 kernel: nicpf, ver 1.0 May 13 00:17:32.771341 kernel: nicvf, ver 1.0 May 13 00:17:32.771435 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 00:17:32.771524 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:17:32 UTC (1747095452) May 13 00:17:32.771534 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 00:17:32.771543 kernel: NET: Registered PF_INET6 protocol family May 13 00:17:32.771550 kernel: Segment Routing with IPv6 May 13 00:17:32.771557 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:17:32.771564 kernel: NET: Registered PF_PACKET protocol family May 13 00:17:32.771572 kernel: Key type 
dns_resolver registered May 13 00:17:32.771579 kernel: registered taskstats version 1 May 13 00:17:32.771586 kernel: Loading compiled-in X.509 certificates May 13 00:17:32.771593 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: d291b704d59536a3c0ba96fd6f5a99459de8de99' May 13 00:17:32.771600 kernel: Key type .fscrypt registered May 13 00:17:32.771606 kernel: Key type fscrypt-provisioning registered May 13 00:17:32.771613 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 00:17:32.771620 kernel: ima: Allocated hash algorithm: sha1 May 13 00:17:32.771627 kernel: ima: No architecture policies found May 13 00:17:32.771635 kernel: clk: Disabling unused clocks May 13 00:17:32.771642 kernel: Freeing unused kernel memory: 36480K May 13 00:17:32.771649 kernel: Run /init as init process May 13 00:17:32.771656 kernel: with arguments: May 13 00:17:32.771663 kernel: /init May 13 00:17:32.771670 kernel: with environment: May 13 00:17:32.771677 kernel: HOME=/ May 13 00:17:32.771683 kernel: TERM=linux May 13 00:17:32.771690 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:17:32.771700 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:17:32.771710 systemd[1]: Detected virtualization kvm. May 13 00:17:32.771717 systemd[1]: Detected architecture arm64. May 13 00:17:32.771725 systemd[1]: Running in initrd. May 13 00:17:32.771732 systemd[1]: No hostname configured, using default hostname. May 13 00:17:32.771740 systemd[1]: Hostname set to <localhost>. May 13 00:17:32.771748 systemd[1]: Initializing machine ID from VM UUID. May 13 00:17:32.771756 systemd[1]: Queued start job for default target initrd.target. May 13 00:17:32.771763 systemd[1]: Started systemd-ask-password-console.path. May 13 00:17:32.771771 systemd[1]: Reached target cryptsetup.target. May 13 00:17:32.771778 systemd[1]: Reached target paths.target. May 13 00:17:32.771785 systemd[1]: Reached target slices.target. May 13 00:17:32.771792 systemd[1]: Reached target swap.target. May 13 00:17:32.771799 systemd[1]: Reached target timers.target. May 13 00:17:32.771807 systemd[1]: Listening on iscsid.socket. May 13 00:17:32.771815 systemd[1]: Listening on iscsiuio.socket. May 13 00:17:32.771823 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:17:32.771830 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:17:32.771837 systemd[1]: Listening on systemd-journald.socket. May 13 00:17:32.771844 systemd[1]: Listening on systemd-networkd.socket. May 13 00:17:32.771851 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:17:32.771859 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:17:32.771866 systemd[1]: Reached target sockets.target. May 13 00:17:32.771874 systemd[1]: Starting kmod-static-nodes.service... May 13 00:17:32.771882 systemd[1]: Finished network-cleanup.service. May 13 00:17:32.771889 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:17:32.771897 systemd[1]: Starting systemd-journald.service... May 13 00:17:32.771905 systemd[1]: Starting systemd-modules-load.service... May 13 00:17:32.771913 systemd[1]: Starting systemd-resolved.service... May 13 00:17:32.771921 systemd[1]: Starting systemd-vconsole-setup.service... 
May 13 00:17:32.771928 systemd[1]: Finished kmod-static-nodes.service. May 13 00:17:32.771936 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:17:32.771944 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:17:32.771966 systemd[1]: Finished systemd-vconsole-setup.service. May 13 00:17:32.771973 systemd[1]: Starting dracut-cmdline-ask.service... May 13 00:17:32.771986 systemd-journald[290]: Journal started May 13 00:17:32.772037 systemd-journald[290]: Runtime Journal (/run/log/journal/38c535d99e0c4fd6b1275beeef027dd9) is 6.0M, max 48.7M, 42.6M free. May 13 00:17:32.764453 systemd-modules-load[291]: Inserted module 'overlay' May 13 00:17:32.775612 systemd[1]: Started systemd-journald.service. May 13 00:17:32.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.776161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:17:32.782564 kernel: audit: type=1130 audit(1747095452.775:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.782590 kernel: audit: type=1130 audit(1747095452.778:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.779362 systemd[1]: Finished dracut-cmdline-ask.service. May 13 00:17:32.782944 systemd[1]: Starting dracut-cmdline.service... May 13 00:17:32.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.787263 kernel: audit: type=1130 audit(1747095452.781:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.798535 systemd-resolved[292]: Positive Trust Anchors: May 13 00:17:32.799721 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:17:32.799741 dracut-cmdline[310]: dracut-dracut-053 May 13 00:17:32.798552 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:17:32.798580 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:17:32.806984 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d May 13 00:17:32.811923 systemd-modules-load[291]: Inserted module 'br_netfilter' May 13 00:17:32.812653 kernel: Bridge firewalling registered May 13 00:17:32.814422 systemd-resolved[292]: Defaulting to hostname 'linux'. May 13 00:17:32.816801 systemd[1]: Started systemd-resolved.service. May 13 00:17:32.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.817618 systemd[1]: Reached target nss-lookup.target. May 13 00:17:32.820425 kernel: audit: type=1130 audit(1747095452.817:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.828277 kernel: SCSI subsystem initialized May 13 00:17:32.836845 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:17:32.836896 kernel: device-mapper: uevent: version 1.0.3 May 13 00:17:32.836907 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 00:17:32.839545 systemd-modules-load[291]: Inserted module 'dm_multipath' May 13 00:17:32.840440 systemd[1]: Finished systemd-modules-load.service. May 13 00:17:32.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.842240 systemd[1]: Starting systemd-sysctl.service... May 13 00:17:32.844997 kernel: audit: type=1130 audit(1747095452.841:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.850597 systemd[1]: Finished systemd-sysctl.service. May 13 00:17:32.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.854292 kernel: audit: type=1130 audit(1747095452.851:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.894290 kernel: Loading iSCSI transport class v2.0-870. 
May 13 00:17:32.907275 kernel: iscsi: registered transport (tcp) May 13 00:17:32.925286 kernel: iscsi: registered transport (qla4xxx) May 13 00:17:32.925348 kernel: QLogic iSCSI HBA Driver May 13 00:17:32.967179 systemd[1]: Finished dracut-cmdline.service. May 13 00:17:32.971376 kernel: audit: type=1130 audit(1747095452.967:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:32.968992 systemd[1]: Starting dracut-pre-udev.service... May 13 00:17:33.021278 kernel: raid6: neonx8 gen() 13640 MB/s May 13 00:17:33.038305 kernel: raid6: neonx8 xor() 10790 MB/s May 13 00:17:33.055271 kernel: raid6: neonx4 gen() 12692 MB/s May 13 00:17:33.072265 kernel: raid6: neonx4 xor() 11153 MB/s May 13 00:17:33.089269 kernel: raid6: neonx2 gen() 12982 MB/s May 13 00:17:33.106267 kernel: raid6: neonx2 xor() 10295 MB/s May 13 00:17:33.123270 kernel: raid6: neonx1 gen() 10583 MB/s May 13 00:17:33.140268 kernel: raid6: neonx1 xor() 8760 MB/s May 13 00:17:33.157266 kernel: raid6: int64x8 gen() 6098 MB/s May 13 00:17:33.174266 kernel: raid6: int64x8 xor() 3531 MB/s May 13 00:17:33.191264 kernel: raid6: int64x4 gen() 7146 MB/s May 13 00:17:33.208266 kernel: raid6: int64x4 xor() 3838 MB/s May 13 00:17:33.225266 kernel: raid6: int64x2 gen() 6147 MB/s May 13 00:17:33.242269 kernel: raid6: int64x2 xor() 3320 MB/s May 13 00:17:33.259271 kernel: raid6: int64x1 gen() 5041 MB/s May 13 00:17:33.276586 kernel: raid6: int64x1 xor() 2646 MB/s May 13 00:17:33.276597 kernel: raid6: using algorithm neonx8 gen() 13640 MB/s May 13 00:17:33.276614 kernel: raid6: .... xor() 10790 MB/s, rmw enabled May 13 00:17:33.276623 kernel: raid6: using neon recovery algorithm May 13 00:17:33.287355 kernel: xor: measuring software checksum speed May 13 00:17:33.287370 kernel: 8regs : 17191 MB/sec May 13 00:17:33.288362 kernel: 32regs : 20707 MB/sec May 13 00:17:33.288377 kernel: arm64_neon : 27150 MB/sec May 13 00:17:33.288385 kernel: xor: using function: arm64_neon (27150 MB/sec) May 13 00:17:33.346278 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 13 00:17:33.360566 systemd[1]: Finished dracut-pre-udev.service. May 13 00:17:33.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:33.363000 audit: BPF prog-id=7 op=LOAD May 13 00:17:33.363989 systemd[1]: Starting systemd-udevd.service... May 13 00:17:33.364615 kernel: audit: type=1130 audit(1747095453.360:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:33.364640 kernel: audit: type=1334 audit(1747095453.363:10): prog-id=7 op=LOAD May 13 00:17:33.363000 audit: BPF prog-id=8 op=LOAD May 13 00:17:33.376315 systemd-udevd[495]: Using default interface naming scheme 'v252'. May 13 00:17:33.379688 systemd[1]: Started systemd-udevd.service. May 13 00:17:33.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:17:33.381084 systemd[1]: Starting dracut-pre-trigger.service... May 13 00:17:33.394207 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation May 13 00:17:33.422794 systemd[1]: Finished dracut-pre-trigger.service. May 13 00:17:33.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:33.424538 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:17:33.458170 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:17:33.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:33.483361 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 00:17:33.489678 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 00:17:33.489695 kernel: GPT:9289727 != 19775487 May 13 00:17:33.489705 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 00:17:33.489714 kernel: GPT:9289727 != 19775487 May 13 00:17:33.489722 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:17:33.489731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:17:33.506981 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 00:17:33.509934 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 00:17:33.511361 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 00:17:33.514800 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (543) May 13 00:17:33.516575 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 00:17:33.519448 systemd[1]: Starting disk-uuid.service... May 13 00:17:33.522646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:17:33.525483 disk-uuid[564]: Primary Header is updated. May 13 00:17:33.525483 disk-uuid[564]: Secondary Entries is updated. May 13 00:17:33.525483 disk-uuid[564]: Secondary Header is updated. May 13 00:17:33.529279 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:17:34.538274 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:17:34.538323 disk-uuid[565]: The operation has completed successfully. May 13 00:17:34.564148 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:17:34.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.564263 systemd[1]: Finished disk-uuid.service. May 13 00:17:34.565664 systemd[1]: Starting verity-setup.service... May 13 00:17:34.583274 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 00:17:34.604585 systemd[1]: Found device dev-mapper-usr.device. May 13 00:17:34.606529 systemd[1]: Mounting sysusr-usr.mount... May 13 00:17:34.608512 systemd[1]: Finished verity-setup.service. 
May 13 00:17:34.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.653989 systemd[1]: Mounted sysusr-usr.mount. May 13 00:17:34.655097 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 00:17:34.654734 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 00:17:34.655475 systemd[1]: Starting ignition-setup.service... May 13 00:17:34.657227 systemd[1]: Starting parse-ip-for-networkd.service... May 13 00:17:34.664530 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:17:34.664568 kernel: BTRFS info (device vda6): using free space tree May 13 00:17:34.664578 kernel: BTRFS info (device vda6): has skinny extents May 13 00:17:34.673658 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:17:34.680066 systemd[1]: Finished ignition-setup.service. May 13 00:17:34.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.681585 systemd[1]: Starting ignition-fetch-offline.service... May 13 00:17:34.742090 systemd[1]: Finished parse-ip-for-networkd.service. May 13 00:17:34.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.743000 audit: BPF prog-id=9 op=LOAD May 13 00:17:34.744095 systemd[1]: Starting systemd-networkd.service... May 13 00:17:34.770222 systemd-networkd[735]: lo: Link UP May 13 00:17:34.770235 systemd-networkd[735]: lo: Gained carrier May 13 00:17:34.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.770639 systemd-networkd[735]: Enumeration completed May 13 00:17:34.770736 systemd[1]: Started systemd-networkd.service. May 13 00:17:34.770813 systemd-networkd[735]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:17:34.771588 systemd[1]: Reached target network.target. May 13 00:17:34.773241 systemd[1]: Starting iscsiuio.service... May 13 00:17:34.775220 systemd-networkd[735]: eth0: Link UP May 13 00:17:34.775224 systemd-networkd[735]: eth0: Gained carrier May 13 00:17:34.784962 systemd[1]: Started iscsiuio.service. May 13 00:17:34.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.786665 systemd[1]: Starting iscsid.service... May 13 00:17:34.790640 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 00:17:34.790640 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. 
May 13 00:17:34.790640 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 00:17:34.790640 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 00:17:34.790640 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 00:17:34.790640 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 00:17:34.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.793515 systemd[1]: Started iscsid.service. May 13 00:17:34.797370 systemd-networkd[735]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:17:34.797969 systemd[1]: Starting dracut-initqueue.service... May 13 00:17:34.810124 systemd[1]: Finished dracut-initqueue.service. May 13 00:17:34.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.811108 systemd[1]: Reached target remote-fs-pre.target. May 13 00:17:34.812334 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:17:34.813638 systemd[1]: Reached target remote-fs.target. May 13 00:17:34.815931 systemd[1]: Starting dracut-pre-mount.service... May 13 00:17:34.825738 systemd[1]: Finished dracut-pre-mount.service. May 13 00:17:34.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.841791 ignition[647]: Ignition 2.14.0 May 13 00:17:34.841804 ignition[647]: Stage: fetch-offline May 13 00:17:34.841850 ignition[647]: no configs at "/usr/lib/ignition/base.d" May 13 00:17:34.841859 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:17:34.842046 ignition[647]: parsed url from cmdline: "" May 13 00:17:34.842049 ignition[647]: no config URL provided May 13 00:17:34.842054 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:17:34.842062 ignition[647]: no config at "/usr/lib/ignition/user.ign" May 13 00:17:34.842081 ignition[647]: op(1): [started] loading QEMU firmware config module May 13 00:17:34.842086 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 00:17:34.846305 ignition[647]: op(1): [finished] loading QEMU firmware config module May 13 00:17:34.872111 ignition[647]: parsing config with SHA512: b982a41e10ae8836c1181f7e09b2ac44a2db732d1718db48d1fef71055c6b364464d243d8782b05342a643d069a6bd0b054d5cf749676033a2e5c54b9c927647 May 13 00:17:34.879580 unknown[647]: fetched base config from "system" May 13 00:17:34.880272 ignition[647]: fetch-offline: fetch-offline passed May 13 00:17:34.879591 unknown[647]: fetched user config from "qemu" May 13 00:17:34.880326 ignition[647]: Ignition finished successfully May 13 00:17:34.883083 systemd[1]: Finished ignition-fetch-offline.service. May 13 00:17:34.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.884440 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
May 13 00:17:34.885679 systemd[1]: Starting ignition-kargs.service... May 13 00:17:34.895779 ignition[760]: Ignition 2.14.0 May 13 00:17:34.895790 ignition[760]: Stage: kargs May 13 00:17:34.895896 ignition[760]: no configs at "/usr/lib/ignition/base.d" May 13 00:17:34.895907 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:17:34.897154 ignition[760]: kargs: kargs passed May 13 00:17:34.897203 ignition[760]: Ignition finished successfully May 13 00:17:34.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.900171 systemd[1]: Finished ignition-kargs.service. May 13 00:17:34.902183 systemd[1]: Starting ignition-disks.service... May 13 00:17:34.909607 ignition[766]: Ignition 2.14.0 May 13 00:17:34.909618 ignition[766]: Stage: disks May 13 00:17:34.909719 ignition[766]: no configs at "/usr/lib/ignition/base.d" May 13 00:17:34.911852 systemd[1]: Finished ignition-disks.service. May 13 00:17:34.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.909729 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:17:34.913248 systemd[1]: Reached target initrd-root-device.target. May 13 00:17:34.910728 ignition[766]: disks: disks passed May 13 00:17:34.914386 systemd[1]: Reached target local-fs-pre.target. May 13 00:17:34.910776 ignition[766]: Ignition finished successfully May 13 00:17:34.915851 systemd[1]: Reached target local-fs.target. May 13 00:17:34.917097 systemd[1]: Reached target sysinit.target. May 13 00:17:34.918131 systemd[1]: Reached target basic.target. May 13 00:17:34.920200 systemd[1]: Starting systemd-fsck-root.service... May 13 00:17:34.931906 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 13 00:17:34.935451 systemd[1]: Finished systemd-fsck-root.service. May 13 00:17:34.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:34.937521 systemd[1]: Mounting sysroot.mount... May 13 00:17:34.944267 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 13 00:17:34.944744 systemd[1]: Mounted sysroot.mount. May 13 00:17:34.945369 systemd[1]: Reached target initrd-root-fs.target. May 13 00:17:34.947515 systemd[1]: Mounting sysroot-usr.mount... May 13 00:17:34.948330 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 00:17:34.948394 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:17:34.948429 systemd[1]: Reached target ignition-diskful.target. May 13 00:17:34.950693 systemd[1]: Mounted sysroot-usr.mount. May 13 00:17:34.953581 systemd[1]: Starting initrd-setup-root.service... 
May 13 00:17:34.957996 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:17:34.963027 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory May 13 00:17:34.967553 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:17:34.971176 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:17:34.998658 systemd[1]: Finished initrd-setup-root.service. May 13 00:17:34.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:35.000208 systemd[1]: Starting ignition-mount.service... May 13 00:17:35.001623 systemd[1]: Starting sysroot-boot.service... May 13 00:17:35.005932 bash[825]: umount: /sysroot/usr/share/oem: not mounted. May 13 00:17:35.014822 ignition[827]: INFO : Ignition 2.14.0 May 13 00:17:35.014822 ignition[827]: INFO : Stage: mount May 13 00:17:35.017169 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:17:35.017169 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:17:35.017169 ignition[827]: INFO : mount: mount passed May 13 00:17:35.017169 ignition[827]: INFO : Ignition finished successfully May 13 00:17:35.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:35.018408 systemd[1]: Finished ignition-mount.service. May 13 00:17:35.028816 systemd[1]: Finished sysroot-boot.service. May 13 00:17:35.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:35.614369 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 00:17:35.620539 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) May 13 00:17:35.620567 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:17:35.620577 kernel: BTRFS info (device vda6): using free space tree May 13 00:17:35.621467 kernel: BTRFS info (device vda6): has skinny extents May 13 00:17:35.624110 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 00:17:35.625480 systemd[1]: Starting ignition-files.service... 
May 13 00:17:35.639140 ignition[856]: INFO : Ignition 2.14.0 May 13 00:17:35.639140 ignition[856]: INFO : Stage: files May 13 00:17:35.640655 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:17:35.640655 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:17:35.640655 ignition[856]: DEBUG : files: compiled without relabeling support, skipping May 13 00:17:35.644589 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:17:35.644589 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:17:35.647589 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:17:35.647589 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:17:35.647589 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:17:35.647191 unknown[856]: wrote ssh authorized keys file for user: core May 13 00:17:35.653811 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:17:35.653811 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:17:35.653811 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 00:17:35.653811 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 13 00:17:35.736927 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:17:35.846755 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:17:35.848810 ignition[856]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:17:35.848810 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 13 00:17:36.163048 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 00:17:36.446359 systemd-networkd[735]: eth0: Gained IPv6LL May 13 00:17:36.511607 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:17:36.511607 ignition[856]: INFO : files: op(c): [started] processing unit "containerd.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:17:36.515390 ignition[856]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:17:36.515390 ignition[856]: INFO : files: op(c): [finished] processing unit "containerd.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:17:36.515390 ignition[856]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:17:36.544650 ignition[856]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:17:36.547106 ignition[856]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:17:36.547106 ignition[856]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" May 13 
00:17:36.547106 ignition[856]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:17:36.547106 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:17:36.547106 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:17:36.547106 ignition[856]: INFO : files: files passed May 13 00:17:36.547106 ignition[856]: INFO : Ignition finished successfully May 13 00:17:36.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.547295 systemd[1]: Finished ignition-files.service. May 13 00:17:36.549940 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 00:17:36.550900 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 00:17:36.562929 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 13 00:17:36.551547 systemd[1]: Starting ignition-quench.service... May 13 00:17:36.565101 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:17:36.555528 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:17:36.555614 systemd[1]: Finished ignition-quench.service. May 13 00:17:36.556821 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 00:17:36.557886 systemd[1]: Reached target ignition-complete.target. May 13 00:17:36.559730 systemd[1]: Starting initrd-parse-etc.service... May 13 00:17:36.571593 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:17:36.571680 systemd[1]: Finished initrd-parse-etc.service. May 13 00:17:36.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.573077 systemd[1]: Reached target initrd-fs.target. May 13 00:17:36.574221 systemd[1]: Reached target initrd.target. May 13 00:17:36.575339 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 00:17:36.576026 systemd[1]: Starting dracut-pre-pivot.service... May 13 00:17:36.586016 systemd[1]: Finished dracut-pre-pivot.service. 
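The Ignition "files" stage above reports each operation twice, once as `op(N): [started] ...` and once as `op(N): [finished] ...`, before closing with "files passed" and "Ignition finished successfully". A minimal sketch for pairing those markers when reading a saved copy of this journal (the `boot.log` path is hypothetical, and one record per line is assumed):

```python
import re
from collections import defaultdict

# Matches Ignition operation markers as they appear above, e.g.
#   ignition[856]: INFO : files: ... op(4): [started] writing file "..."
OP_RE = re.compile(r'ignition\[\d+\]: \w+ : .*?(op\([0-9a-f]+\)): \[(started|finished)\] (.+)')

def check_ignition_ops(journal_text: str) -> None:
    """Report every op() id and whether both its started and finished markers were seen."""
    events = defaultdict(list)                      # op id -> [(state, description), ...]
    for line in journal_text.splitlines():
        m = OP_RE.search(line)
        if m:
            op_id, state, desc = m.groups()
            events[op_id].append((state, desc.strip()))
    for op_id, seen in events.items():
        states = {s for s, _ in seen}
        status = "ok" if {"started", "finished"} <= states else "INCOMPLETE"
        print(f"{op_id}: {status} - {seen[-1][1]}")

if __name__ == "__main__":
    with open("boot.log") as f:                     # hypothetical capture of the journal above
        check_ignition_ops(f.read())
```

On this boot every operation, from writing /etc/flatcar-cgroupv1 through fetching the helm tarball and the kubernetes sysext image, reaches its [finished] marker.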
May 13 00:17:36.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.587362 systemd[1]: Starting initrd-cleanup.service... May 13 00:17:36.594825 systemd[1]: Stopped target nss-lookup.target. May 13 00:17:36.595549 systemd[1]: Stopped target remote-cryptsetup.target. May 13 00:17:36.596791 systemd[1]: Stopped target timers.target. May 13 00:17:36.598040 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:17:36.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.598147 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:17:36.599231 systemd[1]: Stopped target initrd.target. May 13 00:17:36.600463 systemd[1]: Stopped target basic.target. May 13 00:17:36.601552 systemd[1]: Stopped target ignition-complete.target. May 13 00:17:36.602686 systemd[1]: Stopped target ignition-diskful.target. May 13 00:17:36.603828 systemd[1]: Stopped target initrd-root-device.target. May 13 00:17:36.605078 systemd[1]: Stopped target remote-fs.target. May 13 00:17:36.606260 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:17:36.607521 systemd[1]: Stopped target sysinit.target. May 13 00:17:36.608626 systemd[1]: Stopped target local-fs.target. May 13 00:17:36.609791 systemd[1]: Stopped target local-fs-pre.target. May 13 00:17:36.610899 systemd[1]: Stopped target swap.target. May 13 00:17:36.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.612039 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:17:36.612140 systemd[1]: Stopped dracut-pre-mount.service. May 13 00:17:36.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.613322 systemd[1]: Stopped target cryptsetup.target. May 13 00:17:36.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.614365 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:17:36.614461 systemd[1]: Stopped dracut-initqueue.service. May 13 00:17:36.615718 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:17:36.615805 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:17:36.616942 systemd[1]: Stopped target paths.target. May 13 00:17:36.618001 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:17:36.620479 systemd[1]: Stopped systemd-ask-password-console.path. May 13 00:17:36.621195 systemd[1]: Stopped target slices.target. May 13 00:17:36.622377 systemd[1]: Stopped target sockets.target. May 13 00:17:36.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.623409 systemd[1]: iscsid.socket: Deactivated successfully. 
May 13 00:17:36.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.623477 systemd[1]: Closed iscsid.socket. May 13 00:17:36.624797 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:17:36.624859 systemd[1]: Closed iscsiuio.socket. May 13 00:17:36.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.625974 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:17:36.626080 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:17:36.627199 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:17:36.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.627311 systemd[1]: Stopped ignition-files.service. May 13 00:17:36.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.629199 systemd[1]: Stopping ignition-mount.service... May 13 00:17:36.630137 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:17:36.638792 ignition[896]: INFO : Ignition 2.14.0 May 13 00:17:36.638792 ignition[896]: INFO : Stage: umount May 13 00:17:36.638792 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:17:36.638792 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:17:36.638792 ignition[896]: INFO : umount: umount passed May 13 00:17:36.638792 ignition[896]: INFO : Ignition finished successfully May 13 00:17:36.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.630264 systemd[1]: Stopped kmod-static-nodes.service. May 13 00:17:36.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.632172 systemd[1]: Stopping sysroot-boot.service... May 13 00:17:36.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.633577 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 13 00:17:36.633692 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:17:36.635009 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:17:36.635095 systemd[1]: Stopped dracut-pre-trigger.service. May 13 00:17:36.639483 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:17:36.639560 systemd[1]: Finished initrd-cleanup.service. May 13 00:17:36.640681 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:17:36.640751 systemd[1]: Stopped ignition-mount.service. May 13 00:17:36.642511 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:17:36.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.642764 systemd[1]: Stopped target network.target. May 13 00:17:36.644396 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:17:36.644450 systemd[1]: Stopped ignition-disks.service. May 13 00:17:36.645810 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:17:36.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.645849 systemd[1]: Stopped ignition-kargs.service. May 13 00:17:36.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.647100 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:17:36.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.647134 systemd[1]: Stopped ignition-setup.service. May 13 00:17:36.648425 systemd[1]: Stopping systemd-networkd.service... May 13 00:17:36.649590 systemd[1]: Stopping systemd-resolved.service... May 13 00:17:36.655288 systemd-networkd[735]: eth0: DHCPv6 lease lost May 13 00:17:36.672000 audit: BPF prog-id=9 op=UNLOAD May 13 00:17:36.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.656245 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:17:36.656349 systemd[1]: Stopped systemd-networkd.service. May 13 00:17:36.657789 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:17:36.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.657819 systemd[1]: Closed systemd-networkd.socket. May 13 00:17:36.678000 audit: BPF prog-id=6 op=UNLOAD May 13 00:17:36.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.661318 systemd[1]: Stopping network-cleanup.service... May 13 00:17:36.662878 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:17:36.662936 systemd[1]: Stopped parse-ip-for-networkd.service. 
May 13 00:17:36.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.664290 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:17:36.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.664334 systemd[1]: Stopped systemd-sysctl.service. May 13 00:17:36.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.666390 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:17:36.666432 systemd[1]: Stopped systemd-modules-load.service. May 13 00:17:36.667425 systemd[1]: Stopping systemd-udevd.service... May 13 00:17:36.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.671968 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:17:36.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.672457 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:17:36.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.672552 systemd[1]: Stopped systemd-resolved.service. May 13 00:17:36.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:36.675398 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:17:36.675525 systemd[1]: Stopped systemd-udevd.service. May 13 00:17:36.677111 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:17:36.677200 systemd[1]: Stopped network-cleanup.service. May 13 00:17:36.678570 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:17:36.678607 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:17:36.679970 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:17:36.680015 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:17:36.681467 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:17:36.681514 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:17:36.683123 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
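The `audit[1]: SERVICE_START` / `SERVICE_STOP` records interleaved above carry the affected unit in their `msg='unit=...'` field, so the teardown order of the initrd units can be recovered mechanically. A short sketch under the same assumption as before (journal text saved to a hypothetical `boot.log`, one record per line):

```python
import re

# Matches kernel audit service records as they appear above, e.g.
#   audit[1]: SERVICE_STOP pid=1 ... msg='unit=ignition-mount comm="systemd" ...'
AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?msg='unit=([^ ]+) ")

def unit_events(journal_text: str):
    """Yield (event, unit) pairs in journal order."""
    for line in journal_text.splitlines():
        m = AUDIT_RE.search(line)
        if m:
            yield m.group(1), m.group(2)

if __name__ == "__main__":
    with open("boot.log") as f:                     # hypothetical capture of the journal above
        for event, unit in unit_events(f.read()):
            if event == "SERVICE_STOP":
                print("stopped:", unit)
```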
May 13 00:17:36.704000 audit: BPF prog-id=5 op=UNLOAD May 13 00:17:36.704000 audit: BPF prog-id=4 op=UNLOAD May 13 00:17:36.704000 audit: BPF prog-id=3 op=UNLOAD May 13 00:17:36.705000 audit: BPF prog-id=8 op=UNLOAD May 13 00:17:36.705000 audit: BPF prog-id=7 op=UNLOAD May 13 00:17:36.683163 systemd[1]: Stopped dracut-cmdline.service. May 13 00:17:36.684488 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:17:36.684528 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 00:17:36.686743 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 00:17:36.688318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:17:36.688371 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 00:17:36.690061 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:17:36.690147 systemd[1]: Stopped sysroot-boot.service. May 13 00:17:36.691119 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:17:36.691163 systemd[1]: Stopped initrd-setup-root.service. May 13 00:17:36.692960 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:17:36.693054 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 00:17:36.694507 systemd[1]: Reached target initrd-switch-root.target. May 13 00:17:36.696451 systemd[1]: Starting initrd-switch-root.service... May 13 00:17:36.701711 systemd[1]: Switching root. May 13 00:17:36.720693 iscsid[744]: iscsid shutting down. May 13 00:17:36.721355 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 13 00:17:36.721406 systemd-journald[290]: Journal stopped May 13 00:17:38.720413 kernel: SELinux: Class mctp_socket not defined in policy. May 13 00:17:38.720463 kernel: SELinux: Class anon_inode not defined in policy. May 13 00:17:38.720476 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 00:17:38.720486 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:17:38.720496 kernel: SELinux: policy capability open_perms=1 May 13 00:17:38.720506 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:17:38.720516 kernel: SELinux: policy capability always_check_network=0 May 13 00:17:38.720525 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:17:38.720534 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:17:38.720544 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:17:38.720553 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:17:38.720570 systemd[1]: Successfully loaded SELinux policy in 35.500ms. May 13 00:17:38.720589 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.075ms. May 13 00:17:38.720607 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:17:38.720618 systemd[1]: Detected virtualization kvm. May 13 00:17:38.720629 systemd[1]: Detected architecture arm64. May 13 00:17:38.720639 systemd[1]: Detected first boot. May 13 00:17:38.720649 systemd[1]: Initializing machine ID from VM UUID. May 13 00:17:38.720659 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 00:17:38.720671 systemd[1]: Populated /etc with preset unit settings. 
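The `systemd 252 running in system mode (...)` line above lists compile-time features with a `+` for built-in and `-` for omitted. Splitting the string makes the feature set easy to query; the tokens below are copied from that line (the trailing `default-hierarchy=unified` is left out since it is a setting, not a feature flag):

```python
# Feature tokens copied from the "systemd 252 running in system mode" line above.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")

enabled  = {tok[1:] for tok in FEATURES.split() if tok[0] == "+"}
disabled = {tok[1:] for tok in FEATURES.split() if tok[0] == "-"}

# -BPF_FRAMEWORK is consistent with the later journald warning that the
# system "does not support BPF/cgroup firewalling".
print("BPF_FRAMEWORK built in:", "BPF_FRAMEWORK" in enabled)   # False
```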
May 13 00:17:38.720682 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:17:38.720693 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:17:38.720704 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:17:38.720716 systemd[1]: Queued start job for default target multi-user.target. May 13 00:17:38.720727 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 00:17:38.720738 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 00:17:38.720749 systemd[1]: Created slice system-addon\x2drun.slice. May 13 00:17:38.720759 systemd[1]: Created slice system-getty.slice. May 13 00:17:38.720769 systemd[1]: Created slice system-modprobe.slice. May 13 00:17:38.720782 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 00:17:38.720793 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:17:38.720803 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:17:38.720814 systemd[1]: Created slice user.slice. May 13 00:17:38.720827 systemd[1]: Started systemd-ask-password-console.path. May 13 00:17:38.720838 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:17:38.720848 systemd[1]: Set up automount boot.automount. May 13 00:17:38.720858 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:17:38.720868 systemd[1]: Reached target integritysetup.target. May 13 00:17:38.720878 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:17:38.720889 systemd[1]: Reached target remote-fs.target. May 13 00:17:38.720899 systemd[1]: Reached target slices.target. May 13 00:17:38.720911 systemd[1]: Reached target swap.target. May 13 00:17:38.720921 systemd[1]: Reached target torcx.target. May 13 00:17:38.720931 systemd[1]: Reached target veritysetup.target. May 13 00:17:38.720942 systemd[1]: Listening on systemd-coredump.socket. May 13 00:17:38.720952 systemd[1]: Listening on systemd-initctl.socket. May 13 00:17:38.720962 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:17:38.720973 kernel: kauditd_printk_skb: 77 callbacks suppressed May 13 00:17:38.720990 kernel: audit: type=1400 audit(1747095458.629:81): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:17:38.721008 kernel: audit: type=1335 audit(1747095458.629:82): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 13 00:17:38.721018 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:17:38.721029 systemd[1]: Listening on systemd-journald.socket. May 13 00:17:38.721039 systemd[1]: Listening on systemd-networkd.socket. May 13 00:17:38.721049 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:17:38.721060 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:17:38.721079 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:17:38.721090 systemd[1]: Mounting dev-hugepages.mount... May 13 00:17:38.721100 systemd[1]: Mounting dev-mqueue.mount... 
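Slice names such as `system-addon\x2dconfig.slice` above use systemd's unit-name escaping, where a literal `-` inside a component is written as `\x2d` because plain `-` acts as the hierarchy separator. The reverse mapping (what `systemd-escape --unescape` does for these names) fits in a few lines:

```python
import re

def systemd_unescape(name: str) -> str:
    r"""Reverse systemd's \xNN escaping in unit names, e.g. \x2d -> '-'."""
    return re.sub(r'\\x([0-9a-fA-F]{2})',
                  lambda m: chr(int(m.group(1), 16)), name)

# Names taken from the slices created above.
for unit in (r"system-addon\x2dconfig.slice",
             r"system-serial\x2dgetty.slice",
             r"system-systemd\x2dfsck.slice"):
    print(unit, "->", systemd_unescape(unit))
# system-addon\x2dconfig.slice -> system-addon-config.slice
```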
May 13 00:17:38.721112 systemd[1]: Mounting media.mount... May 13 00:17:38.721122 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:17:38.721136 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:17:38.721147 systemd[1]: Mounting tmp.mount... May 13 00:17:38.721157 systemd[1]: Starting flatcar-tmpfiles.service... May 13 00:17:38.721167 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:17:38.721178 systemd[1]: Starting kmod-static-nodes.service... May 13 00:17:38.721189 systemd[1]: Starting modprobe@configfs.service... May 13 00:17:38.721199 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:17:38.721209 systemd[1]: Starting modprobe@drm.service... May 13 00:17:38.721221 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:17:38.721231 systemd[1]: Starting modprobe@fuse.service... May 13 00:17:38.721241 systemd[1]: Starting modprobe@loop.service... May 13 00:17:38.721258 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:17:38.721270 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 13 00:17:38.721280 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 13 00:17:38.721292 systemd[1]: Starting systemd-journald.service... May 13 00:17:38.721304 systemd[1]: Starting systemd-modules-load.service... May 13 00:17:38.721315 systemd[1]: Starting systemd-network-generator.service... May 13 00:17:38.721326 systemd[1]: Starting systemd-remount-fs.service... May 13 00:17:38.721336 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:17:38.721346 systemd[1]: Mounted dev-hugepages.mount. May 13 00:17:38.721356 systemd[1]: Mounted dev-mqueue.mount. May 13 00:17:38.721366 systemd[1]: Mounted media.mount. May 13 00:17:38.721376 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:17:38.721386 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:17:38.721396 systemd[1]: Mounted tmp.mount. May 13 00:17:38.721408 systemd[1]: Finished kmod-static-nodes.service. May 13 00:17:38.721419 kernel: audit: type=1130 audit(1747095458.708:83): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.721429 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:17:38.721439 systemd[1]: Finished modprobe@configfs.service. May 13 00:17:38.721450 kernel: audit: type=1130 audit(1747095458.715:84): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.721459 kernel: audit: type=1131 audit(1747095458.715:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.721469 kernel: fuse: init (API version 7.34) May 13 00:17:38.721479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:17:38.721490 systemd[1]: Finished modprobe@dm_mod.service. 
May 13 00:17:38.721501 kernel: audit: type=1305 audit(1747095458.719:86): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:17:38.721512 systemd-journald[1024]: Journal started May 13 00:17:38.721551 systemd-journald[1024]: Runtime Journal (/run/log/journal/38c535d99e0c4fd6b1275beeef027dd9) is 6.0M, max 48.7M, 42.6M free. May 13 00:17:38.629000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:17:38.629000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 13 00:17:38.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.719000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:17:38.719000 audit[1024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffed219660 a2=4000 a3=1 items=0 ppid=1 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:17:38.729559 kernel: loop: module loaded May 13 00:17:38.729607 kernel: audit: type=1300 audit(1747095458.719:86): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffed219660 a2=4000 a3=1 items=0 ppid=1 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:17:38.729625 kernel: audit: type=1327 audit(1747095458.719:86): proctitle="/usr/lib/systemd/systemd-journald" May 13 00:17:38.729639 kernel: audit: type=1130 audit(1747095458.726:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.729651 systemd[1]: Started systemd-journald.service. May 13 00:17:38.729666 kernel: audit: type=1131 audit(1747095458.726:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.719000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:17:38.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:17:38.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.733224 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:17:38.733531 systemd[1]: Finished modprobe@drm.service. May 13 00:17:38.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.734388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:17:38.734917 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:17:38.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.735895 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:17:38.736108 systemd[1]: Finished modprobe@fuse.service. May 13 00:17:38.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.737031 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:17:38.737234 systemd[1]: Finished modprobe@loop.service. May 13 00:17:38.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.738199 systemd[1]: Finished systemd-modules-load.service. May 13 00:17:38.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.740990 systemd[1]: Finished systemd-network-generator.service. 
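The `modprobe@configfs.service`, `modprobe@dm_mod.service`, `modprobe@drm.service`, `modprobe@efi_pstore.service`, `modprobe@fuse.service` and `modprobe@loop.service` entries above are instances of the oneshot `modprobe@.service` template, which loads the named kernel module and then exits, which is why each instance is logged as finished and deactivated almost immediately. A small Linux-only sketch to confirm afterwards that the modules mentioned in the log (`fuse: init ...`, `loop: module loaded`) are present:

```python
def loaded_modules() -> set[str]:
    """Names of currently loaded kernel modules (first column of /proc/modules)."""
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f if line.strip()}

if __name__ == "__main__":
    mods = loaded_modules()
    for name in ("fuse", "loop", "dm_mod"):
        state = "loaded" if name in mods else "not listed (may be built in)"
        print(f"{name}: {state}")
```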
May 13 00:17:38.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.742731 systemd[1]: Finished systemd-remount-fs.service. May 13 00:17:38.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.743894 systemd[1]: Reached target network-pre.target. May 13 00:17:38.745799 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:17:38.747669 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:17:38.748486 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:17:38.750688 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:17:38.752572 systemd[1]: Starting systemd-journal-flush.service... May 13 00:17:38.753456 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:17:38.754490 systemd[1]: Starting systemd-random-seed.service... May 13 00:17:38.755384 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:17:38.756455 systemd[1]: Starting systemd-sysctl.service... May 13 00:17:38.759677 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:17:38.760480 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:17:38.762913 systemd-journald[1024]: Time spent on flushing to /var/log/journal/38c535d99e0c4fd6b1275beeef027dd9 is 15.455ms for 924 entries. May 13 00:17:38.762913 systemd-journald[1024]: System Journal (/var/log/journal/38c535d99e0c4fd6b1275beeef027dd9) is 8.0M, max 195.6M, 187.6M free. May 13 00:17:38.787522 systemd-journald[1024]: Received client request to flush runtime journal. May 13 00:17:38.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.768404 systemd[1]: Finished systemd-random-seed.service. May 13 00:17:38.788857 udevadm[1073]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 00:17:38.769144 systemd[1]: Reached target first-boot-complete.target. May 13 00:17:38.773315 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:17:38.775025 systemd[1]: Starting systemd-udev-settle.service... May 13 00:17:38.776598 systemd[1]: Finished systemd-sysctl.service. 
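The flush statistics above (15.455 ms spent on 924 entries while writing the runtime journal out to /var/log/journal) come to roughly 17 microseconds per entry; the arithmetic, for reference:

```python
# Figures copied from the systemd-journald flush message above.
ms_total, entries = 15.455, 924
print(f"{ms_total / entries * 1000:.1f} us per entry")   # ~16.7 us
```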
May 13 00:17:38.784170 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:17:38.786191 systemd[1]: Starting systemd-sysusers.service... May 13 00:17:38.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.789699 systemd[1]: Finished systemd-journal-flush.service. May 13 00:17:38.808083 systemd[1]: Finished systemd-sysusers.service. May 13 00:17:38.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:38.809924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:17:38.828652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:17:38.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.120323 systemd[1]: Finished systemd-hwdb-update.service. May 13 00:17:39.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.122176 systemd[1]: Starting systemd-udevd.service... May 13 00:17:39.143237 systemd-udevd[1090]: Using default interface naming scheme 'v252'. May 13 00:17:39.154705 systemd[1]: Started systemd-udevd.service. May 13 00:17:39.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.156714 systemd[1]: Starting systemd-networkd.service... May 13 00:17:39.162193 systemd[1]: Starting systemd-userdbd.service... May 13 00:17:39.174917 systemd[1]: Found device dev-ttyAMA0.device. May 13 00:17:39.215648 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:17:39.217141 systemd[1]: Started systemd-userdbd.service. May 13 00:17:39.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.259218 systemd-networkd[1097]: lo: Link UP May 13 00:17:39.259228 systemd-networkd[1097]: lo: Gained carrier May 13 00:17:39.259565 systemd-networkd[1097]: Enumeration completed May 13 00:17:39.259667 systemd[1]: Started systemd-networkd.service. May 13 00:17:39.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.260715 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:17:39.261738 systemd-networkd[1097]: eth0: Link UP May 13 00:17:39.261747 systemd-networkd[1097]: eth0: Gained carrier May 13 00:17:39.265735 systemd[1]: Finished systemd-udev-settle.service. 
May 13 00:17:39.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.267774 systemd[1]: Starting lvm2-activation-early.service... May 13 00:17:39.280403 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:17:39.282942 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:17:39.309081 systemd[1]: Finished lvm2-activation-early.service. May 13 00:17:39.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.310067 systemd[1]: Reached target cryptsetup.target. May 13 00:17:39.311913 systemd[1]: Starting lvm2-activation.service... May 13 00:17:39.315402 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:17:39.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.350085 systemd[1]: Finished lvm2-activation.service. May 13 00:17:39.351008 systemd[1]: Reached target local-fs-pre.target. May 13 00:17:39.351854 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:17:39.351884 systemd[1]: Reached target local-fs.target. May 13 00:17:39.352675 systemd[1]: Reached target machines.target. May 13 00:17:39.354539 systemd[1]: Starting ldconfig.service... May 13 00:17:39.355495 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:17:39.355547 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:17:39.356682 systemd[1]: Starting systemd-boot-update.service... May 13 00:17:39.358364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:17:39.360226 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:17:39.362126 systemd[1]: Starting systemd-sysext.service... May 13 00:17:39.363182 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl) May 13 00:17:39.364194 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:17:39.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.369557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:17:39.373750 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:17:39.378832 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:17:39.379084 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:17:39.390299 kernel: loop0: detected capacity change from 0 to 194096 May 13 00:17:39.431611 systemd[1]: Finished systemd-machine-id-commit.service. 
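The lease line above gives eth0 the DHCPv4 address 10.0.0.25/16 with gateway 10.0.0.1. A quick sanity check with the standard `ipaddress` module, using the values from the log:

```python
import ipaddress

# Values copied from the systemd-networkd lease message above.
iface = ipaddress.ip_interface("10.0.0.25/16")
gateway = ipaddress.ip_address("10.0.0.1")

print("network:", iface.network)                        # 10.0.0.0/16
print("gateway in network:", gateway in iface.network)  # True
```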
May 13 00:17:39.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.435139 systemd-fsck[1138]: fsck.fat 4.2 (2021-01-31) May 13 00:17:39.435139 systemd-fsck[1138]: /dev/vda1: 236 files, 117310/258078 clusters May 13 00:17:39.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.437918 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:17:39.440280 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:17:39.453290 kernel: loop1: detected capacity change from 0 to 194096 May 13 00:17:39.459096 (sd-sysext)[1148]: Using extensions 'kubernetes'. May 13 00:17:39.460009 (sd-sysext)[1148]: Merged extensions into '/usr'. May 13 00:17:39.474461 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:17:39.475612 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:17:39.477276 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:17:39.478962 systemd[1]: Starting modprobe@loop.service... May 13 00:17:39.479668 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:17:39.479793 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:17:39.480548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:17:39.480691 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:17:39.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.481822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:17:39.481950 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:17:39.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.483144 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:17:39.485532 systemd[1]: Finished modprobe@loop.service. May 13 00:17:39.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:17:39.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.486637 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:17:39.486732 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:17:39.534141 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:17:39.538382 systemd[1]: Finished ldconfig.service. May 13 00:17:39.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.699803 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:17:39.701568 systemd[1]: Mounting boot.mount... May 13 00:17:39.703310 systemd[1]: Mounting usr-share-oem.mount... May 13 00:17:39.709632 systemd[1]: Mounted boot.mount. May 13 00:17:39.710365 systemd[1]: Mounted usr-share-oem.mount. May 13 00:17:39.712150 systemd[1]: Finished systemd-sysext.service. May 13 00:17:39.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.714756 systemd[1]: Starting ensure-sysext.service... May 13 00:17:39.716393 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:17:39.717482 systemd[1]: Finished systemd-boot-update.service. May 13 00:17:39.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.721470 systemd[1]: Reloading. May 13 00:17:39.725495 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:17:39.726576 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:17:39.727832 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:17:39.760975 /usr/lib/systemd/system-generators/torcx-generator[1186]: time="2025-05-13T00:17:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:17:39.761001 /usr/lib/systemd/system-generators/torcx-generator[1186]: time="2025-05-13T00:17:39Z" level=info msg="torcx already run" May 13 00:17:39.823062 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:17:39.823080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
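The `(sd-sysext)` messages above show the 'kubernetes' extension being found and merged into /usr; the image was staged earlier in this log by Ignition as the symlink /sysroot/etc/extensions/kubernetes.raw pointing at /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw. A minimal sketch that lists extension images under /etc/extensions and checks that each symlink target exists (paths are the ones from this log; other machines will differ):

```python
from pathlib import Path

def check_extensions(ext_dir: str = "/etc/extensions") -> None:
    """Print each *.raw sysext image under ext_dir and whether its (resolved) target exists."""
    for entry in sorted(Path(ext_dir).glob("*.raw")):
        target = entry.resolve(strict=False)
        status = "ok" if target.exists() else "MISSING TARGET"
        print(f"{entry} -> {target} [{status}]")

if __name__ == "__main__":
    check_extensions()   # on the machine in this log: kubernetes.raw -> kubernetes-v1.30.1-arm64.raw
```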
May 13 00:17:39.838400 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:17:39.883230 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:17:39.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.886794 systemd[1]: Starting audit-rules.service... May 13 00:17:39.888419 systemd[1]: Starting clean-ca-certificates.service... May 13 00:17:39.890133 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:17:39.892451 systemd[1]: Starting systemd-resolved.service... May 13 00:17:39.894566 systemd[1]: Starting systemd-timesyncd.service... May 13 00:17:39.896366 systemd[1]: Starting systemd-update-utmp.service... May 13 00:17:39.897674 systemd[1]: Finished clean-ca-certificates.service. May 13 00:17:39.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.901624 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:17:39.902839 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:17:39.905180 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:17:39.905000 audit[1239]: SYSTEM_BOOT pid=1239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:17:39.907175 systemd[1]: Starting modprobe@loop.service... May 13 00:17:39.908809 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:17:39.908932 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:17:39.909030 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:17:39.909910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:17:39.910061 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:17:39.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.911420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:17:39.911594 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:17:39.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:17:39.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.914375 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:17:39.915051 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:17:39.915218 systemd[1]: Finished modprobe@loop.service. May 13 00:17:39.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.916616 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:17:39.917719 systemd[1]: Finished systemd-update-utmp.service. May 13 00:17:39.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.927511 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:17:39.928868 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:17:39.931382 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:17:39.933334 systemd[1]: Starting modprobe@loop.service... May 13 00:17:39.934158 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:17:39.934325 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:17:39.934416 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:17:39.935299 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:17:39.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:17:39.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.940016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:17:39.940168 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:17:39.941503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:17:39.941638 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:17:39.942854 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:17:39.943004 systemd[1]: Finished modprobe@loop.service. May 13 00:17:39.946192 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:17:39.948534 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:17:39.950487 systemd[1]: Starting modprobe@drm.service... May 13 00:17:39.952788 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:17:39.955226 systemd[1]: Starting modprobe@loop.service... May 13 00:17:39.956188 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:17:39.956510 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:17:39.958074 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:17:39.960677 systemd[1]: Starting systemd-update-done.service... May 13 00:17:39.961626 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:17:39.962941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:17:39.963311 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:17:39.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.964877 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:17:39.965220 systemd[1]: Finished modprobe@drm.service. May 13 00:17:39.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:17:39.966676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:17:39.966817 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:17:39.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.968189 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:17:39.968516 systemd[1]: Finished modprobe@loop.service. May 13 00:17:39.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.970382 systemd[1]: Finished systemd-update-done.service. May 13 00:17:39.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.971858 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:17:39.971925 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:17:39.973456 systemd[1]: Finished ensure-sysext.service. May 13 00:17:39.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:39.976523 systemd[1]: Started systemd-timesyncd.service. May 13 00:17:40.377759 systemd-timesyncd[1238]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:17:40.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:17:40.377831 systemd-timesyncd[1238]: Initial clock synchronization to Tue 2025-05-13 00:17:40.377684 UTC. May 13 00:17:40.378232 systemd[1]: Reached target time-set.target. May 13 00:17:40.379633 systemd-resolved[1236]: Positive Trust Anchors: May 13 00:17:40.379646 systemd-resolved[1236]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:17:40.379673 systemd-resolved[1236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:17:40.385000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:17:40.385000 audit[1283]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe57ab110 a2=420 a3=0 items=0 ppid=1232 pid=1283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:17:40.385000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:17:40.386580 augenrules[1283]: No rules May 13 00:17:40.387476 systemd[1]: Finished audit-rules.service. May 13 00:17:40.403906 systemd-resolved[1236]: Defaulting to hostname 'linux'. May 13 00:17:40.407733 systemd[1]: Started systemd-resolved.service. May 13 00:17:40.408452 systemd[1]: Reached target network.target. May 13 00:17:40.409053 systemd[1]: Reached target nss-lookup.target. May 13 00:17:40.409660 systemd[1]: Reached target sysinit.target. May 13 00:17:40.410330 systemd[1]: Started motdgen.path. May 13 00:17:40.410893 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:17:40.411901 systemd[1]: Started logrotate.timer. May 13 00:17:40.412572 systemd[1]: Started mdadm.timer. May 13 00:17:40.413131 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:17:40.413770 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:17:40.413798 systemd[1]: Reached target paths.target. May 13 00:17:40.414342 systemd[1]: Reached target timers.target. May 13 00:17:40.415254 systemd[1]: Listening on dbus.socket. May 13 00:17:40.417197 systemd[1]: Starting docker.socket... May 13 00:17:40.419012 systemd[1]: Listening on sshd.socket. May 13 00:17:40.419705 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:17:40.420111 systemd[1]: Listening on docker.socket. May 13 00:17:40.420733 systemd[1]: Reached target sockets.target. May 13 00:17:40.421345 systemd[1]: Reached target basic.target. May 13 00:17:40.422083 systemd[1]: System is tainted: cgroupsv1 May 13 00:17:40.422140 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:17:40.422161 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:17:40.423260 systemd[1]: Starting containerd.service... May 13 00:17:40.424997 systemd[1]: Starting dbus.service... May 13 00:17:40.426843 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:17:40.428903 systemd[1]: Starting extend-filesystems.service... 
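The SYSCALL/PROCTITLE audit records above carry the auditctl command line hex-encoded with NUL-separated arguments; as a minimal illustrative sketch (standard Python only, the hex literal copied from the PROCTITLE field above), it decodes like this:

    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = [a.decode() for a in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']

which is consistent with audit-rules.service loading /etc/audit/audit.rules and augenrules reporting "No rules".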
May 13 00:17:40.429626 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:17:40.430980 systemd[1]: Starting motdgen.service... May 13 00:17:40.433021 systemd[1]: Starting prepare-helm.service... May 13 00:17:40.434851 jq[1294]: false May 13 00:17:40.435069 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:17:40.439745 systemd[1]: Starting sshd-keygen.service... May 13 00:17:40.442307 systemd[1]: Starting systemd-logind.service... May 13 00:17:40.442968 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:17:40.443118 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:17:40.444453 systemd[1]: Starting update-engine.service... May 13 00:17:40.446173 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:17:40.448586 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:17:40.448858 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:17:40.453136 jq[1310]: true May 13 00:17:40.454760 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:17:40.455069 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:17:40.465272 jq[1317]: true May 13 00:17:40.471027 extend-filesystems[1295]: Found loop1 May 13 00:17:40.472755 tar[1314]: linux-arm64/helm May 13 00:17:40.473139 extend-filesystems[1295]: Found vda May 13 00:17:40.474438 extend-filesystems[1295]: Found vda1 May 13 00:17:40.475147 extend-filesystems[1295]: Found vda2 May 13 00:17:40.476016 extend-filesystems[1295]: Found vda3 May 13 00:17:40.476698 extend-filesystems[1295]: Found usr May 13 00:17:40.477439 extend-filesystems[1295]: Found vda4 May 13 00:17:40.478094 extend-filesystems[1295]: Found vda6 May 13 00:17:40.478684 extend-filesystems[1295]: Found vda7 May 13 00:17:40.480150 extend-filesystems[1295]: Found vda9 May 13 00:17:40.480731 extend-filesystems[1295]: Checking size of /dev/vda9 May 13 00:17:40.484901 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:17:40.485217 systemd[1]: Finished motdgen.service. May 13 00:17:40.512412 dbus-daemon[1293]: [system] SELinux support is enabled May 13 00:17:40.512730 systemd[1]: Started dbus.service. May 13 00:17:40.515157 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:17:40.515186 systemd[1]: Reached target system-config.target. May 13 00:17:40.515848 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:17:40.515870 systemd[1]: Reached target user-config.target. May 13 00:17:40.516056 extend-filesystems[1295]: Resized partition /dev/vda9 May 13 00:17:40.530778 extend-filesystems[1351]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:17:40.538896 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:17:40.547971 update_engine[1309]: I0513 00:17:40.547645 1309 main.cc:92] Flatcar Update Engine starting May 13 00:17:40.549654 systemd[1]: Started update-engine.service. 
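The EXT4-fs message above, together with the resize2fs report that follows, shows /dev/vda9 being grown online from 553472 to 1864699 blocks of 4 KiB each; a quick size check, illustrative only:

    BLOCK = 4096  # 4 KiB blocks, per the "(4k)" note in the resize2fs output
    for blocks in (553472, 1864699):
        print(blocks, "blocks =", round(blocks * BLOCK / 2**30, 2), "GiB")
    # 553472 blocks = 2.11 GiB, 1864699 blocks = 7.11 GiB

i.e. the root filesystem grows from roughly 2.1 GiB to roughly 7.1 GiB during first boot.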
May 13 00:17:40.552029 update_engine[1309]: I0513 00:17:40.550132 1309 update_check_scheduler.cc:74] Next update check in 7m27s May 13 00:17:40.549899 systemd-logind[1307]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:17:40.550825 systemd-logind[1307]: New seat seat0. May 13 00:17:40.551904 systemd[1]: Started locksmithd.service. May 13 00:17:40.553296 bash[1342]: Updated "/home/core/.ssh/authorized_keys" May 13 00:17:40.554304 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:17:40.555510 systemd[1]: Started systemd-logind.service. May 13 00:17:40.558860 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:17:40.581379 extend-filesystems[1351]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:17:40.581379 extend-filesystems[1351]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:17:40.581379 extend-filesystems[1351]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:17:40.585423 extend-filesystems[1295]: Resized filesystem in /dev/vda9 May 13 00:17:40.584572 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:17:40.584823 systemd[1]: Finished extend-filesystems.service. May 13 00:17:40.590072 env[1320]: time="2025-05-13T00:17:40.590024302Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:17:40.608896 env[1320]: time="2025-05-13T00:17:40.608789262Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:17:40.609003 env[1320]: time="2025-05-13T00:17:40.608962822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:17:40.610841 env[1320]: time="2025-05-13T00:17:40.610135182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:17:40.610841 env[1320]: time="2025-05-13T00:17:40.610169622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:17:40.610841 env[1320]: time="2025-05-13T00:17:40.610420862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:17:40.610841 env[1320]: time="2025-05-13T00:17:40.610439782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:17:40.610841 env[1320]: time="2025-05-13T00:17:40.610452902Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:17:40.610841 env[1320]: time="2025-05-13T00:17:40.610462982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:17:40.610841 env[1320]: time="2025-05-13T00:17:40.610548662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:17:40.611039 env[1320]: time="2025-05-13T00:17:40.610846902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 May 13 00:17:40.611039 env[1320]: time="2025-05-13T00:17:40.610998822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:17:40.611039 env[1320]: time="2025-05-13T00:17:40.611015982Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:17:40.611097 env[1320]: time="2025-05-13T00:17:40.611069022Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:17:40.611097 env[1320]: time="2025-05-13T00:17:40.611082182Z" level=info msg="metadata content store policy set" policy=shared May 13 00:17:40.615688 env[1320]: time="2025-05-13T00:17:40.615652342Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:17:40.615688 env[1320]: time="2025-05-13T00:17:40.615687662Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:17:40.615790 env[1320]: time="2025-05-13T00:17:40.615701222Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:17:40.615790 env[1320]: time="2025-05-13T00:17:40.615740582Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:17:40.615790 env[1320]: time="2025-05-13T00:17:40.615755022Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:17:40.615790 env[1320]: time="2025-05-13T00:17:40.615768622Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:17:40.615907 env[1320]: time="2025-05-13T00:17:40.615780742Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:17:40.616199 env[1320]: time="2025-05-13T00:17:40.616175382Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:17:40.616240 env[1320]: time="2025-05-13T00:17:40.616201422Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:17:40.616240 env[1320]: time="2025-05-13T00:17:40.616215582Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:17:40.616240 env[1320]: time="2025-05-13T00:17:40.616229542Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:17:40.616296 env[1320]: time="2025-05-13T00:17:40.616242822Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:17:40.616379 env[1320]: time="2025-05-13T00:17:40.616357142Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:17:40.616454 env[1320]: time="2025-05-13T00:17:40.616438542Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:17:40.616871 env[1320]: time="2025-05-13T00:17:40.616836262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 13 00:17:40.616913 env[1320]: time="2025-05-13T00:17:40.616885702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:17:40.616913 env[1320]: time="2025-05-13T00:17:40.616900222Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:17:40.617040 env[1320]: time="2025-05-13T00:17:40.617023582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617071 env[1320]: time="2025-05-13T00:17:40.617041502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617071 env[1320]: time="2025-05-13T00:17:40.617054582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617071 env[1320]: time="2025-05-13T00:17:40.617067022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617135 env[1320]: time="2025-05-13T00:17:40.617079342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617135 env[1320]: time="2025-05-13T00:17:40.617091622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617135 env[1320]: time="2025-05-13T00:17:40.617103102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617135 env[1320]: time="2025-05-13T00:17:40.617113942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617135 env[1320]: time="2025-05-13T00:17:40.617127422Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:17:40.617287 env[1320]: time="2025-05-13T00:17:40.617265662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617318 env[1320]: time="2025-05-13T00:17:40.617288982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617318 env[1320]: time="2025-05-13T00:17:40.617302182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:17:40.617318 env[1320]: time="2025-05-13T00:17:40.617313542Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:17:40.617375 env[1320]: time="2025-05-13T00:17:40.617327742Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:17:40.617375 env[1320]: time="2025-05-13T00:17:40.617340342Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:17:40.617375 env[1320]: time="2025-05-13T00:17:40.617357782Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:17:40.617440 env[1320]: time="2025-05-13T00:17:40.617390502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:17:40.617645 env[1320]: time="2025-05-13T00:17:40.617590662Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:17:40.618253 env[1320]: time="2025-05-13T00:17:40.617649742Z" level=info msg="Connect containerd service" May 13 00:17:40.618253 env[1320]: time="2025-05-13T00:17:40.617681662Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:17:40.621770 env[1320]: time="2025-05-13T00:17:40.620376982Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:17:40.621770 env[1320]: time="2025-05-13T00:17:40.620772462Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:17:40.621770 env[1320]: time="2025-05-13T00:17:40.620827342Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 00:17:40.621770 env[1320]: time="2025-05-13T00:17:40.621541102Z" level=info msg="Start subscribing containerd event" May 13 00:17:40.621770 env[1320]: time="2025-05-13T00:17:40.621590662Z" level=info msg="Start recovering state" May 13 00:17:40.621770 env[1320]: time="2025-05-13T00:17:40.621650982Z" level=info msg="Start event monitor" May 13 00:17:40.621770 env[1320]: time="2025-05-13T00:17:40.621669662Z" level=info msg="Start snapshots syncer" May 13 00:17:40.621770 env[1320]: time="2025-05-13T00:17:40.621678982Z" level=info msg="Start cni network conf syncer for default" May 13 00:17:40.621770 env[1320]: time="2025-05-13T00:17:40.621686302Z" level=info msg="Start streaming server" May 13 00:17:40.620970 systemd[1]: Started containerd.service. May 13 00:17:40.622289 env[1320]: time="2025-05-13T00:17:40.622261542Z" level=info msg="containerd successfully booted in 0.033242s" May 13 00:17:40.653228 locksmithd[1352]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:17:40.749970 systemd-networkd[1097]: eth0: Gained IPv6LL May 13 00:17:40.752296 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:17:40.753344 systemd[1]: Reached target network-online.target. May 13 00:17:40.755461 systemd[1]: Starting kubelet.service... May 13 00:17:40.884099 tar[1314]: linux-arm64/LICENSE May 13 00:17:40.884263 tar[1314]: linux-arm64/README.md May 13 00:17:40.888439 systemd[1]: Finished prepare-helm.service. May 13 00:17:41.278269 systemd[1]: Started kubelet.service. May 13 00:17:41.806037 kubelet[1378]: E0513 00:17:41.805995 1378 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:17:41.807993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:17:41.808151 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:17:42.169718 sshd_keygen[1318]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:17:42.187660 systemd[1]: Finished sshd-keygen.service. May 13 00:17:42.189857 systemd[1]: Starting issuegen.service... May 13 00:17:42.194671 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:17:42.194892 systemd[1]: Finished issuegen.service. May 13 00:17:42.196858 systemd[1]: Starting systemd-user-sessions.service... May 13 00:17:42.202565 systemd[1]: Finished systemd-user-sessions.service. May 13 00:17:42.204564 systemd[1]: Started getty@tty1.service. May 13 00:17:42.206417 systemd[1]: Started serial-getty@ttyAMA0.service. May 13 00:17:42.207258 systemd[1]: Reached target getty.target. May 13 00:17:42.207940 systemd[1]: Reached target multi-user.target. May 13 00:17:42.210026 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:17:42.216236 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:17:42.216443 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:17:42.217310 systemd[1]: Startup finished in 4.837s (kernel) + 5.045s (userspace) = 9.882s. May 13 00:17:45.235342 systemd[1]: Created slice system-sshd.slice. May 13 00:17:45.236881 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:40606.service. 
May 13 00:17:45.280440 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 40606 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:17:45.284115 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:17:45.293587 systemd[1]: Created slice user-500.slice. May 13 00:17:45.294621 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:17:45.296414 systemd-logind[1307]: New session 1 of user core. May 13 00:17:45.303177 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:17:45.304455 systemd[1]: Starting user@500.service... May 13 00:17:45.307464 (systemd)[1410]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:17:45.365717 systemd[1410]: Queued start job for default target default.target. May 13 00:17:45.365946 systemd[1410]: Reached target paths.target. May 13 00:17:45.365962 systemd[1410]: Reached target sockets.target. May 13 00:17:45.365972 systemd[1410]: Reached target timers.target. May 13 00:17:45.365982 systemd[1410]: Reached target basic.target. May 13 00:17:45.366023 systemd[1410]: Reached target default.target. May 13 00:17:45.366046 systemd[1410]: Startup finished in 53ms. May 13 00:17:45.366117 systemd[1]: Started user@500.service. May 13 00:17:45.367033 systemd[1]: Started session-1.scope. May 13 00:17:45.417080 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:40620.service. May 13 00:17:45.462074 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 40620 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:17:45.463362 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:17:45.466864 systemd-logind[1307]: New session 2 of user core. May 13 00:17:45.467526 systemd[1]: Started session-2.scope. May 13 00:17:45.519472 sshd[1419]: pam_unix(sshd:session): session closed for user core May 13 00:17:45.521740 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:40626.service. May 13 00:17:45.522394 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:40620.service: Deactivated successfully. May 13 00:17:45.523400 systemd-logind[1307]: Session 2 logged out. Waiting for processes to exit. May 13 00:17:45.523506 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:17:45.524313 systemd-logind[1307]: Removed session 2. May 13 00:17:45.558667 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 40626 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:17:45.559760 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:17:45.562728 systemd-logind[1307]: New session 3 of user core. May 13 00:17:45.563478 systemd[1]: Started session-3.scope. May 13 00:17:45.612188 sshd[1424]: pam_unix(sshd:session): session closed for user core May 13 00:17:45.614241 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:40640.service. May 13 00:17:45.614876 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:40626.service: Deactivated successfully. May 13 00:17:45.616053 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:17:45.616078 systemd-logind[1307]: Session 3 logged out. Waiting for processes to exit. May 13 00:17:45.617002 systemd-logind[1307]: Removed session 3. 
May 13 00:17:45.650189 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 40640 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:17:45.651600 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:17:45.654867 systemd-logind[1307]: New session 4 of user core. May 13 00:17:45.655499 systemd[1]: Started session-4.scope. May 13 00:17:45.708088 sshd[1431]: pam_unix(sshd:session): session closed for user core May 13 00:17:45.710224 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:40650.service. May 13 00:17:45.710775 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:40640.service: Deactivated successfully. May 13 00:17:45.711879 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:17:45.712753 systemd-logind[1307]: Session 4 logged out. Waiting for processes to exit. May 13 00:17:45.715254 systemd-logind[1307]: Removed session 4. May 13 00:17:45.746564 sshd[1438]: Accepted publickey for core from 10.0.0.1 port 40650 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:17:45.747616 sshd[1438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:17:45.750631 systemd-logind[1307]: New session 5 of user core. May 13 00:17:45.752602 systemd[1]: Started session-5.scope. May 13 00:17:45.815172 sudo[1444]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:17:45.815396 sudo[1444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:17:45.875794 systemd[1]: Starting docker.service... May 13 00:17:45.956845 env[1456]: time="2025-05-13T00:17:45.956777822Z" level=info msg="Starting up" May 13 00:17:45.958174 env[1456]: time="2025-05-13T00:17:45.958146022Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:17:45.958256 env[1456]: time="2025-05-13T00:17:45.958242422Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:17:45.958432 env[1456]: time="2025-05-13T00:17:45.958416222Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:17:45.958523 env[1456]: time="2025-05-13T00:17:45.958503302Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:17:45.960613 env[1456]: time="2025-05-13T00:17:45.960576982Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:17:45.960613 env[1456]: time="2025-05-13T00:17:45.960598102Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:17:45.960613 env[1456]: time="2025-05-13T00:17:45.960610702Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:17:45.960726 env[1456]: time="2025-05-13T00:17:45.960620582Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:17:45.965404 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2543595626-merged.mount: Deactivated successfully. May 13 00:17:46.144980 env[1456]: time="2025-05-13T00:17:46.144898702Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 13 00:17:46.144980 env[1456]: time="2025-05-13T00:17:46.144927102Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 13 00:17:46.145369 env[1456]: time="2025-05-13T00:17:46.145347662Z" level=info msg="Loading containers: start." 
May 13 00:17:46.270822 kernel: Initializing XFRM netlink socket May 13 00:17:46.293229 env[1456]: time="2025-05-13T00:17:46.293197502Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 00:17:46.350782 systemd-networkd[1097]: docker0: Link UP May 13 00:17:46.376177 env[1456]: time="2025-05-13T00:17:46.376137382Z" level=info msg="Loading containers: done." May 13 00:17:46.401785 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3770606022-merged.mount: Deactivated successfully. May 13 00:17:46.402704 env[1456]: time="2025-05-13T00:17:46.402669182Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:17:46.403035 env[1456]: time="2025-05-13T00:17:46.403013302Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:17:46.403220 env[1456]: time="2025-05-13T00:17:46.403200742Z" level=info msg="Daemon has completed initialization" May 13 00:17:46.418378 systemd[1]: Started docker.service. May 13 00:17:46.423910 env[1456]: time="2025-05-13T00:17:46.423853742Z" level=info msg="API listen on /run/docker.sock" May 13 00:17:47.257505 env[1320]: time="2025-05-13T00:17:47.257445822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:17:47.923390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034709241.mount: Deactivated successfully. May 13 00:17:49.405978 env[1320]: time="2025-05-13T00:17:49.405922422Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:49.407379 env[1320]: time="2025-05-13T00:17:49.407350982Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:49.409295 env[1320]: time="2025-05-13T00:17:49.409266302Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:49.411785 env[1320]: time="2025-05-13T00:17:49.411754222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:49.412496 env[1320]: time="2025-05-13T00:17:49.412461982Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 13 00:17:49.422583 env[1320]: time="2025-05-13T00:17:49.422552662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:17:51.008489 env[1320]: time="2025-05-13T00:17:51.008297862Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:51.012036 env[1320]: time="2025-05-13T00:17:51.011635462Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:51.013849 env[1320]: time="2025-05-13T00:17:51.013817222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:51.016175 env[1320]: time="2025-05-13T00:17:51.016108302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:51.017248 env[1320]: time="2025-05-13T00:17:51.016799342Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 13 00:17:51.030512 env[1320]: time="2025-05-13T00:17:51.030433662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:17:51.886103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:17:51.886272 systemd[1]: Stopped kubelet.service. May 13 00:17:51.887767 systemd[1]: Starting kubelet.service... May 13 00:17:51.971204 systemd[1]: Started kubelet.service. May 13 00:17:52.020021 kubelet[1614]: E0513 00:17:52.019968 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:17:52.022676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:17:52.022836 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 00:17:52.267400 env[1320]: time="2025-05-13T00:17:52.267258182Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:52.269073 env[1320]: time="2025-05-13T00:17:52.269022742Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:52.270614 env[1320]: time="2025-05-13T00:17:52.270576142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:52.272292 env[1320]: time="2025-05-13T00:17:52.272248902Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:52.273196 env[1320]: time="2025-05-13T00:17:52.273098622Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 13 00:17:52.282079 env[1320]: time="2025-05-13T00:17:52.282041582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:17:53.395974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20220187.mount: Deactivated successfully. May 13 00:17:54.137121 env[1320]: time="2025-05-13T00:17:54.137076062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:54.138261 env[1320]: time="2025-05-13T00:17:54.138231702Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:54.139565 env[1320]: time="2025-05-13T00:17:54.139537462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:54.140573 env[1320]: time="2025-05-13T00:17:54.140544062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:54.141227 env[1320]: time="2025-05-13T00:17:54.141197342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 13 00:17:54.150743 env[1320]: time="2025-05-13T00:17:54.150719462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:17:54.715387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1049947241.mount: Deactivated successfully. 
May 13 00:17:55.494541 env[1320]: time="2025-05-13T00:17:55.494486502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:55.497322 env[1320]: time="2025-05-13T00:17:55.497290702Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:55.500920 env[1320]: time="2025-05-13T00:17:55.500077902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:55.501620 env[1320]: time="2025-05-13T00:17:55.501582382Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:55.503451 env[1320]: time="2025-05-13T00:17:55.503408422Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 00:17:55.515752 env[1320]: time="2025-05-13T00:17:55.515643622Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:17:56.030750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759530978.mount: Deactivated successfully. May 13 00:17:56.037162 env[1320]: time="2025-05-13T00:17:56.037117222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:56.039471 env[1320]: time="2025-05-13T00:17:56.039220462Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:56.040680 env[1320]: time="2025-05-13T00:17:56.040647342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:56.042556 env[1320]: time="2025-05-13T00:17:56.042491342Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:56.043205 env[1320]: time="2025-05-13T00:17:56.042992542Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 13 00:17:56.051749 env[1320]: time="2025-05-13T00:17:56.051511622Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:17:56.628960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456465647.mount: Deactivated successfully. 
May 13 00:17:58.976069 env[1320]: time="2025-05-13T00:17:58.976021462Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:58.977874 env[1320]: time="2025-05-13T00:17:58.977846182Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:58.979750 env[1320]: time="2025-05-13T00:17:58.979718382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:58.982134 env[1320]: time="2025-05-13T00:17:58.982107862Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:17:58.982875 env[1320]: time="2025-05-13T00:17:58.982849142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 13 00:18:02.136067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:18:02.136230 systemd[1]: Stopped kubelet.service. May 13 00:18:02.137687 systemd[1]: Starting kubelet.service... May 13 00:18:02.228102 systemd[1]: Started kubelet.service. May 13 00:18:02.279180 kubelet[1727]: E0513 00:18:02.279134 1727 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:18:02.281010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:18:02.281175 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:18:05.208418 systemd[1]: Stopped kubelet.service. May 13 00:18:05.210614 systemd[1]: Starting kubelet.service... May 13 00:18:05.228683 systemd[1]: Reloading. May 13 00:18:05.282561 /usr/lib/systemd/system-generators/torcx-generator[1766]: time="2025-05-13T00:18:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:18:05.282590 /usr/lib/systemd/system-generators/torcx-generator[1766]: time="2025-05-13T00:18:05Z" level=info msg="torcx already run" May 13 00:18:05.369001 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:18:05.369023 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:18:05.384365 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:18:05.447268 systemd[1]: Started kubelet.service. May 13 00:18:05.449966 systemd[1]: Stopping kubelet.service... 
May 13 00:18:05.450618 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:18:05.450980 systemd[1]: Stopped kubelet.service. May 13 00:18:05.453280 systemd[1]: Starting kubelet.service... May 13 00:18:05.535991 systemd[1]: Started kubelet.service. May 13 00:18:05.575702 kubelet[1822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:18:05.575702 kubelet[1822]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:18:05.575702 kubelet[1822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:18:05.576089 kubelet[1822]: I0513 00:18:05.575796 1822 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:18:06.056117 kubelet[1822]: I0513 00:18:06.056070 1822 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:18:06.056117 kubelet[1822]: I0513 00:18:06.056105 1822 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:18:06.056324 kubelet[1822]: I0513 00:18:06.056308 1822 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:18:06.092371 kubelet[1822]: E0513 00:18:06.092329 1822 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:06.092545 kubelet[1822]: I0513 00:18:06.092326 1822 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:18:06.103626 kubelet[1822]: I0513 00:18:06.103567 1822 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:18:06.104154 kubelet[1822]: I0513 00:18:06.104117 1822 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:18:06.104315 kubelet[1822]: I0513 00:18:06.104151 1822 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:18:06.104404 kubelet[1822]: I0513 00:18:06.104381 1822 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:18:06.104404 kubelet[1822]: I0513 00:18:06.104391 1822 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:18:06.104657 kubelet[1822]: I0513 00:18:06.104631 1822 state_mem.go:36] "Initialized new in-memory state store" May 13 00:18:06.106256 kubelet[1822]: I0513 00:18:06.106225 1822 kubelet.go:400] "Attempting to sync node with API server" May 13 00:18:06.106256 kubelet[1822]: I0513 00:18:06.106254 1822 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:18:06.106487 kubelet[1822]: W0513 00:18:06.106421 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:06.106577 kubelet[1822]: E0513 00:18:06.106564 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:06.106645 kubelet[1822]: I0513 00:18:06.106488 1822 kubelet.go:312] "Adding apiserver pod source" May 13 00:18:06.106718 kubelet[1822]: I0513 00:18:06.106706 1822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:18:06.108924 kubelet[1822]: W0513 00:18:06.107184 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:06.108924 kubelet[1822]: E0513 00:18:06.107238 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:06.110380 kubelet[1822]: I0513 00:18:06.110337 1822 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:18:06.110731 kubelet[1822]: I0513 00:18:06.110704 1822 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:18:06.110904 kubelet[1822]: W0513 00:18:06.110889 1822 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:18:06.111756 kubelet[1822]: I0513 00:18:06.111724 1822 server.go:1264] "Started kubelet" May 13 00:18:06.112110 kubelet[1822]: I0513 00:18:06.112069 1822 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:18:06.113130 kubelet[1822]: I0513 00:18:06.113079 1822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:18:06.113691 kubelet[1822]: I0513 00:18:06.113671 1822 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:18:06.113793 kubelet[1822]: I0513 00:18:06.113767 1822 server.go:455] "Adding debug handlers to kubelet server" May 13 00:18:06.117391 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 13 00:18:06.117517 kubelet[1822]: I0513 00:18:06.117496 1822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:18:06.123884 kubelet[1822]: I0513 00:18:06.123864 1822 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:18:06.125041 kubelet[1822]: I0513 00:18:06.125023 1822 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:18:06.132412 kubelet[1822]: I0513 00:18:06.132387 1822 reconciler.go:26] "Reconciler: start to sync state" May 13 00:18:06.133042 kubelet[1822]: W0513 00:18:06.132822 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:06.133042 kubelet[1822]: E0513 00:18:06.132876 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:06.133666 kubelet[1822]: E0513 00:18:06.133622 1822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms" May 13 00:18:06.134111 kubelet[1822]: I0513 00:18:06.133924 1822 factory.go:221] Registration of the systemd container factory successfully May 13 00:18:06.134509 kubelet[1822]: I0513 00:18:06.134485 1822 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:18:06.135285 kubelet[1822]: E0513 00:18:06.135253 1822 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:18:06.135715 kubelet[1822]: E0513 00:18:06.135316 1822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee168e606106 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:18:06.111703302 +0000 UTC m=+0.572159401,LastTimestamp:2025-05-13 00:18:06.111703302 +0000 UTC m=+0.572159401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:18:06.135916 kubelet[1822]: I0513 00:18:06.135900 1822 factory.go:221] Registration of the containerd container factory successfully May 13 00:18:06.144681 kubelet[1822]: I0513 00:18:06.144632 1822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:18:06.145608 kubelet[1822]: I0513 00:18:06.145576 1822 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:18:06.145608 kubelet[1822]: I0513 00:18:06.145611 1822 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:18:06.145706 kubelet[1822]: I0513 00:18:06.145634 1822 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:18:06.145706 kubelet[1822]: E0513 00:18:06.145677 1822 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:18:06.149662 kubelet[1822]: W0513 00:18:06.149632 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:06.149773 kubelet[1822]: E0513 00:18:06.149758 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:06.155716 kubelet[1822]: I0513 00:18:06.155694 1822 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:18:06.155866 kubelet[1822]: I0513 00:18:06.155852 1822 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:18:06.155936 kubelet[1822]: I0513 00:18:06.155926 1822 state_mem.go:36] "Initialized new in-memory state store" May 13 00:18:06.232910 kubelet[1822]: I0513 00:18:06.232787 1822 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:18:06.234951 kubelet[1822]: E0513 00:18:06.234918 1822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 13 00:18:06.246077 kubelet[1822]: E0513 00:18:06.246033 1822 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:18:06.254648 kubelet[1822]: I0513 00:18:06.254622 1822 policy_none.go:49] "None policy: Start" May 13 00:18:06.256284 kubelet[1822]: I0513 00:18:06.256167 1822 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:18:06.256284 kubelet[1822]: I0513 00:18:06.256199 1822 state_mem.go:35] "Initializing new in-memory state store" May 13 00:18:06.262298 kubelet[1822]: I0513 00:18:06.262271 1822 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:18:06.262484 kubelet[1822]: I0513 00:18:06.262443 1822 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:18:06.262564 kubelet[1822]: I0513 00:18:06.262549 1822 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:18:06.264135 kubelet[1822]: E0513 00:18:06.264110 1822 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:18:06.335054 kubelet[1822]: E0513 00:18:06.334912 1822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" May 13 00:18:06.437543 kubelet[1822]: I0513 00:18:06.436980 1822 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" May 13 00:18:06.437543 kubelet[1822]: E0513 00:18:06.437272 1822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 13 00:18:06.446948 kubelet[1822]: I0513 00:18:06.446877 1822 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:18:06.450426 kubelet[1822]: I0513 00:18:06.448523 1822 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:18:06.459784 kubelet[1822]: I0513 00:18:06.458137 1822 topology_manager.go:215] "Topology Admit Handler" podUID="300990f24b5e4e2d5807b597b83ec6a4" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:18:06.534694 kubelet[1822]: I0513 00:18:06.534651 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/300990f24b5e4e2d5807b597b83ec6a4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"300990f24b5e4e2d5807b597b83ec6a4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:18:06.534694 kubelet[1822]: I0513 00:18:06.534695 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:18:06.534888 kubelet[1822]: I0513 00:18:06.534718 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/300990f24b5e4e2d5807b597b83ec6a4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"300990f24b5e4e2d5807b597b83ec6a4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:18:06.534888 kubelet[1822]: I0513 00:18:06.534736 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/300990f24b5e4e2d5807b597b83ec6a4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"300990f24b5e4e2d5807b597b83ec6a4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:18:06.534888 kubelet[1822]: I0513 00:18:06.534751 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:18:06.534888 kubelet[1822]: I0513 00:18:06.534768 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:18:06.534888 kubelet[1822]: I0513 00:18:06.534783 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:18:06.534998 kubelet[1822]: I0513 00:18:06.534808 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:18:06.534998 kubelet[1822]: I0513 00:18:06.534836 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:18:06.736150 kubelet[1822]: E0513 00:18:06.736108 1822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" May 13 00:18:06.753420 kubelet[1822]: E0513 00:18:06.753259 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:06.754251 env[1320]: time="2025-05-13T00:18:06.753951462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:18:06.763992 kubelet[1822]: E0513 00:18:06.763966 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:06.764659 env[1320]: time="2025-05-13T00:18:06.764626702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:18:06.767081 kubelet[1822]: E0513 00:18:06.767063 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:06.767537 env[1320]: time="2025-05-13T00:18:06.767506022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:300990f24b5e4e2d5807b597b83ec6a4,Namespace:kube-system,Attempt:0,}" May 13 00:18:06.838628 kubelet[1822]: I0513 00:18:06.838558 1822 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:18:06.838911 kubelet[1822]: E0513 00:18:06.838879 1822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 13 00:18:07.005491 kubelet[1822]: W0513 00:18:07.005348 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:07.005491 kubelet[1822]: E0513 00:18:07.005419 1822 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:07.018933 kubelet[1822]: W0513 00:18:07.018876 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:07.018933 kubelet[1822]: E0513 00:18:07.018933 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:07.272495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980246171.mount: Deactivated successfully. May 13 00:18:07.278168 env[1320]: time="2025-05-13T00:18:07.278135062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.280051 env[1320]: time="2025-05-13T00:18:07.280023942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.281010 env[1320]: time="2025-05-13T00:18:07.280984382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.282030 env[1320]: time="2025-05-13T00:18:07.281998622Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.283664 env[1320]: time="2025-05-13T00:18:07.283637822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.285219 env[1320]: time="2025-05-13T00:18:07.285191142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.287403 env[1320]: time="2025-05-13T00:18:07.287360062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.288037 env[1320]: time="2025-05-13T00:18:07.288012542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.291215 env[1320]: time="2025-05-13T00:18:07.291187582Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.293513 env[1320]: time="2025-05-13T00:18:07.293485102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 13 00:18:07.294226 env[1320]: time="2025-05-13T00:18:07.294201342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.295113 env[1320]: time="2025-05-13T00:18:07.295088462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:07.316534 env[1320]: time="2025-05-13T00:18:07.316479302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:18:07.316660 env[1320]: time="2025-05-13T00:18:07.316637302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:18:07.316735 env[1320]: time="2025-05-13T00:18:07.316714662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:18:07.317172 env[1320]: time="2025-05-13T00:18:07.317117902Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03463d213c514c0764373fd1a67e1b150295ce781e167642655ecdd135d5023e pid=1865 runtime=io.containerd.runc.v2 May 13 00:18:07.317383 env[1320]: time="2025-05-13T00:18:07.317330222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:18:07.317462 env[1320]: time="2025-05-13T00:18:07.317375622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:18:07.317462 env[1320]: time="2025-05-13T00:18:07.317388742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:18:07.317527 env[1320]: time="2025-05-13T00:18:07.317509942Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6473770191a32a481f23f1ed3a62ffe4b76ee7c7db53d5cf2291fc7af3a9d622 pid=1884 runtime=io.containerd.runc.v2 May 13 00:18:07.322295 env[1320]: time="2025-05-13T00:18:07.322225982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:18:07.322438 env[1320]: time="2025-05-13T00:18:07.322273582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:18:07.322438 env[1320]: time="2025-05-13T00:18:07.322284742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:18:07.322524 env[1320]: time="2025-05-13T00:18:07.322459742Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9517d93c3e93d045b01f5948b79a77d58e6aaf54e7c8b75e5623f7dabb5910d4 pid=1886 runtime=io.containerd.runc.v2 May 13 00:18:07.323207 kubelet[1822]: W0513 00:18:07.323152 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:07.323296 kubelet[1822]: E0513 00:18:07.323216 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:07.349096 kubelet[1822]: W0513 00:18:07.349009 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:07.349096 kubelet[1822]: E0513 00:18:07.349072 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 13 00:18:07.392444 env[1320]: time="2025-05-13T00:18:07.392401822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"03463d213c514c0764373fd1a67e1b150295ce781e167642655ecdd135d5023e\"" May 13 00:18:07.400617 kubelet[1822]: E0513 00:18:07.400366 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:07.401754 env[1320]: time="2025-05-13T00:18:07.401716582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:300990f24b5e4e2d5807b597b83ec6a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9517d93c3e93d045b01f5948b79a77d58e6aaf54e7c8b75e5623f7dabb5910d4\"" May 13 00:18:07.403032 kubelet[1822]: E0513 00:18:07.403010 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:07.403267 env[1320]: time="2025-05-13T00:18:07.403221422Z" level=info msg="CreateContainer within sandbox \"03463d213c514c0764373fd1a67e1b150295ce781e167642655ecdd135d5023e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:18:07.404875 env[1320]: time="2025-05-13T00:18:07.404836982Z" level=info msg="CreateContainer within sandbox \"9517d93c3e93d045b01f5948b79a77d58e6aaf54e7c8b75e5623f7dabb5910d4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:18:07.418566 env[1320]: time="2025-05-13T00:18:07.418538022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"6473770191a32a481f23f1ed3a62ffe4b76ee7c7db53d5cf2291fc7af3a9d622\"" May 13 
00:18:07.419261 kubelet[1822]: E0513 00:18:07.419082 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:07.421242 env[1320]: time="2025-05-13T00:18:07.421209462Z" level=info msg="CreateContainer within sandbox \"6473770191a32a481f23f1ed3a62ffe4b76ee7c7db53d5cf2291fc7af3a9d622\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:18:07.421700 env[1320]: time="2025-05-13T00:18:07.421662982Z" level=info msg="CreateContainer within sandbox \"03463d213c514c0764373fd1a67e1b150295ce781e167642655ecdd135d5023e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b284cbcf6f11b41bef62c91bd601dfc8651aeb6cb8377c6616332e5535482c39\"" May 13 00:18:07.422185 env[1320]: time="2025-05-13T00:18:07.422153222Z" level=info msg="StartContainer for \"b284cbcf6f11b41bef62c91bd601dfc8651aeb6cb8377c6616332e5535482c39\"" May 13 00:18:07.424225 env[1320]: time="2025-05-13T00:18:07.424181342Z" level=info msg="CreateContainer within sandbox \"9517d93c3e93d045b01f5948b79a77d58e6aaf54e7c8b75e5623f7dabb5910d4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb96e92b000ea36c2a4a3e6c8fdb38956e68d5702ae4568e505435dabf3efa7e\"" May 13 00:18:07.428384 env[1320]: time="2025-05-13T00:18:07.428354222Z" level=info msg="StartContainer for \"fb96e92b000ea36c2a4a3e6c8fdb38956e68d5702ae4568e505435dabf3efa7e\"" May 13 00:18:07.433288 env[1320]: time="2025-05-13T00:18:07.433252182Z" level=info msg="CreateContainer within sandbox \"6473770191a32a481f23f1ed3a62ffe4b76ee7c7db53d5cf2291fc7af3a9d622\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c36bc7a330cfed8958478b8b73c6b89a9aa781e9fb68b60db6b9a23f9fcb384\"" May 13 00:18:07.433734 env[1320]: time="2025-05-13T00:18:07.433706542Z" level=info msg="StartContainer for \"2c36bc7a330cfed8958478b8b73c6b89a9aa781e9fb68b60db6b9a23f9fcb384\"" May 13 00:18:07.523133 env[1320]: time="2025-05-13T00:18:07.523038582Z" level=info msg="StartContainer for \"fb96e92b000ea36c2a4a3e6c8fdb38956e68d5702ae4568e505435dabf3efa7e\" returns successfully" May 13 00:18:07.523516 env[1320]: time="2025-05-13T00:18:07.523320102Z" level=info msg="StartContainer for \"b284cbcf6f11b41bef62c91bd601dfc8651aeb6cb8377c6616332e5535482c39\" returns successfully" May 13 00:18:07.542824 kubelet[1822]: E0513 00:18:07.539412 1822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="1.6s" May 13 00:18:07.557025 env[1320]: time="2025-05-13T00:18:07.556991342Z" level=info msg="StartContainer for \"2c36bc7a330cfed8958478b8b73c6b89a9aa781e9fb68b60db6b9a23f9fcb384\" returns successfully" May 13 00:18:07.641317 kubelet[1822]: I0513 00:18:07.640987 1822 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:18:07.641317 kubelet[1822]: E0513 00:18:07.641271 1822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 13 00:18:08.154233 kubelet[1822]: E0513 00:18:08.154141 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:08.156754 kubelet[1822]: E0513 00:18:08.156729 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:08.159139 kubelet[1822]: E0513 00:18:08.159112 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:09.160892 kubelet[1822]: E0513 00:18:09.160860 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:09.213033 kubelet[1822]: E0513 00:18:09.212992 1822 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:18:09.242522 kubelet[1822]: I0513 00:18:09.242490 1822 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:18:09.378980 kubelet[1822]: I0513 00:18:09.378944 1822 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:18:10.108687 kubelet[1822]: I0513 00:18:10.108647 1822 apiserver.go:52] "Watching apiserver" May 13 00:18:10.126176 kubelet[1822]: I0513 00:18:10.126132 1822 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:18:10.429746 kubelet[1822]: E0513 00:18:10.429711 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:11.163182 kubelet[1822]: E0513 00:18:11.163151 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:11.494345 systemd[1]: Reloading. May 13 00:18:11.543108 /usr/lib/systemd/system-generators/torcx-generator[2114]: time="2025-05-13T00:18:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:18:11.543136 /usr/lib/systemd/system-generators/torcx-generator[2114]: time="2025-05-13T00:18:11Z" level=info msg="torcx already run" May 13 00:18:11.618356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:18:11.618375 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:18:11.635605 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:18:11.708328 kubelet[1822]: I0513 00:18:11.708249 1822 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:18:11.708363 systemd[1]: Stopping kubelet.service... May 13 00:18:11.727194 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:18:11.727482 systemd[1]: Stopped kubelet.service. May 13 00:18:11.729136 systemd[1]: Starting kubelet.service... 
May 13 00:18:11.810710 systemd[1]: Started kubelet.service. May 13 00:18:11.872813 kubelet[2167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:18:11.873151 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:18:11.873199 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:18:11.873333 kubelet[2167]: I0513 00:18:11.873291 2167 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:18:11.880001 kubelet[2167]: I0513 00:18:11.879936 2167 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:18:11.880140 kubelet[2167]: I0513 00:18:11.880127 2167 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:18:11.880919 kubelet[2167]: I0513 00:18:11.880897 2167 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:18:11.882874 kubelet[2167]: I0513 00:18:11.882853 2167 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:18:11.884778 kubelet[2167]: I0513 00:18:11.884748 2167 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:18:11.894109 kubelet[2167]: I0513 00:18:11.894086 2167 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:18:11.894568 kubelet[2167]: I0513 00:18:11.894542 2167 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:18:11.894734 kubelet[2167]: I0513 00:18:11.894570 2167 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:18:11.894852 kubelet[2167]: I0513 00:18:11.894741 2167 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:18:11.894852 kubelet[2167]: I0513 00:18:11.894751 2167 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:18:11.894852 kubelet[2167]: I0513 00:18:11.894783 2167 state_mem.go:36] "Initialized new in-memory state store" May 13 00:18:11.894936 kubelet[2167]: I0513 00:18:11.894925 2167 kubelet.go:400] "Attempting to sync node with API server" May 13 00:18:11.894936 kubelet[2167]: I0513 00:18:11.894937 2167 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:18:11.895054 kubelet[2167]: I0513 00:18:11.894965 2167 kubelet.go:312] "Adding apiserver pod source" May 13 00:18:11.895054 kubelet[2167]: I0513 00:18:11.894980 2167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:18:11.904267 kubelet[2167]: I0513 00:18:11.904046 2167 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:18:11.904380 kubelet[2167]: I0513 00:18:11.904337 2167 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:18:11.913606 kubelet[2167]: I0513 00:18:11.904822 2167 server.go:1264] "Started kubelet" May 13 00:18:11.913606 kubelet[2167]: I0513 00:18:11.906858 2167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:18:11.913606 kubelet[2167]: I0513 00:18:11.907244 2167 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 
00:18:11.913606 kubelet[2167]: I0513 00:18:11.907331 2167 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:18:11.913606 kubelet[2167]: I0513 00:18:11.908934 2167 server.go:455] "Adding debug handlers to kubelet server" May 13 00:18:11.913606 kubelet[2167]: I0513 00:18:11.913148 2167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:18:11.921690 kubelet[2167]: I0513 00:18:11.921661 2167 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:18:11.924619 kubelet[2167]: I0513 00:18:11.924595 2167 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:18:11.924850 kubelet[2167]: I0513 00:18:11.924795 2167 factory.go:221] Registration of the systemd container factory successfully May 13 00:18:11.924949 kubelet[2167]: I0513 00:18:11.924931 2167 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:18:11.925807 kubelet[2167]: I0513 00:18:11.925783 2167 reconciler.go:26] "Reconciler: start to sync state" May 13 00:18:11.935858 kubelet[2167]: E0513 00:18:11.933076 2167 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:18:11.936874 kubelet[2167]: I0513 00:18:11.936856 2167 factory.go:221] Registration of the containerd container factory successfully May 13 00:18:11.951083 kubelet[2167]: I0513 00:18:11.951031 2167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:18:11.952400 kubelet[2167]: I0513 00:18:11.952355 2167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:18:11.952400 kubelet[2167]: I0513 00:18:11.952394 2167 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:18:11.952706 kubelet[2167]: I0513 00:18:11.952409 2167 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:18:11.952706 kubelet[2167]: E0513 00:18:11.952450 2167 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:18:11.979953 kubelet[2167]: I0513 00:18:11.979922 2167 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:18:11.979953 kubelet[2167]: I0513 00:18:11.979956 2167 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:18:11.980104 kubelet[2167]: I0513 00:18:11.979978 2167 state_mem.go:36] "Initialized new in-memory state store" May 13 00:18:11.980147 kubelet[2167]: I0513 00:18:11.980130 2167 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:18:11.980178 kubelet[2167]: I0513 00:18:11.980146 2167 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:18:11.980178 kubelet[2167]: I0513 00:18:11.980165 2167 policy_none.go:49] "None policy: Start" May 13 00:18:11.980907 kubelet[2167]: I0513 00:18:11.980883 2167 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:18:11.980973 kubelet[2167]: I0513 00:18:11.980913 2167 state_mem.go:35] "Initializing new in-memory state store" May 13 00:18:11.981094 kubelet[2167]: I0513 00:18:11.981078 2167 state_mem.go:75] "Updated machine memory state" May 13 00:18:11.982269 kubelet[2167]: I0513 00:18:11.982244 2167 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:18:11.982467 
kubelet[2167]: I0513 00:18:11.982419 2167 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:18:11.982595 kubelet[2167]: I0513 00:18:11.982578 2167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:18:12.024930 kubelet[2167]: I0513 00:18:12.024893 2167 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:18:12.030758 kubelet[2167]: I0513 00:18:12.030729 2167 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:18:12.030879 kubelet[2167]: I0513 00:18:12.030840 2167 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:18:12.053428 kubelet[2167]: I0513 00:18:12.053363 2167 topology_manager.go:215] "Topology Admit Handler" podUID="300990f24b5e4e2d5807b597b83ec6a4" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:18:12.053601 kubelet[2167]: I0513 00:18:12.053483 2167 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:18:12.053601 kubelet[2167]: I0513 00:18:12.053518 2167 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:18:12.061070 kubelet[2167]: E0513 00:18:12.060969 2167 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:18:12.126395 kubelet[2167]: I0513 00:18:12.126352 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:18:12.126523 kubelet[2167]: I0513 00:18:12.126403 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/300990f24b5e4e2d5807b597b83ec6a4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"300990f24b5e4e2d5807b597b83ec6a4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:18:12.126523 kubelet[2167]: I0513 00:18:12.126428 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/300990f24b5e4e2d5807b597b83ec6a4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"300990f24b5e4e2d5807b597b83ec6a4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:18:12.126523 kubelet[2167]: I0513 00:18:12.126447 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/300990f24b5e4e2d5807b597b83ec6a4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"300990f24b5e4e2d5807b597b83ec6a4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:18:12.126523 kubelet[2167]: I0513 00:18:12.126476 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 
00:18:12.126523 kubelet[2167]: I0513 00:18:12.126494 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:18:12.126648 kubelet[2167]: I0513 00:18:12.126510 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:18:12.126648 kubelet[2167]: I0513 00:18:12.126525 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:18:12.126648 kubelet[2167]: I0513 00:18:12.126561 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:18:12.360820 kubelet[2167]: E0513 00:18:12.360658 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:12.361684 kubelet[2167]: E0513 00:18:12.361649 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:12.362247 kubelet[2167]: E0513 00:18:12.362221 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:12.895902 kubelet[2167]: I0513 00:18:12.895861 2167 apiserver.go:52] "Watching apiserver" May 13 00:18:12.925556 kubelet[2167]: I0513 00:18:12.925525 2167 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:18:12.962685 kubelet[2167]: E0513 00:18:12.962643 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:12.963542 kubelet[2167]: E0513 00:18:12.963515 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:12.999330 kubelet[2167]: E0513 00:18:12.999257 2167 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:18:12.999330 kubelet[2167]: E0513 00:18:13.000317 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:13.005698 
kubelet[2167]: I0513 00:18:13.004855 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.004842861 podStartE2EDuration="3.004842861s" podCreationTimestamp="2025-05-13 00:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:18:13.004313256 +0000 UTC m=+1.189816595" watchObservedRunningTime="2025-05-13 00:18:13.004842861 +0000 UTC m=+1.190346160" May 13 00:18:13.016719 kubelet[2167]: I0513 00:18:13.016624 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.016610818 podStartE2EDuration="1.016610818s" podCreationTimestamp="2025-05-13 00:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:18:13.015719969 +0000 UTC m=+1.201223268" watchObservedRunningTime="2025-05-13 00:18:13.016610818 +0000 UTC m=+1.202114117" May 13 00:18:13.023359 kubelet[2167]: I0513 00:18:13.023226 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.023214683 podStartE2EDuration="1.023214683s" podCreationTimestamp="2025-05-13 00:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:18:13.022786719 +0000 UTC m=+1.208290018" watchObservedRunningTime="2025-05-13 00:18:13.023214683 +0000 UTC m=+1.208717982" May 13 00:18:13.655836 sudo[1444]: pam_unix(sudo:session): session closed for user root May 13 00:18:13.658066 sshd[1438]: pam_unix(sshd:session): session closed for user core May 13 00:18:13.660861 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:40650.service: Deactivated successfully. May 13 00:18:13.662427 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:18:13.663000 systemd-logind[1307]: Session 5 logged out. Waiting for processes to exit. May 13 00:18:13.664220 systemd-logind[1307]: Removed session 5. 
May 13 00:18:13.964052 kubelet[2167]: E0513 00:18:13.963967 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:16.363633 kubelet[2167]: E0513 00:18:16.363586 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:16.835788 kubelet[2167]: E0513 00:18:16.835742 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:21.476242 kubelet[2167]: E0513 00:18:21.474845 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:21.976486 kubelet[2167]: E0513 00:18:21.976447 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:25.219386 kubelet[2167]: I0513 00:18:25.219354 2167 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:18:25.220086 env[1320]: time="2025-05-13T00:18:25.220000551Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:18:25.220582 kubelet[2167]: I0513 00:18:25.220549 2167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:18:25.346162 update_engine[1309]: I0513 00:18:25.346117 1309 update_attempter.cc:509] Updating boot flags... 
May 13 00:18:25.725220 kubelet[2167]: I0513 00:18:25.725171 2167 topology_manager.go:215] "Topology Admit Handler" podUID="42cf7f10-783d-4a74-86dc-6ddba04f87de" podNamespace="kube-system" podName="kube-proxy-pg4m6" May 13 00:18:25.726420 kubelet[2167]: I0513 00:18:25.726378 2167 topology_manager.go:215] "Topology Admit Handler" podUID="eb145995-5c3f-47d2-befe-7ec16cb8f4d0" podNamespace="kube-flannel" podName="kube-flannel-ds-sqfbs" May 13 00:18:25.833737 kubelet[2167]: I0513 00:18:25.833694 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/eb145995-5c3f-47d2-befe-7ec16cb8f4d0-cni-plugin\") pod \"kube-flannel-ds-sqfbs\" (UID: \"eb145995-5c3f-47d2-befe-7ec16cb8f4d0\") " pod="kube-flannel/kube-flannel-ds-sqfbs" May 13 00:18:25.833945 kubelet[2167]: I0513 00:18:25.833924 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzql5\" (UniqueName: \"kubernetes.io/projected/eb145995-5c3f-47d2-befe-7ec16cb8f4d0-kube-api-access-wzql5\") pod \"kube-flannel-ds-sqfbs\" (UID: \"eb145995-5c3f-47d2-befe-7ec16cb8f4d0\") " pod="kube-flannel/kube-flannel-ds-sqfbs" May 13 00:18:25.834092 kubelet[2167]: I0513 00:18:25.834053 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/eb145995-5c3f-47d2-befe-7ec16cb8f4d0-cni\") pod \"kube-flannel-ds-sqfbs\" (UID: \"eb145995-5c3f-47d2-befe-7ec16cb8f4d0\") " pod="kube-flannel/kube-flannel-ds-sqfbs" May 13 00:18:25.834191 kubelet[2167]: I0513 00:18:25.834129 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb145995-5c3f-47d2-befe-7ec16cb8f4d0-xtables-lock\") pod \"kube-flannel-ds-sqfbs\" (UID: \"eb145995-5c3f-47d2-befe-7ec16cb8f4d0\") " pod="kube-flannel/kube-flannel-ds-sqfbs" May 13 00:18:25.834191 kubelet[2167]: I0513 00:18:25.834183 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtck\" (UniqueName: \"kubernetes.io/projected/42cf7f10-783d-4a74-86dc-6ddba04f87de-kube-api-access-kgtck\") pod \"kube-proxy-pg4m6\" (UID: \"42cf7f10-783d-4a74-86dc-6ddba04f87de\") " pod="kube-system/kube-proxy-pg4m6" May 13 00:18:25.834276 kubelet[2167]: I0513 00:18:25.834207 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42cf7f10-783d-4a74-86dc-6ddba04f87de-kube-proxy\") pod \"kube-proxy-pg4m6\" (UID: \"42cf7f10-783d-4a74-86dc-6ddba04f87de\") " pod="kube-system/kube-proxy-pg4m6" May 13 00:18:25.834276 kubelet[2167]: I0513 00:18:25.834225 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42cf7f10-783d-4a74-86dc-6ddba04f87de-xtables-lock\") pod \"kube-proxy-pg4m6\" (UID: \"42cf7f10-783d-4a74-86dc-6ddba04f87de\") " pod="kube-system/kube-proxy-pg4m6" May 13 00:18:25.834276 kubelet[2167]: I0513 00:18:25.834251 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42cf7f10-783d-4a74-86dc-6ddba04f87de-lib-modules\") pod \"kube-proxy-pg4m6\" (UID: \"42cf7f10-783d-4a74-86dc-6ddba04f87de\") " pod="kube-system/kube-proxy-pg4m6" May 13 00:18:25.834276 
kubelet[2167]: I0513 00:18:25.834267 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eb145995-5c3f-47d2-befe-7ec16cb8f4d0-run\") pod \"kube-flannel-ds-sqfbs\" (UID: \"eb145995-5c3f-47d2-befe-7ec16cb8f4d0\") " pod="kube-flannel/kube-flannel-ds-sqfbs" May 13 00:18:25.834385 kubelet[2167]: I0513 00:18:25.834324 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/eb145995-5c3f-47d2-befe-7ec16cb8f4d0-flannel-cfg\") pod \"kube-flannel-ds-sqfbs\" (UID: \"eb145995-5c3f-47d2-befe-7ec16cb8f4d0\") " pod="kube-flannel/kube-flannel-ds-sqfbs" May 13 00:18:25.944496 kubelet[2167]: E0513 00:18:25.944463 2167 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 00:18:25.944663 kubelet[2167]: E0513 00:18:25.944642 2167 projected.go:200] Error preparing data for projected volume kube-api-access-kgtck for pod kube-system/kube-proxy-pg4m6: configmap "kube-root-ca.crt" not found May 13 00:18:25.944786 kubelet[2167]: E0513 00:18:25.944772 2167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/42cf7f10-783d-4a74-86dc-6ddba04f87de-kube-api-access-kgtck podName:42cf7f10-783d-4a74-86dc-6ddba04f87de nodeName:}" failed. No retries permitted until 2025-05-13 00:18:26.444747411 +0000 UTC m=+14.630250710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kgtck" (UniqueName: "kubernetes.io/projected/42cf7f10-783d-4a74-86dc-6ddba04f87de-kube-api-access-kgtck") pod "kube-proxy-pg4m6" (UID: "42cf7f10-783d-4a74-86dc-6ddba04f87de") : configmap "kube-root-ca.crt" not found May 13 00:18:25.945060 kubelet[2167]: E0513 00:18:25.944666 2167 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 00:18:25.945188 kubelet[2167]: E0513 00:18:25.945172 2167 projected.go:200] Error preparing data for projected volume kube-api-access-wzql5 for pod kube-flannel/kube-flannel-ds-sqfbs: configmap "kube-root-ca.crt" not found May 13 00:18:25.945300 kubelet[2167]: E0513 00:18:25.945285 2167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb145995-5c3f-47d2-befe-7ec16cb8f4d0-kube-api-access-wzql5 podName:eb145995-5c3f-47d2-befe-7ec16cb8f4d0 nodeName:}" failed. No retries permitted until 2025-05-13 00:18:26.445270494 +0000 UTC m=+14.630773793 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzql5" (UniqueName: "kubernetes.io/projected/eb145995-5c3f-47d2-befe-7ec16cb8f4d0-kube-api-access-wzql5") pod "kube-flannel-ds-sqfbs" (UID: "eb145995-5c3f-47d2-befe-7ec16cb8f4d0") : configmap "kube-root-ca.crt" not found May 13 00:18:26.370997 kubelet[2167]: E0513 00:18:26.370967 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:26.630384 kubelet[2167]: E0513 00:18:26.630222 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:26.631097 env[1320]: time="2025-05-13T00:18:26.631040277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pg4m6,Uid:42cf7f10-783d-4a74-86dc-6ddba04f87de,Namespace:kube-system,Attempt:0,}" May 13 00:18:26.636744 kubelet[2167]: E0513 00:18:26.636695 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:26.637902 env[1320]: time="2025-05-13T00:18:26.637150423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sqfbs,Uid:eb145995-5c3f-47d2-befe-7ec16cb8f4d0,Namespace:kube-flannel,Attempt:0,}" May 13 00:18:26.652393 env[1320]: time="2025-05-13T00:18:26.652340728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:18:26.652528 env[1320]: time="2025-05-13T00:18:26.652505529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:18:26.652601 env[1320]: time="2025-05-13T00:18:26.652582529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:18:26.652856 env[1320]: time="2025-05-13T00:18:26.652778050Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6796b0d1e9bc54b79369e720f06906f2968e3a863e4008818fa25abf5c8158ed pid=2260 runtime=io.containerd.runc.v2 May 13 00:18:26.661929 env[1320]: time="2025-05-13T00:18:26.661835449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:18:26.661929 env[1320]: time="2025-05-13T00:18:26.661880409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:18:26.661929 env[1320]: time="2025-05-13T00:18:26.661890889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:18:26.662284 env[1320]: time="2025-05-13T00:18:26.662232811Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/986ad52ff725f644d05f5a8ed5b747043d8f35a0e80202ded133d21c43cc6232 pid=2283 runtime=io.containerd.runc.v2 May 13 00:18:26.723913 env[1320]: time="2025-05-13T00:18:26.723868034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pg4m6,Uid:42cf7f10-783d-4a74-86dc-6ddba04f87de,Namespace:kube-system,Attempt:0,} returns sandbox id \"6796b0d1e9bc54b79369e720f06906f2968e3a863e4008818fa25abf5c8158ed\"" May 13 00:18:26.724826 kubelet[2167]: E0513 00:18:26.724521 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:26.727769 env[1320]: time="2025-05-13T00:18:26.727715770Z" level=info msg="CreateContainer within sandbox \"6796b0d1e9bc54b79369e720f06906f2968e3a863e4008818fa25abf5c8158ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:18:26.742463 env[1320]: time="2025-05-13T00:18:26.742292832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sqfbs,Uid:eb145995-5c3f-47d2-befe-7ec16cb8f4d0,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"986ad52ff725f644d05f5a8ed5b747043d8f35a0e80202ded133d21c43cc6232\"" May 13 00:18:26.743221 kubelet[2167]: E0513 00:18:26.743201 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:26.745494 env[1320]: time="2025-05-13T00:18:26.745407886Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 13 00:18:26.747652 env[1320]: time="2025-05-13T00:18:26.747597015Z" level=info msg="CreateContainer within sandbox \"6796b0d1e9bc54b79369e720f06906f2968e3a863e4008818fa25abf5c8158ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"25f51de4ef1e9e510eb8800717e9acd5d5c5cb06d05a49ef81248475c063ccfc\"" May 13 00:18:26.748204 env[1320]: time="2025-05-13T00:18:26.748091657Z" level=info msg="StartContainer for \"25f51de4ef1e9e510eb8800717e9acd5d5c5cb06d05a49ef81248475c063ccfc\"" May 13 00:18:26.820919 env[1320]: time="2025-05-13T00:18:26.820875128Z" level=info msg="StartContainer for \"25f51de4ef1e9e510eb8800717e9acd5d5c5cb06d05a49ef81248475c063ccfc\" returns successfully" May 13 00:18:26.843746 kubelet[2167]: E0513 00:18:26.843335 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:26.984947 kubelet[2167]: E0513 00:18:26.984787 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:27.948441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1992915514.mount: Deactivated successfully. 
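The recurring dns.go:153 "Nameserver limits exceeded" warnings are emitted because the resolv.conf the kubelet reads lists more nameservers than it will propagate to pods: the kubelet keeps only the first three entries (the classic glibc resolver limit), which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A host resolver configuration that would produce this looks roughly like the following; the contents and the --resolv-conf path are illustrative assumptions, not values read from this node:

    # e.g. /run/systemd/resolve/resolv.conf (a common kubelet --resolv-conf target on systemd-resolved hosts)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4     # fourth entry exceeds the three-nameserver limit and is dropped

The warning is informational; pods that inherit the node's DNS settings simply receive the trimmed three-entry list.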
May 13 00:18:27.988004 env[1320]: time="2025-05-13T00:18:27.987952447Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:27.990366 env[1320]: time="2025-05-13T00:18:27.990311936Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:27.992135 env[1320]: time="2025-05-13T00:18:27.992100944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:27.994137 env[1320]: time="2025-05-13T00:18:27.994102032Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:27.994815 env[1320]: time="2025-05-13T00:18:27.994772474Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 13 00:18:27.997170 env[1320]: time="2025-05-13T00:18:27.997128844Z" level=info msg="CreateContainer within sandbox \"986ad52ff725f644d05f5a8ed5b747043d8f35a0e80202ded133d21c43cc6232\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 13 00:18:28.008175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143544613.mount: Deactivated successfully. May 13 00:18:28.011729 env[1320]: time="2025-05-13T00:18:28.011676099Z" level=info msg="CreateContainer within sandbox \"986ad52ff725f644d05f5a8ed5b747043d8f35a0e80202ded133d21c43cc6232\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"53aa8d7cc64b9803fd6c7e01a9302aa75578a25e764e2b7b6c437734e76f984e\"" May 13 00:18:28.013006 env[1320]: time="2025-05-13T00:18:28.012971344Z" level=info msg="StartContainer for \"53aa8d7cc64b9803fd6c7e01a9302aa75578a25e764e2b7b6c437734e76f984e\"" May 13 00:18:28.062761 env[1320]: time="2025-05-13T00:18:28.062716451Z" level=info msg="StartContainer for \"53aa8d7cc64b9803fd6c7e01a9302aa75578a25e764e2b7b6c437734e76f984e\" returns successfully" May 13 00:18:28.109779 env[1320]: time="2025-05-13T00:18:28.109731227Z" level=info msg="shim disconnected" id=53aa8d7cc64b9803fd6c7e01a9302aa75578a25e764e2b7b6c437734e76f984e May 13 00:18:28.109779 env[1320]: time="2025-05-13T00:18:28.109780307Z" level=warning msg="cleaning up after shim disconnected" id=53aa8d7cc64b9803fd6c7e01a9302aa75578a25e764e2b7b6c437734e76f984e namespace=k8s.io May 13 00:18:28.110170 env[1320]: time="2025-05-13T00:18:28.109791107Z" level=info msg="cleaning up dead shim" May 13 00:18:28.116329 env[1320]: time="2025-05-13T00:18:28.116267852Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:18:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2530 runtime=io.containerd.runc.v2\n" May 13 00:18:28.989916 kubelet[2167]: E0513 00:18:28.989882 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:28.991005 env[1320]: time="2025-05-13T00:18:28.990849293Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" 
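The install-cni-plugin container created and started above is the first init container of the kube-flannel DaemonSet: it copies the flannel CNI binary from the flannel-cni-plugin:v1.1.2 image into the hostPath-mounted /opt/cni/bin and exits immediately, which is why containerd logs the shim-disconnected cleanup right after the successful start. A sketch of how that init container is declared in the upstream kube-flannel manifest follows; the paths and names are the upstream defaults and are assumed here, confirmed by this log only insofar as the cni-plugin volume appears in the earlier VerifyControllerAttachedVolume entries:

    initContainers:
    - name: install-cni-plugin
      image: docker.io/flannel/flannel-cni-plugin:v1.1.2
      command: ["cp"]
      args: ["-f", "/flannel", "/opt/cni/bin/flannel"]
      volumeMounts:
      - name: cni-plugin          # hostPath /opt/cni/bin on the node
        mountPath: /opt/cni/bin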
May 13 00:18:29.002325 kubelet[2167]: I0513 00:18:29.002254 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pg4m6" podStartSLOduration=4.002235496 podStartE2EDuration="4.002235496s" podCreationTimestamp="2025-05-13 00:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:18:26.996638718 +0000 UTC m=+15.182142017" watchObservedRunningTime="2025-05-13 00:18:29.002235496 +0000 UTC m=+17.187738795" May 13 00:18:30.192945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount742661814.mount: Deactivated successfully. May 13 00:18:30.879379 env[1320]: time="2025-05-13T00:18:30.879326906Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:30.880721 env[1320]: time="2025-05-13T00:18:30.880679550Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:30.882626 env[1320]: time="2025-05-13T00:18:30.882597117Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:30.884279 env[1320]: time="2025-05-13T00:18:30.884248002Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:18:30.885113 env[1320]: time="2025-05-13T00:18:30.885084685Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 13 00:18:30.889383 env[1320]: time="2025-05-13T00:18:30.889349179Z" level=info msg="CreateContainer within sandbox \"986ad52ff725f644d05f5a8ed5b747043d8f35a0e80202ded133d21c43cc6232\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:18:30.903606 env[1320]: time="2025-05-13T00:18:30.903554666Z" level=info msg="CreateContainer within sandbox \"986ad52ff725f644d05f5a8ed5b747043d8f35a0e80202ded133d21c43cc6232\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98f170cfb62cbbbc953dded0f7de2009d42a0f7dd76d96443e240abdb275954e\"" May 13 00:18:30.904205 env[1320]: time="2025-05-13T00:18:30.904083308Z" level=info msg="StartContainer for \"98f170cfb62cbbbc953dded0f7de2009d42a0f7dd76d96443e240abdb275954e\"" May 13 00:18:30.958975 env[1320]: time="2025-05-13T00:18:30.958932368Z" level=info msg="StartContainer for \"98f170cfb62cbbbc953dded0f7de2009d42a0f7dd76d96443e240abdb275954e\" returns successfully" May 13 00:18:30.995554 kubelet[2167]: E0513 00:18:30.995511 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:31.049322 kubelet[2167]: I0513 00:18:31.048397 2167 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:18:31.051415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98f170cfb62cbbbc953dded0f7de2009d42a0f7dd76d96443e240abdb275954e-rootfs.mount: Deactivated successfully. 
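The second init container, install-cni, started above performs the companion step: in the stock manifest it copies the cni-conf.json key of the flannel-cfg ConfigMap (the configmap volume verified earlier) to /etc/cni/net.d/10-flannel.conflist on the host. Once a CNI configuration exists there, containerd reports the node network as ready and the kubelet marks the node Ready, which is what the kubelet_node_status "Fast updating node status as it just became ready" entry reflects. The stock ConfigMap ships roughly the configuration below; the exact contents on this node were not captured and are an assumption:

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }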
May 13 00:18:31.078186 env[1320]: time="2025-05-13T00:18:31.078129066Z" level=info msg="shim disconnected" id=98f170cfb62cbbbc953dded0f7de2009d42a0f7dd76d96443e240abdb275954e May 13 00:18:31.078186 env[1320]: time="2025-05-13T00:18:31.078176346Z" level=warning msg="cleaning up after shim disconnected" id=98f170cfb62cbbbc953dded0f7de2009d42a0f7dd76d96443e240abdb275954e namespace=k8s.io May 13 00:18:31.078186 env[1320]: time="2025-05-13T00:18:31.078186346Z" level=info msg="cleaning up dead shim" May 13 00:18:31.084989 kubelet[2167]: I0513 00:18:31.084887 2167 topology_manager.go:215] "Topology Admit Handler" podUID="1a182c9a-e9df-463b-8e17-a3a5e06874fa" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wtw6c" May 13 00:18:31.086599 kubelet[2167]: I0513 00:18:31.086445 2167 topology_manager.go:215] "Topology Admit Handler" podUID="19b6df63-dc7b-466a-93d6-45002398138e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vwv9m" May 13 00:18:31.094405 env[1320]: time="2025-05-13T00:18:31.093982395Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:18:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2587 runtime=io.containerd.runc.v2\n" May 13 00:18:31.172960 kubelet[2167]: I0513 00:18:31.172908 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkpbs\" (UniqueName: \"kubernetes.io/projected/1a182c9a-e9df-463b-8e17-a3a5e06874fa-kube-api-access-kkpbs\") pod \"coredns-7db6d8ff4d-wtw6c\" (UID: \"1a182c9a-e9df-463b-8e17-a3a5e06874fa\") " pod="kube-system/coredns-7db6d8ff4d-wtw6c" May 13 00:18:31.172960 kubelet[2167]: I0513 00:18:31.172957 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19b6df63-dc7b-466a-93d6-45002398138e-config-volume\") pod \"coredns-7db6d8ff4d-vwv9m\" (UID: \"19b6df63-dc7b-466a-93d6-45002398138e\") " pod="kube-system/coredns-7db6d8ff4d-vwv9m" May 13 00:18:31.173190 kubelet[2167]: I0513 00:18:31.172980 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a182c9a-e9df-463b-8e17-a3a5e06874fa-config-volume\") pod \"coredns-7db6d8ff4d-wtw6c\" (UID: \"1a182c9a-e9df-463b-8e17-a3a5e06874fa\") " pod="kube-system/coredns-7db6d8ff4d-wtw6c" May 13 00:18:31.173190 kubelet[2167]: I0513 00:18:31.173025 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpxnm\" (UniqueName: \"kubernetes.io/projected/19b6df63-dc7b-466a-93d6-45002398138e-kube-api-access-kpxnm\") pod \"coredns-7db6d8ff4d-vwv9m\" (UID: \"19b6df63-dc7b-466a-93d6-45002398138e\") " pod="kube-system/coredns-7db6d8ff4d-vwv9m" May 13 00:18:31.393074 kubelet[2167]: E0513 00:18:31.393032 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:31.393861 env[1320]: time="2025-05-13T00:18:31.393754881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vwv9m,Uid:19b6df63-dc7b-466a-93d6-45002398138e,Namespace:kube-system,Attempt:0,}" May 13 00:18:31.394121 kubelet[2167]: E0513 00:18:31.394099 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:31.394649 env[1320]: 
time="2025-05-13T00:18:31.394489764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wtw6c,Uid:1a182c9a-e9df-463b-8e17-a3a5e06874fa,Namespace:kube-system,Attempt:0,}" May 13 00:18:31.445007 env[1320]: time="2025-05-13T00:18:31.444861959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wtw6c,Uid:1a182c9a-e9df-463b-8e17-a3a5e06874fa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88ee6f75afd3940b65c7329ac358167bf5ddd4453584f7f19a231ccac3008b8d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:18:31.445958 kubelet[2167]: E0513 00:18:31.445919 2167 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ee6f75afd3940b65c7329ac358167bf5ddd4453584f7f19a231ccac3008b8d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:18:31.446042 kubelet[2167]: E0513 00:18:31.445981 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ee6f75afd3940b65c7329ac358167bf5ddd4453584f7f19a231ccac3008b8d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wtw6c" May 13 00:18:31.446042 kubelet[2167]: E0513 00:18:31.446002 2167 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ee6f75afd3940b65c7329ac358167bf5ddd4453584f7f19a231ccac3008b8d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wtw6c" May 13 00:18:31.446098 kubelet[2167]: E0513 00:18:31.446034 2167 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wtw6c_kube-system(1a182c9a-e9df-463b-8e17-a3a5e06874fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wtw6c_kube-system(1a182c9a-e9df-463b-8e17-a3a5e06874fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88ee6f75afd3940b65c7329ac358167bf5ddd4453584f7f19a231ccac3008b8d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-wtw6c" podUID="1a182c9a-e9df-463b-8e17-a3a5e06874fa" May 13 00:18:31.449834 env[1320]: time="2025-05-13T00:18:31.449768855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vwv9m,Uid:19b6df63-dc7b-466a-93d6-45002398138e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ad8ab0cc6d7f47113336ea5bd57f4f9983823da48bc3baa3f2ef0ce6611e827\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:18:31.450139 kubelet[2167]: E0513 00:18:31.449998 2167 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad8ab0cc6d7f47113336ea5bd57f4f9983823da48bc3baa3f2ef0ce6611e827\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" May 13 00:18:31.450139 kubelet[2167]: E0513 00:18:31.450043 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad8ab0cc6d7f47113336ea5bd57f4f9983823da48bc3baa3f2ef0ce6611e827\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-vwv9m" May 13 00:18:31.450139 kubelet[2167]: E0513 00:18:31.450060 2167 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad8ab0cc6d7f47113336ea5bd57f4f9983823da48bc3baa3f2ef0ce6611e827\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-vwv9m" May 13 00:18:31.450139 kubelet[2167]: E0513 00:18:31.450089 2167 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vwv9m_kube-system(19b6df63-dc7b-466a-93d6-45002398138e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vwv9m_kube-system(19b6df63-dc7b-466a-93d6-45002398138e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ad8ab0cc6d7f47113336ea5bd57f4f9983823da48bc3baa3f2ef0ce6611e827\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-vwv9m" podUID="19b6df63-dc7b-466a-93d6-45002398138e" May 13 00:18:31.997702 kubelet[2167]: E0513 00:18:31.997670 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:32.004253 env[1320]: time="2025-05-13T00:18:32.004212608Z" level=info msg="CreateContainer within sandbox \"986ad52ff725f644d05f5a8ed5b747043d8f35a0e80202ded133d21c43cc6232\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 00:18:32.020248 env[1320]: time="2025-05-13T00:18:32.020201695Z" level=info msg="CreateContainer within sandbox \"986ad52ff725f644d05f5a8ed5b747043d8f35a0e80202ded133d21c43cc6232\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"8469647a4e6f3560bb8d241140876a8f147763cbabf0b723e98abe88c8a59840\"" May 13 00:18:32.020893 env[1320]: time="2025-05-13T00:18:32.020734376Z" level=info msg="StartContainer for \"8469647a4e6f3560bb8d241140876a8f147763cbabf0b723e98abe88c8a59840\"" May 13 00:18:32.052258 systemd[1]: run-netns-cni\x2d74e55b91\x2dd25f\x2d4b20\x2dcf87\x2d7391377af6de.mount: Deactivated successfully. May 13 00:18:32.052397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88ee6f75afd3940b65c7329ac358167bf5ddd4453584f7f19a231ccac3008b8d-shm.mount: Deactivated successfully. 
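The two coredns sandbox failures above ("loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory") are ordering noise rather than a persistent fault: the flannel CNI plugin cannot wire a pod until flanneld has obtained its node subnet lease and written /run/flannel/subnet.env, and at this point the kube-flannel container is only just being created. The kubelet keeps retrying, and the same coredns pods are sandboxed successfully at 00:18:42 and 00:18:43 below. Once flanneld is up the file typically looks like this; the values are inferred from the 192.168.0.0/24 subnet, 192.168.0.0/17 route, and MTU 1450 seen later in this log, so treat them as an illustration rather than the file's captured contents:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true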
May 13 00:18:32.107689 env[1320]: time="2025-05-13T00:18:32.107636068Z" level=info msg="StartContainer for \"8469647a4e6f3560bb8d241140876a8f147763cbabf0b723e98abe88c8a59840\" returns successfully" May 13 00:18:33.002833 kubelet[2167]: E0513 00:18:33.002784 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:33.013573 kubelet[2167]: I0513 00:18:33.012785 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-sqfbs" podStartSLOduration=3.870483121 podStartE2EDuration="8.01276845s" podCreationTimestamp="2025-05-13 00:18:25 +0000 UTC" firstStartedPulling="2025-05-13 00:18:26.744862803 +0000 UTC m=+14.930366102" lastFinishedPulling="2025-05-13 00:18:30.887148132 +0000 UTC m=+19.072651431" observedRunningTime="2025-05-13 00:18:33.012517809 +0000 UTC m=+21.198021108" watchObservedRunningTime="2025-05-13 00:18:33.01276845 +0000 UTC m=+21.198271749" May 13 00:18:33.188620 systemd-networkd[1097]: flannel.1: Link UP May 13 00:18:33.188628 systemd-networkd[1097]: flannel.1: Gained carrier May 13 00:18:34.003384 kubelet[2167]: E0513 00:18:34.003343 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:34.509997 systemd-networkd[1097]: flannel.1: Gained IPv6LL May 13 00:18:35.347758 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:54160.service. May 13 00:18:35.386743 sshd[2779]: Accepted publickey for core from 10.0.0.1 port 54160 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:35.388062 sshd[2779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:35.393086 systemd-logind[1307]: New session 6 of user core. May 13 00:18:35.394400 systemd[1]: Started session-6.scope. May 13 00:18:35.519540 sshd[2779]: pam_unix(sshd:session): session closed for user core May 13 00:18:35.522356 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:54160.service: Deactivated successfully. May 13 00:18:35.523345 systemd-logind[1307]: Session 6 logged out. Waiting for processes to exit. May 13 00:18:35.523408 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:18:35.524041 systemd-logind[1307]: Removed session 6. May 13 00:18:40.522728 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:54166.service. May 13 00:18:40.562465 sshd[2816]: Accepted publickey for core from 10.0.0.1 port 54166 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:40.563863 sshd[2816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:40.567320 systemd-logind[1307]: New session 7 of user core. May 13 00:18:40.568157 systemd[1]: Started session-7.scope. May 13 00:18:40.683367 sshd[2816]: pam_unix(sshd:session): session closed for user core May 13 00:18:40.685645 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:54166.service: Deactivated successfully. May 13 00:18:40.686865 systemd-logind[1307]: Session 7 logged out. Waiting for processes to exit. May 13 00:18:40.687094 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:18:40.689050 systemd-logind[1307]: Removed session 7. 
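flannel.1, which systemd-networkd reports as up just above, is the VXLAN device flanneld creates for its default vxlan backend (VNI 1); networkd is only observing a link added via netlink, not managing it. For reference, the upstream kube-flannel DaemonSet launches the main container roughly as follows; the arguments on this particular node are not visible in the log and are assumed to match the stock manifest:

    containers:
    - name: kube-flannel
      image: docker.io/flannel/flannel:v0.22.0
      command: ["/opt/bin/flanneld"]
      args: ["--ip-masq", "--kube-subnet-mgr"]

Running flanneld with --ip-masq is also consistent with the delegate CNI config further below setting ipMasq to false, since masquerading is then handled by flanneld's own iptables rules rather than by the bridge plugin.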
May 13 00:18:41.953952 kubelet[2167]: E0513 00:18:41.953918 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:41.955710 env[1320]: time="2025-05-13T00:18:41.954974859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vwv9m,Uid:19b6df63-dc7b-466a-93d6-45002398138e,Namespace:kube-system,Attempt:0,}" May 13 00:18:41.976990 systemd-networkd[1097]: cni0: Link UP May 13 00:18:41.976997 systemd-networkd[1097]: cni0: Gained carrier May 13 00:18:41.977607 systemd-networkd[1097]: cni0: Lost carrier May 13 00:18:41.999894 systemd-networkd[1097]: veth641d7006: Link UP May 13 00:18:42.006207 kernel: cni0: port 1(veth641d7006) entered blocking state May 13 00:18:42.006291 kernel: cni0: port 1(veth641d7006) entered disabled state May 13 00:18:42.008696 kernel: device veth641d7006 entered promiscuous mode May 13 00:18:42.008758 kernel: cni0: port 1(veth641d7006) entered blocking state May 13 00:18:42.008777 kernel: cni0: port 1(veth641d7006) entered forwarding state May 13 00:18:42.009851 kernel: cni0: port 1(veth641d7006) entered disabled state May 13 00:18:42.018089 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth641d7006: link becomes ready May 13 00:18:42.018166 kernel: cni0: port 1(veth641d7006) entered blocking state May 13 00:18:42.018184 kernel: cni0: port 1(veth641d7006) entered forwarding state May 13 00:18:42.018182 systemd-networkd[1097]: veth641d7006: Gained carrier May 13 00:18:42.018403 systemd-networkd[1097]: cni0: Gained carrier May 13 00:18:42.019771 env[1320]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} May 13 00:18:42.019771 env[1320]: delegateAdd: netconf sent to delegate plugin: May 13 00:18:42.030644 env[1320]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:18:42.030459939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:18:42.030644 env[1320]: time="2025-05-13T00:18:42.030607539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:18:42.030644 env[1320]: time="2025-05-13T00:18:42.030618259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:18:42.031050 env[1320]: time="2025-05-13T00:18:42.030969299Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d863a244aba39441fab9198ed1e6ed10912cc81eb4b91fcace559c195773f79e pid=2880 runtime=io.containerd.runc.v2 May 13 00:18:42.061997 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:18:42.079784 env[1320]: time="2025-05-13T00:18:42.079736934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vwv9m,Uid:19b6df63-dc7b-466a-93d6-45002398138e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d863a244aba39441fab9198ed1e6ed10912cc81eb4b91fcace559c195773f79e\"" May 13 00:18:42.080448 kubelet[2167]: E0513 00:18:42.080423 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:42.084835 env[1320]: time="2025-05-13T00:18:42.084794541Z" level=info msg="CreateContainer within sandbox \"d863a244aba39441fab9198ed1e6ed10912cc81eb4b91fcace559c195773f79e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:18:42.098346 env[1320]: time="2025-05-13T00:18:42.098312922Z" level=info msg="CreateContainer within sandbox \"d863a244aba39441fab9198ed1e6ed10912cc81eb4b91fcace559c195773f79e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"580e4e1a28650850aacdcde9f9a8ba1f3a25c0f08a25b8425cad3cbfc660f3f0\"" May 13 00:18:42.099681 env[1320]: time="2025-05-13T00:18:42.098925523Z" level=info msg="StartContainer for \"580e4e1a28650850aacdcde9f9a8ba1f3a25c0f08a25b8425cad3cbfc660f3f0\"" May 13 00:18:42.158746 env[1320]: time="2025-05-13T00:18:42.158692694Z" level=info msg="StartContainer for \"580e4e1a28650850aacdcde9f9a8ba1f3a25c0f08a25b8425cad3cbfc660f3f0\" returns successfully" May 13 00:18:42.965135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514243369.mount: Deactivated successfully. 
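The "delegateAdd: netconf sent to delegate plugin" entries show how pod networking actually gets wired: the flannel CNI plugin fills in its delegate from subnet.env and hands the result to the standard bridge plugin. Reformatted for readability, the delegate configuration embedded in the log above is:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "isDefaultGateway": true,
      "isGateway": true,
      "hairpinMode": true,
      "ipMasq": false,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "ranges": [[ { "subnet": "192.168.0.0/24" } ]],
        "routes": [ { "dst": "192.168.0.0/17" } ]
      }
    }

The bridge plugin creates the cni0 bridge and one veth pair per pod (the "cni0: port 1(veth641d7006)" kernel messages), while host-local allocates pod addresses from 192.168.0.0/24 and installs a route for the wider 192.168.0.0/17 pod network inside each pod's network namespace.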
May 13 00:18:43.022336 kubelet[2167]: E0513 00:18:43.022301 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:43.043456 kubelet[2167]: I0513 00:18:43.043399 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vwv9m" podStartSLOduration=17.043381355 podStartE2EDuration="17.043381355s" podCreationTimestamp="2025-05-13 00:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:18:43.032799379 +0000 UTC m=+31.218302678" watchObservedRunningTime="2025-05-13 00:18:43.043381355 +0000 UTC m=+31.228884614" May 13 00:18:43.277973 systemd-networkd[1097]: veth641d7006: Gained IPv6LL May 13 00:18:43.789921 systemd-networkd[1097]: cni0: Gained IPv6LL May 13 00:18:43.954023 kubelet[2167]: E0513 00:18:43.953981 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:43.954436 env[1320]: time="2025-05-13T00:18:43.954389533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wtw6c,Uid:1a182c9a-e9df-463b-8e17-a3a5e06874fa,Namespace:kube-system,Attempt:0,}" May 13 00:18:43.973012 systemd-networkd[1097]: veth0351bc1c: Link UP May 13 00:18:43.975875 kernel: cni0: port 2(veth0351bc1c) entered blocking state May 13 00:18:43.975942 kernel: cni0: port 2(veth0351bc1c) entered disabled state May 13 00:18:43.975975 kernel: device veth0351bc1c entered promiscuous mode May 13 00:18:43.982831 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:18:43.982898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0351bc1c: link becomes ready May 13 00:18:43.982925 kernel: cni0: port 2(veth0351bc1c) entered blocking state May 13 00:18:43.982939 kernel: cni0: port 2(veth0351bc1c) entered forwarding state May 13 00:18:43.983016 systemd-networkd[1097]: veth0351bc1c: Gained carrier May 13 00:18:43.984279 env[1320]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000020928), "name":"cbr0", "type":"bridge"} May 13 00:18:43.984279 env[1320]: delegateAdd: netconf sent to delegate plugin: May 13 00:18:43.992195 env[1320]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:18:43.992138747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:18:43.992344 env[1320]: time="2025-05-13T00:18:43.992318827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:18:43.992452 env[1320]: time="2025-05-13T00:18:43.992430987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:18:43.992699 env[1320]: time="2025-05-13T00:18:43.992661987Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/973b0c938b2c19e7464993d624fdd5323edc4ff0f210aedd605b47254c749e3f pid=3015 runtime=io.containerd.runc.v2 May 13 00:18:44.006957 systemd[1]: run-containerd-runc-k8s.io-973b0c938b2c19e7464993d624fdd5323edc4ff0f210aedd605b47254c749e3f-runc.z59Kti.mount: Deactivated successfully. May 13 00:18:44.023822 kubelet[2167]: E0513 00:18:44.023721 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:44.027224 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:18:44.047546 env[1320]: time="2025-05-13T00:18:44.047437221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wtw6c,Uid:1a182c9a-e9df-463b-8e17-a3a5e06874fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"973b0c938b2c19e7464993d624fdd5323edc4ff0f210aedd605b47254c749e3f\"" May 13 00:18:44.048987 kubelet[2167]: E0513 00:18:44.048959 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:44.053592 env[1320]: time="2025-05-13T00:18:44.053558350Z" level=info msg="CreateContainer within sandbox \"973b0c938b2c19e7464993d624fdd5323edc4ff0f210aedd605b47254c749e3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:18:44.070996 env[1320]: time="2025-05-13T00:18:44.070926693Z" level=info msg="CreateContainer within sandbox \"973b0c938b2c19e7464993d624fdd5323edc4ff0f210aedd605b47254c749e3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0e1d28610d8445601cf101a580fbebba5a5f77fb061bcd7a09e31950679f7c1\"" May 13 00:18:44.072836 env[1320]: time="2025-05-13T00:18:44.071536414Z" level=info msg="StartContainer for \"c0e1d28610d8445601cf101a580fbebba5a5f77fb061bcd7a09e31950679f7c1\"" May 13 00:18:44.136782 env[1320]: time="2025-05-13T00:18:44.136736581Z" level=info msg="StartContainer for \"c0e1d28610d8445601cf101a580fbebba5a5f77fb061bcd7a09e31950679f7c1\" returns successfully" May 13 00:18:45.026667 kubelet[2167]: E0513 00:18:45.026627 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:45.027082 kubelet[2167]: E0513 00:18:45.027065 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:45.038548 kubelet[2167]: I0513 00:18:45.038495 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wtw6c" podStartSLOduration=19.038476582 podStartE2EDuration="19.038476582s" podCreationTimestamp="2025-05-13 00:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:18:45.03695714 +0000 UTC m=+33.222460439" watchObservedRunningTime="2025-05-13 00:18:45.038476582 +0000 UTC m=+33.223979881" May 13 00:18:45.453928 systemd-networkd[1097]: veth0351bc1c: Gained IPv6LL May 13 00:18:45.686433 systemd[1]: 
Started sshd@7-10.0.0.25:22-10.0.0.1:58788.service. May 13 00:18:45.729304 sshd[3090]: Accepted publickey for core from 10.0.0.1 port 58788 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:45.730573 sshd[3090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:45.735890 systemd-logind[1307]: New session 8 of user core. May 13 00:18:45.736384 systemd[1]: Started session-8.scope. May 13 00:18:45.868988 sshd[3090]: pam_unix(sshd:session): session closed for user core May 13 00:18:45.869671 systemd[1]: Started sshd@8-10.0.0.25:22-10.0.0.1:58796.service. May 13 00:18:45.872459 systemd-logind[1307]: Session 8 logged out. Waiting for processes to exit. May 13 00:18:45.872638 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:58788.service: Deactivated successfully. May 13 00:18:45.873506 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:18:45.874004 systemd-logind[1307]: Removed session 8. May 13 00:18:45.911697 sshd[3105]: Accepted publickey for core from 10.0.0.1 port 58796 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:45.913226 sshd[3105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:45.919705 systemd[1]: Started session-9.scope. May 13 00:18:45.920036 systemd-logind[1307]: New session 9 of user core. May 13 00:18:46.081034 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:58808.service. May 13 00:18:46.081077 sshd[3105]: pam_unix(sshd:session): session closed for user core May 13 00:18:46.098214 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:58796.service: Deactivated successfully. May 13 00:18:46.102561 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:18:46.103782 systemd-logind[1307]: Session 9 logged out. Waiting for processes to exit. May 13 00:18:46.110262 systemd-logind[1307]: Removed session 9. May 13 00:18:46.135519 sshd[3118]: Accepted publickey for core from 10.0.0.1 port 58808 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:46.137054 sshd[3118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:46.143156 systemd-logind[1307]: New session 10 of user core. May 13 00:18:46.144472 systemd[1]: Started session-10.scope. May 13 00:18:46.261528 sshd[3118]: pam_unix(sshd:session): session closed for user core May 13 00:18:46.263970 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:58808.service: Deactivated successfully. May 13 00:18:46.264798 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:18:46.265717 systemd-logind[1307]: Session 10 logged out. Waiting for processes to exit. May 13 00:18:46.266441 systemd-logind[1307]: Removed session 10. May 13 00:18:51.265047 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:58822.service. May 13 00:18:51.304636 sshd[3155]: Accepted publickey for core from 10.0.0.1 port 58822 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:51.306293 sshd[3155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:51.311097 systemd-logind[1307]: New session 11 of user core. May 13 00:18:51.311791 systemd[1]: Started session-11.scope. 
May 13 00:18:51.399387 kubelet[2167]: E0513 00:18:51.397795 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:51.471416 sshd[3155]: pam_unix(sshd:session): session closed for user core May 13 00:18:51.474001 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:58836.service. May 13 00:18:51.474470 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:58822.service: Deactivated successfully. May 13 00:18:51.475519 systemd-logind[1307]: Session 11 logged out. Waiting for processes to exit. May 13 00:18:51.475558 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:18:51.476299 systemd-logind[1307]: Removed session 11. May 13 00:18:51.511747 sshd[3172]: Accepted publickey for core from 10.0.0.1 port 58836 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:51.513143 sshd[3172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:51.516284 systemd-logind[1307]: New session 12 of user core. May 13 00:18:51.517412 systemd[1]: Started session-12.scope. May 13 00:18:51.715228 sshd[3172]: pam_unix(sshd:session): session closed for user core May 13 00:18:51.716712 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:58844.service. May 13 00:18:51.717765 systemd-logind[1307]: Session 12 logged out. Waiting for processes to exit. May 13 00:18:51.717947 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:58836.service: Deactivated successfully. May 13 00:18:51.718714 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:18:51.719177 systemd-logind[1307]: Removed session 12. May 13 00:18:51.753025 sshd[3184]: Accepted publickey for core from 10.0.0.1 port 58844 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:51.754174 sshd[3184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:51.757677 systemd-logind[1307]: New session 13 of user core. May 13 00:18:51.758583 systemd[1]: Started session-13.scope. May 13 00:18:52.037702 kubelet[2167]: E0513 00:18:52.037672 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:18:52.977122 sshd[3184]: pam_unix(sshd:session): session closed for user core May 13 00:18:52.979397 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:56612.service. May 13 00:18:52.986345 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:58844.service: Deactivated successfully. May 13 00:18:52.987992 systemd-logind[1307]: Session 13 logged out. Waiting for processes to exit. May 13 00:18:52.988000 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:18:52.999871 systemd-logind[1307]: Removed session 13. May 13 00:18:53.026424 sshd[3204]: Accepted publickey for core from 10.0.0.1 port 56612 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:53.027623 sshd[3204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:53.032118 systemd[1]: Started session-14.scope. May 13 00:18:53.032466 systemd-logind[1307]: New session 14 of user core. May 13 00:18:53.258577 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:56628.service. May 13 00:18:53.260039 sshd[3204]: pam_unix(sshd:session): session closed for user core May 13 00:18:53.262841 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:56612.service: Deactivated successfully. 
May 13 00:18:53.265654 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:18:53.265655 systemd-logind[1307]: Session 14 logged out. Waiting for processes to exit. May 13 00:18:53.269521 systemd-logind[1307]: Removed session 14. May 13 00:18:53.301909 sshd[3218]: Accepted publickey for core from 10.0.0.1 port 56628 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:53.303364 sshd[3218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:53.307964 systemd-logind[1307]: New session 15 of user core. May 13 00:18:53.308823 systemd[1]: Started session-15.scope. May 13 00:18:53.424167 sshd[3218]: pam_unix(sshd:session): session closed for user core May 13 00:18:53.426818 systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:56628.service: Deactivated successfully. May 13 00:18:53.427839 systemd-logind[1307]: Session 15 logged out. Waiting for processes to exit. May 13 00:18:53.428031 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:18:53.428973 systemd-logind[1307]: Removed session 15. May 13 00:18:58.427640 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:56642.service. May 13 00:18:58.463605 sshd[3282]: Accepted publickey for core from 10.0.0.1 port 56642 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:18:58.464867 sshd[3282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:18:58.468148 systemd-logind[1307]: New session 16 of user core. May 13 00:18:58.468981 systemd[1]: Started session-16.scope. May 13 00:18:58.577999 sshd[3282]: pam_unix(sshd:session): session closed for user core May 13 00:18:58.580641 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:56642.service: Deactivated successfully. May 13 00:18:58.581576 systemd-logind[1307]: Session 16 logged out. Waiting for processes to exit. May 13 00:18:58.581622 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:18:58.582373 systemd-logind[1307]: Removed session 16. May 13 00:19:03.580574 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:53792.service. May 13 00:19:03.616959 sshd[3317]: Accepted publickey for core from 10.0.0.1 port 53792 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:19:03.618699 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:19:03.622167 systemd-logind[1307]: New session 17 of user core. May 13 00:19:03.623042 systemd[1]: Started session-17.scope. May 13 00:19:03.747776 sshd[3317]: pam_unix(sshd:session): session closed for user core May 13 00:19:03.750524 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:53792.service: Deactivated successfully. May 13 00:19:03.751586 systemd-logind[1307]: Session 17 logged out. Waiting for processes to exit. May 13 00:19:03.751652 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:19:03.752440 systemd-logind[1307]: Removed session 17. May 13 00:19:08.750457 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:53794.service. May 13 00:19:08.786276 sshd[3352]: Accepted publickey for core from 10.0.0.1 port 53794 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:19:08.787927 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:19:08.792633 systemd[1]: Started session-18.scope. May 13 00:19:08.792836 systemd-logind[1307]: New session 18 of user core. 
May 13 00:19:08.904179 sshd[3352]: pam_unix(sshd:session): session closed for user core May 13 00:19:08.906501 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:53794.service: Deactivated successfully. May 13 00:19:08.907487 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:19:08.907789 systemd-logind[1307]: Session 18 logged out. Waiting for processes to exit. May 13 00:19:08.908493 systemd-logind[1307]: Removed session 18. May 13 00:19:13.907375 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:53420.service. May 13 00:19:13.943510 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 53420 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:19:13.945133 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:19:13.948909 systemd-logind[1307]: New session 19 of user core. May 13 00:19:13.949414 systemd[1]: Started session-19.scope. May 13 00:19:14.055205 sshd[3390]: pam_unix(sshd:session): session closed for user core May 13 00:19:14.057450 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:53420.service: Deactivated successfully. May 13 00:19:14.058433 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:19:14.058449 systemd-logind[1307]: Session 19 logged out. Waiting for processes to exit. May 13 00:19:14.059325 systemd-logind[1307]: Removed session 19.