May 14 00:51:36.726963 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 14 00:51:36.726982 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue May 13 23:17:31 -00 2025 May 14 00:51:36.726990 kernel: efi: EFI v2.70 by EDK II May 14 00:51:36.726996 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 14 00:51:36.727001 kernel: random: crng init done May 14 00:51:36.727006 kernel: ACPI: Early table checksum verification disabled May 14 00:51:36.727012 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 14 00:51:36.727019 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 14 00:51:36.727025 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:36.727030 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:36.727035 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:36.727040 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:36.727046 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:36.727051 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:36.727059 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:36.727065 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:36.727070 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:51:36.727076 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 14 00:51:36.727082 kernel: NUMA: Failed to initialise from firmware May 14 00:51:36.727088 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:51:36.727105 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] May 14 00:51:36.727111 kernel: Zone ranges: May 14 00:51:36.727117 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:51:36.727124 kernel: DMA32 empty May 14 00:51:36.727129 kernel: Normal empty May 14 00:51:36.727135 kernel: Movable zone start for each node May 14 00:51:36.727140 kernel: Early memory node ranges May 14 00:51:36.727150 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 14 00:51:36.727156 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 14 00:51:36.727162 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 14 00:51:36.727167 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 14 00:51:36.727173 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 14 00:51:36.727179 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 14 00:51:36.727184 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 14 00:51:36.727190 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:51:36.727197 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 14 00:51:36.727203 kernel: psci: probing for conduit method from ACPI. May 14 00:51:36.727209 kernel: psci: PSCIv1.1 detected in firmware. 
May 14 00:51:36.727214 kernel: psci: Using standard PSCI v0.2 function IDs May 14 00:51:36.727220 kernel: psci: Trusted OS migration not required May 14 00:51:36.727228 kernel: psci: SMC Calling Convention v1.1 May 14 00:51:36.727234 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 14 00:51:36.727242 kernel: ACPI: SRAT not present May 14 00:51:36.727248 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 14 00:51:36.727254 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 14 00:51:36.727261 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 14 00:51:36.727267 kernel: Detected PIPT I-cache on CPU0 May 14 00:51:36.727273 kernel: CPU features: detected: GIC system register CPU interface May 14 00:51:36.727279 kernel: CPU features: detected: Hardware dirty bit management May 14 00:51:36.727284 kernel: CPU features: detected: Spectre-v4 May 14 00:51:36.727291 kernel: CPU features: detected: Spectre-BHB May 14 00:51:36.727297 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 00:51:36.727304 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 00:51:36.727310 kernel: CPU features: detected: ARM erratum 1418040 May 14 00:51:36.727315 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 00:51:36.727322 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 14 00:51:36.727328 kernel: Policy zone: DMA May 14 00:51:36.727335 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:51:36.727341 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 00:51:36.727347 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 00:51:36.727353 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:51:36.727359 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 00:51:36.727367 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved) May 14 00:51:36.727373 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 14 00:51:36.727379 kernel: trace event string verifier disabled May 14 00:51:36.727385 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 00:51:36.727392 kernel: rcu: RCU event tracing is enabled. May 14 00:51:36.727398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 14 00:51:36.727404 kernel: Trampoline variant of Tasks RCU enabled. May 14 00:51:36.727416 kernel: Tracing variant of Tasks RCU enabled. May 14 00:51:36.727422 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 14 00:51:36.727429 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 14 00:51:36.727435 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 00:51:36.727442 kernel: GICv3: 256 SPIs implemented May 14 00:51:36.727448 kernel: GICv3: 0 Extended SPIs implemented May 14 00:51:36.727454 kernel: GICv3: Distributor has no Range Selector support May 14 00:51:36.727460 kernel: Root IRQ handler: gic_handle_irq May 14 00:51:36.727466 kernel: GICv3: 16 PPIs implemented May 14 00:51:36.727472 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 14 00:51:36.727478 kernel: ACPI: SRAT not present May 14 00:51:36.727484 kernel: ITS [mem 0x08080000-0x0809ffff] May 14 00:51:36.727490 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 14 00:51:36.727496 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 14 00:51:36.727502 kernel: GICv3: using LPI property table @0x00000000400d0000 May 14 00:51:36.727508 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 14 00:51:36.727516 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:51:36.727522 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 00:51:36.727529 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 00:51:36.727535 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 00:51:36.727541 kernel: arm-pv: using stolen time PV May 14 00:51:36.727547 kernel: Console: colour dummy device 80x25 May 14 00:51:36.727553 kernel: ACPI: Core revision 20210730 May 14 00:51:36.727559 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 00:51:36.727566 kernel: pid_max: default: 32768 minimum: 301 May 14 00:51:36.727572 kernel: LSM: Security Framework initializing May 14 00:51:36.727593 kernel: SELinux: Initializing. May 14 00:51:36.727599 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:51:36.727606 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:51:36.727612 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 14 00:51:36.727619 kernel: rcu: Hierarchical SRCU implementation. May 14 00:51:36.727625 kernel: Platform MSI: ITS@0x8080000 domain created May 14 00:51:36.727631 kernel: PCI/MSI: ITS@0x8080000 domain created May 14 00:51:36.727637 kernel: Remapping and enabling EFI services. May 14 00:51:36.727643 kernel: smp: Bringing up secondary CPUs ... 
May 14 00:51:36.727651 kernel: Detected PIPT I-cache on CPU1 May 14 00:51:36.727657 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 14 00:51:36.727663 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 14 00:51:36.727670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:51:36.727676 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 00:51:36.727682 kernel: Detected PIPT I-cache on CPU2 May 14 00:51:36.727688 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 14 00:51:36.727694 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 14 00:51:36.727701 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:51:36.727707 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 14 00:51:36.727714 kernel: Detected PIPT I-cache on CPU3 May 14 00:51:36.727720 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 14 00:51:36.727726 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 14 00:51:36.727733 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:51:36.727743 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 14 00:51:36.727751 kernel: smp: Brought up 1 node, 4 CPUs May 14 00:51:36.727757 kernel: SMP: Total of 4 processors activated. May 14 00:51:36.727763 kernel: CPU features: detected: 32-bit EL0 Support May 14 00:51:36.727770 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 00:51:36.727777 kernel: CPU features: detected: Common not Private translations May 14 00:51:36.727783 kernel: CPU features: detected: CRC32 instructions May 14 00:51:36.727791 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 00:51:36.727799 kernel: CPU features: detected: LSE atomic instructions May 14 00:51:36.727805 kernel: CPU features: detected: Privileged Access Never May 14 00:51:36.727812 kernel: CPU features: detected: RAS Extension Support May 14 00:51:36.727819 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 00:51:36.727825 kernel: CPU: All CPU(s) started at EL1 May 14 00:51:36.727833 kernel: alternatives: patching kernel code May 14 00:51:36.727839 kernel: devtmpfs: initialized May 14 00:51:36.727846 kernel: KASLR enabled May 14 00:51:36.727852 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 00:51:36.727859 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 14 00:51:36.727866 kernel: pinctrl core: initialized pinctrl subsystem May 14 00:51:36.727872 kernel: SMBIOS 3.0.0 present. 
May 14 00:51:36.727879 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 14 00:51:36.727886 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 00:51:36.727893 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 14 00:51:36.727900 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 00:51:36.727907 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 00:51:36.727913 kernel: audit: initializing netlink subsys (disabled) May 14 00:51:36.727920 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 May 14 00:51:36.727926 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 00:51:36.727933 kernel: cpuidle: using governor menu May 14 00:51:36.727939 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 14 00:51:36.727946 kernel: ASID allocator initialised with 32768 entries May 14 00:51:36.727953 kernel: ACPI: bus type PCI registered May 14 00:51:36.727960 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 00:51:36.727967 kernel: Serial: AMBA PL011 UART driver May 14 00:51:36.727974 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 14 00:51:36.727981 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 14 00:51:36.727987 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 14 00:51:36.730434 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 14 00:51:36.730443 kernel: cryptd: max_cpu_qlen set to 1000 May 14 00:51:36.730450 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 00:51:36.730462 kernel: ACPI: Added _OSI(Module Device) May 14 00:51:36.730469 kernel: ACPI: Added _OSI(Processor Device) May 14 00:51:36.730476 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 00:51:36.730482 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 00:51:36.730489 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 14 00:51:36.730495 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 14 00:51:36.730507 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 14 00:51:36.730514 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 00:51:36.730521 kernel: ACPI: Interpreter enabled May 14 00:51:36.730529 kernel: ACPI: Using GIC for interrupt routing May 14 00:51:36.730536 kernel: ACPI: MCFG table detected, 1 entries May 14 00:51:36.730542 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 14 00:51:36.730549 kernel: printk: console [ttyAMA0] enabled May 14 00:51:36.730556 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 00:51:36.730682 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:51:36.730745 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 14 00:51:36.730801 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 14 00:51:36.730861 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 14 00:51:36.730918 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 14 00:51:36.730927 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 14 00:51:36.730933 kernel: PCI host bridge to bus 0000:00 May 14 00:51:36.730999 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 14 00:51:36.731053 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 14 00:51:36.731164 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 14 00:51:36.731222 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 00:51:36.731295 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 14 00:51:36.731425 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 14 00:51:36.731503 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 14 00:51:36.731572 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 14 00:51:36.731634 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:51:36.731692 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:51:36.731754 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 14 00:51:36.731814 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 14 00:51:36.731896 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 14 00:51:36.731954 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 14 00:51:36.732022 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 14 00:51:36.732032 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 14 00:51:36.732039 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 14 00:51:36.732048 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 14 00:51:36.732055 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 14 00:51:36.732061 kernel: iommu: Default domain type: Translated May 14 00:51:36.732068 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 00:51:36.732075 kernel: vgaarb: loaded May 14 00:51:36.732081 kernel: pps_core: LinuxPPS API ver. 1 registered May 14 00:51:36.732088 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 14 00:51:36.732106 kernel: PTP clock support registered May 14 00:51:36.732113 kernel: Registered efivars operations May 14 00:51:36.732121 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 00:51:36.732128 kernel: VFS: Disk quotas dquot_6.6.0 May 14 00:51:36.732134 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 00:51:36.732141 kernel: pnp: PnP ACPI init May 14 00:51:36.732208 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 14 00:51:36.732217 kernel: pnp: PnP ACPI: found 1 devices May 14 00:51:36.732224 kernel: NET: Registered PF_INET protocol family May 14 00:51:36.732231 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 00:51:36.732239 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 00:51:36.732246 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 00:51:36.732252 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 00:51:36.732259 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 14 00:51:36.732266 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 00:51:36.732272 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:51:36.732279 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:51:36.732285 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 00:51:36.732292 kernel: PCI: CLS 0 bytes, default 64 May 14 00:51:36.732299 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 00:51:36.732306 kernel: kvm [1]: HYP mode not available May 14 00:51:36.732313 kernel: Initialise system trusted keyrings May 14 00:51:36.732319 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 00:51:36.732326 kernel: Key type asymmetric registered May 14 00:51:36.732333 kernel: Asymmetric key parser 'x509' registered May 14 00:51:36.732339 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 14 00:51:36.732346 kernel: io scheduler mq-deadline registered May 14 00:51:36.732352 kernel: io scheduler kyber registered May 14 00:51:36.732360 kernel: io scheduler bfq registered May 14 00:51:36.732366 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 00:51:36.732373 kernel: ACPI: button: Power Button [PWRB] May 14 00:51:36.732380 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 00:51:36.732448 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 14 00:51:36.732458 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 00:51:36.732464 kernel: thunder_xcv, ver 1.0 May 14 00:51:36.732471 kernel: thunder_bgx, ver 1.0 May 14 00:51:36.732477 kernel: nicpf, ver 1.0 May 14 00:51:36.732485 kernel: nicvf, ver 1.0 May 14 00:51:36.732552 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 00:51:36.732608 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:51:36 UTC (1747183896) May 14 00:51:36.732617 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 00:51:36.732624 kernel: NET: Registered PF_INET6 protocol family May 14 00:51:36.732630 kernel: Segment Routing with IPv6 May 14 00:51:36.732637 kernel: In-situ OAM (IOAM) with IPv6 May 14 00:51:36.732644 kernel: NET: Registered PF_PACKET protocol family May 14 00:51:36.732652 kernel: Key type 
dns_resolver registered May 14 00:51:36.732658 kernel: registered taskstats version 1 May 14 00:51:36.732665 kernel: Loading compiled-in X.509 certificates May 14 00:51:36.732672 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 7727f4e7680a5b8534f3d5e7bb84b1f695e8c34b' May 14 00:51:36.732678 kernel: Key type .fscrypt registered May 14 00:51:36.732685 kernel: Key type fscrypt-provisioning registered May 14 00:51:36.732691 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 00:51:36.732698 kernel: ima: Allocated hash algorithm: sha1 May 14 00:51:36.732704 kernel: ima: No architecture policies found May 14 00:51:36.732712 kernel: clk: Disabling unused clocks May 14 00:51:36.732719 kernel: Freeing unused kernel memory: 36480K May 14 00:51:36.732725 kernel: Run /init as init process May 14 00:51:36.732732 kernel: with arguments: May 14 00:51:36.732738 kernel: /init May 14 00:51:36.732744 kernel: with environment: May 14 00:51:36.732751 kernel: HOME=/ May 14 00:51:36.732757 kernel: TERM=linux May 14 00:51:36.732763 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 00:51:36.732773 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:51:36.732782 systemd[1]: Detected virtualization kvm. May 14 00:51:36.732789 systemd[1]: Detected architecture arm64. May 14 00:51:36.732796 systemd[1]: Running in initrd. May 14 00:51:36.732803 systemd[1]: No hostname configured, using default hostname. May 14 00:51:36.732809 systemd[1]: Hostname set to . May 14 00:51:36.732817 systemd[1]: Initializing machine ID from VM UUID. May 14 00:51:36.732825 systemd[1]: Queued start job for default target initrd.target. May 14 00:51:36.732832 systemd[1]: Started systemd-ask-password-console.path. May 14 00:51:36.732839 systemd[1]: Reached target cryptsetup.target. May 14 00:51:36.732846 systemd[1]: Reached target paths.target. May 14 00:51:36.732853 systemd[1]: Reached target slices.target. May 14 00:51:36.732859 systemd[1]: Reached target swap.target. May 14 00:51:36.732866 systemd[1]: Reached target timers.target. May 14 00:51:36.732874 systemd[1]: Listening on iscsid.socket. May 14 00:51:36.732882 systemd[1]: Listening on iscsiuio.socket. May 14 00:51:36.732889 systemd[1]: Listening on systemd-journald-audit.socket. May 14 00:51:36.732896 systemd[1]: Listening on systemd-journald-dev-log.socket. May 14 00:51:36.732903 systemd[1]: Listening on systemd-journald.socket. May 14 00:51:36.732910 systemd[1]: Listening on systemd-networkd.socket. May 14 00:51:36.732917 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:51:36.732924 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:51:36.732931 systemd[1]: Reached target sockets.target. May 14 00:51:36.732937 systemd[1]: Starting kmod-static-nodes.service... May 14 00:51:36.732946 systemd[1]: Finished network-cleanup.service. May 14 00:51:36.732953 systemd[1]: Starting systemd-fsck-usr.service... May 14 00:51:36.732960 systemd[1]: Starting systemd-journald.service... May 14 00:51:36.732966 systemd[1]: Starting systemd-modules-load.service... May 14 00:51:36.732973 systemd[1]: Starting systemd-resolved.service... May 14 00:51:36.732980 systemd[1]: Starting systemd-vconsole-setup.service... 
May 14 00:51:36.732987 systemd[1]: Finished kmod-static-nodes.service. May 14 00:51:36.732994 systemd[1]: Finished systemd-fsck-usr.service. May 14 00:51:36.733001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 14 00:51:36.733009 systemd[1]: Finished systemd-vconsole-setup.service. May 14 00:51:36.733016 kernel: audit: type=1130 audit(1747183896.727:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.733023 systemd[1]: Starting dracut-cmdline-ask.service... May 14 00:51:36.733031 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 14 00:51:36.733040 systemd-journald[290]: Journal started May 14 00:51:36.733079 systemd-journald[290]: Runtime Journal (/run/log/journal/8bc3e99848c24c5dbfd8863b246d8eb4) is 6.0M, max 48.7M, 42.6M free. May 14 00:51:36.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.731511 systemd-modules-load[291]: Inserted module 'overlay' May 14 00:51:36.737516 systemd[1]: Started systemd-journald.service. May 14 00:51:36.737534 kernel: audit: type=1130 audit(1747183896.733:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.742114 kernel: audit: type=1130 audit(1747183896.738:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.746450 systemd[1]: Finished dracut-cmdline-ask.service. May 14 00:51:36.747970 systemd[1]: Starting dracut-cmdline.service... May 14 00:51:36.752323 kernel: audit: type=1130 audit(1747183896.747:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.756485 systemd-resolved[292]: Positive Trust Anchors: May 14 00:51:36.756498 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:51:36.759659 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 14 00:51:36.756526 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:51:36.760627 systemd-resolved[292]: Defaulting to hostname 'linux'. May 14 00:51:36.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.761346 systemd[1]: Started systemd-resolved.service. May 14 00:51:36.770204 kernel: audit: type=1130 audit(1747183896.765:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.770221 kernel: Bridge firewalling registered May 14 00:51:36.768392 systemd[1]: Reached target nss-lookup.target. May 14 00:51:36.768825 systemd-modules-load[291]: Inserted module 'br_netfilter' May 14 00:51:36.771795 dracut-cmdline[309]: dracut-dracut-053 May 14 00:51:36.772594 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:51:36.780125 kernel: SCSI subsystem initialized May 14 00:51:36.787282 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 00:51:36.787310 kernel: device-mapper: uevent: version 1.0.3 May 14 00:51:36.788300 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 14 00:51:36.790563 systemd-modules-load[291]: Inserted module 'dm_multipath' May 14 00:51:36.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.791655 systemd[1]: Finished systemd-modules-load.service. May 14 00:51:36.797091 kernel: audit: type=1130 audit(1747183896.792:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.793155 systemd[1]: Starting systemd-sysctl.service... May 14 00:51:36.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.802114 systemd[1]: Finished systemd-sysctl.service. May 14 00:51:36.806115 kernel: audit: type=1130 audit(1747183896.802:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.831115 kernel: Loading iSCSI transport class v2.0-870. 
May 14 00:51:36.843111 kernel: iscsi: registered transport (tcp) May 14 00:51:36.859277 kernel: iscsi: registered transport (qla4xxx) May 14 00:51:36.859292 kernel: QLogic iSCSI HBA Driver May 14 00:51:36.892324 systemd[1]: Finished dracut-cmdline.service. May 14 00:51:36.896129 kernel: audit: type=1130 audit(1747183896.892:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:36.893818 systemd[1]: Starting dracut-pre-udev.service... May 14 00:51:36.938129 kernel: raid6: neonx8 gen() 13763 MB/s May 14 00:51:36.954117 kernel: raid6: neonx8 xor() 10830 MB/s May 14 00:51:36.971116 kernel: raid6: neonx4 gen() 13514 MB/s May 14 00:51:36.988129 kernel: raid6: neonx4 xor() 11039 MB/s May 14 00:51:37.005121 kernel: raid6: neonx2 gen() 12889 MB/s May 14 00:51:37.022119 kernel: raid6: neonx2 xor() 10255 MB/s May 14 00:51:37.039119 kernel: raid6: neonx1 gen() 10489 MB/s May 14 00:51:37.056128 kernel: raid6: neonx1 xor() 8766 MB/s May 14 00:51:37.073128 kernel: raid6: int64x8 gen() 6254 MB/s May 14 00:51:37.090124 kernel: raid6: int64x8 xor() 3528 MB/s May 14 00:51:37.107126 kernel: raid6: int64x4 gen() 7229 MB/s May 14 00:51:37.124125 kernel: raid6: int64x4 xor() 3843 MB/s May 14 00:51:37.141127 kernel: raid6: int64x2 gen() 6128 MB/s May 14 00:51:37.158115 kernel: raid6: int64x2 xor() 3306 MB/s May 14 00:51:37.175125 kernel: raid6: int64x1 gen() 5027 MB/s May 14 00:51:37.192205 kernel: raid6: int64x1 xor() 2636 MB/s May 14 00:51:37.192227 kernel: raid6: using algorithm neonx8 gen() 13763 MB/s May 14 00:51:37.192244 kernel: raid6: .... xor() 10830 MB/s, rmw enabled May 14 00:51:37.193279 kernel: raid6: using neon recovery algorithm May 14 00:51:37.203117 kernel: xor: measuring software checksum speed May 14 00:51:37.204342 kernel: 8regs : 15097 MB/sec May 14 00:51:37.204363 kernel: 32regs : 20697 MB/sec May 14 00:51:37.205587 kernel: arm64_neon : 24542 MB/sec May 14 00:51:37.205598 kernel: xor: using function: arm64_neon (24542 MB/sec) May 14 00:51:37.259115 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 14 00:51:37.268800 systemd[1]: Finished dracut-pre-udev.service. May 14 00:51:37.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:37.270673 systemd[1]: Starting systemd-udevd.service... May 14 00:51:37.274005 kernel: audit: type=1130 audit(1747183897.269:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:37.270000 audit: BPF prog-id=7 op=LOAD May 14 00:51:37.270000 audit: BPF prog-id=8 op=LOAD May 14 00:51:37.285180 systemd-udevd[491]: Using default interface naming scheme 'v252'. May 14 00:51:37.288514 systemd[1]: Started systemd-udevd.service. May 14 00:51:37.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:37.290993 systemd[1]: Starting dracut-pre-trigger.service... 
May 14 00:51:37.300955 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation May 14 00:51:37.325632 systemd[1]: Finished dracut-pre-trigger.service. May 14 00:51:37.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:37.327059 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:51:37.362858 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:51:37.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:37.387514 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 00:51:37.396241 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 00:51:37.396256 kernel: GPT:9289727 != 19775487 May 14 00:51:37.396265 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 00:51:37.396273 kernel: GPT:9289727 != 19775487 May 14 00:51:37.396286 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 00:51:37.396294 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:51:37.407624 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 14 00:51:37.409182 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 14 00:51:37.414116 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (557) May 14 00:51:37.418570 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 14 00:51:37.423868 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 14 00:51:37.427274 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:51:37.428872 systemd[1]: Starting disk-uuid.service... May 14 00:51:37.434442 disk-uuid[564]: Primary Header is updated. May 14 00:51:37.434442 disk-uuid[564]: Secondary Entries is updated. May 14 00:51:37.434442 disk-uuid[564]: Secondary Header is updated. May 14 00:51:37.437551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:51:38.447114 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:51:38.447157 disk-uuid[565]: The operation has completed successfully. May 14 00:51:38.464693 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 00:51:38.465830 systemd[1]: Finished disk-uuid.service. May 14 00:51:38.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.472495 systemd[1]: Starting verity-setup.service... May 14 00:51:38.491259 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 00:51:38.511630 systemd[1]: Found device dev-mapper-usr.device. May 14 00:51:38.513723 systemd[1]: Mounting sysusr-usr.mount... May 14 00:51:38.515463 systemd[1]: Finished verity-setup.service. May 14 00:51:38.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:51:38.557857 systemd[1]: Mounted sysusr-usr.mount. May 14 00:51:38.559162 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 14 00:51:38.558686 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 14 00:51:38.559309 systemd[1]: Starting ignition-setup.service... May 14 00:51:38.561596 systemd[1]: Starting parse-ip-for-networkd.service... May 14 00:51:38.567616 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:51:38.567646 kernel: BTRFS info (device vda6): using free space tree May 14 00:51:38.567660 kernel: BTRFS info (device vda6): has skinny extents May 14 00:51:38.577143 systemd[1]: mnt-oem.mount: Deactivated successfully. May 14 00:51:38.583899 systemd[1]: Finished ignition-setup.service. May 14 00:51:38.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.585371 systemd[1]: Starting ignition-fetch-offline.service... May 14 00:51:38.641810 systemd[1]: Finished parse-ip-for-networkd.service. May 14 00:51:38.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.643000 audit: BPF prog-id=9 op=LOAD May 14 00:51:38.643977 systemd[1]: Starting systemd-networkd.service... May 14 00:51:38.661646 ignition[656]: Ignition 2.14.0 May 14 00:51:38.661655 ignition[656]: Stage: fetch-offline May 14 00:51:38.661691 ignition[656]: no configs at "/usr/lib/ignition/base.d" May 14 00:51:38.661701 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:38.661836 ignition[656]: parsed url from cmdline: "" May 14 00:51:38.661840 ignition[656]: no config URL provided May 14 00:51:38.661844 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:51:38.661851 ignition[656]: no config at "/usr/lib/ignition/user.ign" May 14 00:51:38.661867 ignition[656]: op(1): [started] loading QEMU firmware config module May 14 00:51:38.661871 ignition[656]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 00:51:38.667542 ignition[656]: op(1): [finished] loading QEMU firmware config module May 14 00:51:38.670122 systemd-networkd[741]: lo: Link UP May 14 00:51:38.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.670126 systemd-networkd[741]: lo: Gained carrier May 14 00:51:38.670479 systemd-networkd[741]: Enumeration completed May 14 00:51:38.670572 systemd[1]: Started systemd-networkd.service. May 14 00:51:38.670655 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:51:38.671592 systemd-networkd[741]: eth0: Link UP May 14 00:51:38.671595 systemd-networkd[741]: eth0: Gained carrier May 14 00:51:38.672223 systemd[1]: Reached target network.target. May 14 00:51:38.674162 systemd[1]: Starting iscsiuio.service... May 14 00:51:38.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.683370 systemd[1]: Started iscsiuio.service. 
May 14 00:51:38.685780 systemd[1]: Starting iscsid.service... May 14 00:51:38.689177 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 14 00:51:38.689177 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 14 00:51:38.689177 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 14 00:51:38.689177 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored. May 14 00:51:38.689177 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 14 00:51:38.689177 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 14 00:51:38.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.692049 systemd[1]: Started iscsid.service. May 14 00:51:38.697741 systemd[1]: Starting dracut-initqueue.service... May 14 00:51:38.699959 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:51:38.707115 systemd[1]: Finished dracut-initqueue.service. May 14 00:51:38.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.708023 systemd[1]: Reached target remote-fs-pre.target. May 14 00:51:38.709613 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:51:38.711199 systemd[1]: Reached target remote-fs.target. May 14 00:51:38.713452 systemd[1]: Starting dracut-pre-mount.service... May 14 00:51:38.717114 ignition[656]: parsing config with SHA512: 7ee648da332cfa6e3dede07e0190f06018f390131716301b6403ae563d0d511b224b28ede22469ef2a3113ba0825d31a0a58845c0acd96039876f4ec9b5a4e5a May 14 00:51:38.723215 unknown[656]: fetched base config from "system" May 14 00:51:38.724063 unknown[656]: fetched user config from "qemu" May 14 00:51:38.724215 systemd[1]: Finished dracut-pre-mount.service. May 14 00:51:38.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.726055 ignition[656]: fetch-offline: fetch-offline passed May 14 00:51:38.726132 ignition[656]: Ignition finished successfully May 14 00:51:38.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.726920 systemd[1]: Finished ignition-fetch-offline.service. May 14 00:51:38.727842 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 00:51:38.728474 systemd[1]: Starting ignition-kargs.service... 
May 14 00:51:38.736287 ignition[762]: Ignition 2.14.0 May 14 00:51:38.736297 ignition[762]: Stage: kargs May 14 00:51:38.736378 ignition[762]: no configs at "/usr/lib/ignition/base.d" May 14 00:51:38.736387 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:38.738546 systemd[1]: Finished ignition-kargs.service. May 14 00:51:38.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.737190 ignition[762]: kargs: kargs passed May 14 00:51:38.737227 ignition[762]: Ignition finished successfully May 14 00:51:38.740818 systemd[1]: Starting ignition-disks.service... May 14 00:51:38.746445 ignition[768]: Ignition 2.14.0 May 14 00:51:38.746455 ignition[768]: Stage: disks May 14 00:51:38.746535 ignition[768]: no configs at "/usr/lib/ignition/base.d" May 14 00:51:38.746545 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:38.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.748173 systemd[1]: Finished ignition-disks.service. May 14 00:51:38.747427 ignition[768]: disks: disks passed May 14 00:51:38.749249 systemd[1]: Reached target initrd-root-device.target. May 14 00:51:38.747466 ignition[768]: Ignition finished successfully May 14 00:51:38.750947 systemd[1]: Reached target local-fs-pre.target. May 14 00:51:38.752286 systemd[1]: Reached target local-fs.target. May 14 00:51:38.753436 systemd[1]: Reached target sysinit.target. May 14 00:51:38.754781 systemd[1]: Reached target basic.target. May 14 00:51:38.756761 systemd[1]: Starting systemd-fsck-root.service... May 14 00:51:38.767086 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 14 00:51:38.770537 systemd[1]: Finished systemd-fsck-root.service. May 14 00:51:38.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.773660 systemd[1]: Mounting sysroot.mount... May 14 00:51:38.780119 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 14 00:51:38.780493 systemd[1]: Mounted sysroot.mount. May 14 00:51:38.781237 systemd[1]: Reached target initrd-root-fs.target. May 14 00:51:38.783785 systemd[1]: Mounting sysroot-usr.mount... May 14 00:51:38.784671 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 14 00:51:38.784707 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 00:51:38.784730 systemd[1]: Reached target ignition-diskful.target. May 14 00:51:38.786524 systemd[1]: Mounted sysroot-usr.mount. May 14 00:51:38.788408 systemd[1]: Starting initrd-setup-root.service... 
May 14 00:51:38.792480 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory May 14 00:51:38.795931 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory May 14 00:51:38.799864 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory May 14 00:51:38.803871 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory May 14 00:51:38.827872 systemd[1]: Finished initrd-setup-root.service. May 14 00:51:38.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.829364 systemd[1]: Starting ignition-mount.service... May 14 00:51:38.830627 systemd[1]: Starting sysroot-boot.service... May 14 00:51:38.834471 bash[827]: umount: /sysroot/usr/share/oem: not mounted. May 14 00:51:38.843007 ignition[829]: INFO : Ignition 2.14.0 May 14 00:51:38.843007 ignition[829]: INFO : Stage: mount May 14 00:51:38.845248 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:51:38.845248 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:38.845248 ignition[829]: INFO : mount: mount passed May 14 00:51:38.845248 ignition[829]: INFO : Ignition finished successfully May 14 00:51:38.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.845485 systemd[1]: Finished ignition-mount.service. May 14 00:51:38.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:38.849673 systemd[1]: Finished sysroot-boot.service. May 14 00:51:39.522759 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 14 00:51:39.529258 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837) May 14 00:51:39.529286 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:51:39.531428 kernel: BTRFS info (device vda6): using free space tree May 14 00:51:39.531442 kernel: BTRFS info (device vda6): has skinny extents May 14 00:51:39.534262 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 14 00:51:39.535936 systemd[1]: Starting ignition-files.service... 
May 14 00:51:39.549323 ignition[857]: INFO : Ignition 2.14.0 May 14 00:51:39.549323 ignition[857]: INFO : Stage: files May 14 00:51:39.550908 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:51:39.550908 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:39.550908 ignition[857]: DEBUG : files: compiled without relabeling support, skipping May 14 00:51:39.554861 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:51:39.554861 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:51:39.558254 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:51:39.559579 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:51:39.559579 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:51:39.559579 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 14 00:51:39.559579 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 14 00:51:39.559579 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 00:51:39.559579 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 00:51:39.558936 unknown[857]: wrote ssh authorized keys file for user: core May 14 00:51:39.603659 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 00:51:39.942733 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 00:51:39.942733 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:51:39.946486 ignition[857]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:51:39.946486 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 14 00:51:40.323909 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 00:51:40.509275 systemd-networkd[741]: eth0: Gained IPv6LL May 14 00:51:40.685033 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:51:40.685033 ignition[857]: INFO : files: op(c): [started] processing unit "containerd.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 14 00:51:40.689259 ignition[857]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 14 00:51:40.689259 ignition[857]: INFO : files: op(c): [finished] processing unit "containerd.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" May 14 00:51:40.689259 ignition[857]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:51:40.720603 ignition[857]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 
00:51:40.722199 ignition[857]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" May 14 00:51:40.722199 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:51:40.722199 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:51:40.722199 ignition[857]: INFO : files: files passed May 14 00:51:40.722199 ignition[857]: INFO : Ignition finished successfully May 14 00:51:40.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.722082 systemd[1]: Finished ignition-files.service. May 14 00:51:40.724061 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 14 00:51:40.725213 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 14 00:51:40.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.735761 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 14 00:51:40.725871 systemd[1]: Starting ignition-quench.service... May 14 00:51:40.739036 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:51:40.729545 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:51:40.729628 systemd[1]: Finished ignition-quench.service. May 14 00:51:40.732500 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 14 00:51:40.734168 systemd[1]: Reached target ignition-complete.target. May 14 00:51:40.737063 systemd[1]: Starting initrd-parse-etc.service... May 14 00:51:40.748670 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:51:40.748756 systemd[1]: Finished initrd-parse-etc.service. May 14 00:51:40.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.750471 systemd[1]: Reached target initrd-fs.target. May 14 00:51:40.751695 systemd[1]: Reached target initrd.target. May 14 00:51:40.752993 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 14 00:51:40.753719 systemd[1]: Starting dracut-pre-pivot.service... May 14 00:51:40.763587 systemd[1]: Finished dracut-pre-pivot.service. 
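[Editor's illustration] The Ignition "files" stage above writes a containerd drop-in named "10-use-cgroupfs.conf" (op(d)) and a "prepare-helm.service" unit (op(f), preset-enabled in op(12)), but the log does not echo their contents. As a hedged sketch only — file contents below are assumptions, not recovered from this boot — the provisioned files plausibly resemble:

    # /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf  (assumed contents)
    # Typical Flatcar drop-in of this name: point containerd at an alternate
    # configuration that uses the cgroupfs cgroup driver.
    [Service]
    Environment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml

    # /etc/systemd/system/prepare-helm.service  (hypothetical sketch)
    # A oneshot that unpacks the helm tarball fetched in op(4) into /opt/bin.
    [Unit]
    Description=Unpack helm to /opt/bin

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStartPre=/usr/bin/mkdir -p /opt/bin
    ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xf /opt/helm-v3.13.2-linux-arm64.tar.gz linux-arm64/helm

    [Install]
    WantedBy=multi-user.target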
May 14 00:51:40.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.765124 systemd[1]: Starting initrd-cleanup.service... May 14 00:51:40.772632 systemd[1]: Stopped target nss-lookup.target. May 14 00:51:40.773505 systemd[1]: Stopped target remote-cryptsetup.target. May 14 00:51:40.774913 systemd[1]: Stopped target timers.target. May 14 00:51:40.776296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:51:40.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.776396 systemd[1]: Stopped dracut-pre-pivot.service. May 14 00:51:40.777677 systemd[1]: Stopped target initrd.target. May 14 00:51:40.779200 systemd[1]: Stopped target basic.target. May 14 00:51:40.780578 systemd[1]: Stopped target ignition-complete.target. May 14 00:51:40.781966 systemd[1]: Stopped target ignition-diskful.target. May 14 00:51:40.783304 systemd[1]: Stopped target initrd-root-device.target. May 14 00:51:40.784784 systemd[1]: Stopped target remote-fs.target. May 14 00:51:40.786161 systemd[1]: Stopped target remote-fs-pre.target. May 14 00:51:40.787661 systemd[1]: Stopped target sysinit.target. May 14 00:51:40.788967 systemd[1]: Stopped target local-fs.target. May 14 00:51:40.790351 systemd[1]: Stopped target local-fs-pre.target. May 14 00:51:40.791658 systemd[1]: Stopped target swap.target. May 14 00:51:40.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.792889 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:51:40.793002 systemd[1]: Stopped dracut-pre-mount.service. May 14 00:51:40.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.794355 systemd[1]: Stopped target cryptsetup.target. May 14 00:51:40.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.795572 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:51:40.795675 systemd[1]: Stopped dracut-initqueue.service. May 14 00:51:40.797138 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:51:40.797239 systemd[1]: Stopped ignition-fetch-offline.service. May 14 00:51:40.798593 systemd[1]: Stopped target paths.target. May 14 00:51:40.799834 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:51:40.804123 systemd[1]: Stopped systemd-ask-password-console.path. May 14 00:51:40.805749 systemd[1]: Stopped target slices.target. May 14 00:51:40.807087 systemd[1]: Stopped target sockets.target. May 14 00:51:40.808398 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:51:40.808480 systemd[1]: Closed iscsid.socket. May 14 00:51:40.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 14 00:51:40.809616 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:51:40.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.809717 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 14 00:51:40.811167 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:51:40.811263 systemd[1]: Stopped ignition-files.service. May 14 00:51:40.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.813168 systemd[1]: Stopping ignition-mount.service... May 14 00:51:40.814071 systemd[1]: Stopping iscsiuio.service... May 14 00:51:40.815789 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:51:40.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.815924 systemd[1]: Stopped kmod-static-nodes.service. May 14 00:51:40.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.825084 ignition[897]: INFO : Ignition 2.14.0 May 14 00:51:40.825084 ignition[897]: INFO : Stage: umount May 14 00:51:40.825084 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:51:40.825084 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:51:40.825084 ignition[897]: INFO : umount: umount passed May 14 00:51:40.825084 ignition[897]: INFO : Ignition finished successfully May 14 00:51:40.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.818298 systemd[1]: Stopping sysroot-boot.service... May 14 00:51:40.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.819434 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:51:40.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.819558 systemd[1]: Stopped systemd-udev-trigger.service. May 14 00:51:40.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.822690 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:51:40.822780 systemd[1]: Stopped dracut-pre-trigger.service. May 14 00:51:40.825781 systemd[1]: iscsiuio.service: Deactivated successfully. 
May 14 00:51:40.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.825885 systemd[1]: Stopped iscsiuio.service. May 14 00:51:40.827164 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:51:40.827241 systemd[1]: Stopped ignition-mount.service. May 14 00:51:40.829858 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:51:40.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.830409 systemd[1]: Stopped target network.target. May 14 00:51:40.831377 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:51:40.831410 systemd[1]: Closed iscsiuio.socket. May 14 00:51:40.832567 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:51:40.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.832614 systemd[1]: Stopped ignition-disks.service. May 14 00:51:40.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.834086 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:51:40.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.834148 systemd[1]: Stopped ignition-kargs.service. May 14 00:51:40.835549 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:51:40.835599 systemd[1]: Stopped ignition-setup.service. May 14 00:51:40.837467 systemd[1]: Stopping systemd-networkd.service... May 14 00:51:40.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.838949 systemd[1]: Stopping systemd-resolved.service... May 14 00:51:40.840718 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 00:51:40.840815 systemd[1]: Finished initrd-cleanup.service. May 14 00:51:40.844150 systemd-networkd[741]: eth0: DHCPv6 lease lost May 14 00:51:40.866000 audit: BPF prog-id=9 op=UNLOAD May 14 00:51:40.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.846131 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:51:40.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.846331 systemd[1]: Stopped systemd-networkd.service. 
May 14 00:51:40.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.870000 audit: BPF prog-id=6 op=UNLOAD May 14 00:51:40.848012 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 00:51:40.848042 systemd[1]: Closed systemd-networkd.socket. May 14 00:51:40.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.849966 systemd[1]: Stopping network-cleanup.service... May 14 00:51:40.851483 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:51:40.851539 systemd[1]: Stopped parse-ip-for-networkd.service. May 14 00:51:40.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.852966 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:51:40.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.853010 systemd[1]: Stopped systemd-sysctl.service. May 14 00:51:40.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.855269 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:51:40.855314 systemd[1]: Stopped systemd-modules-load.service. May 14 00:51:40.859357 systemd[1]: Stopping systemd-udevd.service... May 14 00:51:40.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.861395 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:51:40.861921 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:51:40.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:40.862735 systemd[1]: Stopped systemd-resolved.service. May 14 00:51:40.866469 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:51:40.866550 systemd[1]: Stopped sysroot-boot.service. May 14 00:51:40.867803 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 00:51:40.867878 systemd[1]: Stopped network-cleanup.service. May 14 00:51:40.869003 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:51:40.869046 systemd[1]: Stopped initrd-setup-root.service. May 14 00:51:40.871633 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:51:40.871742 systemd[1]: Stopped systemd-udevd.service. May 14 00:51:40.873379 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
May 14 00:51:40.873414 systemd[1]: Closed systemd-udevd-control.socket. May 14 00:51:40.874532 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:51:40.899000 audit: BPF prog-id=5 op=UNLOAD May 14 00:51:40.899000 audit: BPF prog-id=4 op=UNLOAD May 14 00:51:40.899000 audit: BPF prog-id=3 op=UNLOAD May 14 00:51:40.874563 systemd[1]: Closed systemd-udevd-kernel.socket. May 14 00:51:40.875909 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:51:40.900000 audit: BPF prog-id=8 op=UNLOAD May 14 00:51:40.900000 audit: BPF prog-id=7 op=UNLOAD May 14 00:51:40.875950 systemd[1]: Stopped dracut-pre-udev.service. May 14 00:51:40.877479 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:51:40.877519 systemd[1]: Stopped dracut-cmdline.service. May 14 00:51:40.878819 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:51:40.878855 systemd[1]: Stopped dracut-cmdline-ask.service. May 14 00:51:40.880958 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 14 00:51:40.882630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:51:40.882683 systemd[1]: Stopped systemd-vconsole-setup.service. May 14 00:51:40.885820 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:51:40.885897 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 14 00:51:40.886867 systemd[1]: Reached target initrd-switch-root.target. May 14 00:51:40.889068 systemd[1]: Starting initrd-switch-root.service... May 14 00:51:40.897209 systemd[1]: Switching root. May 14 00:51:40.916922 iscsid[748]: iscsid shutting down. May 14 00:51:40.917605 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). May 14 00:51:40.917657 systemd-journald[290]: Journal stopped May 14 00:51:42.920771 kernel: SELinux: Class mctp_socket not defined in policy. May 14 00:51:42.920819 kernel: SELinux: Class anon_inode not defined in policy. May 14 00:51:42.920832 kernel: SELinux: the above unknown classes and permissions will be allowed May 14 00:51:42.920842 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:51:42.920852 kernel: SELinux: policy capability open_perms=1 May 14 00:51:42.920861 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:51:42.920870 kernel: SELinux: policy capability always_check_network=0 May 14 00:51:42.920879 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:51:42.920889 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:51:42.920898 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:51:42.920918 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:51:42.920932 systemd[1]: Successfully loaded SELinux policy in 37.188ms. May 14 00:51:42.920954 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.868ms. May 14 00:51:42.920966 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:51:42.920981 systemd[1]: Detected virtualization kvm. May 14 00:51:42.920991 systemd[1]: Detected architecture arm64. May 14 00:51:42.921001 systemd[1]: Detected first boot. May 14 00:51:42.921011 systemd[1]: Initializing machine ID from VM UUID. 
May 14 00:51:42.921023 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 14 00:51:42.921033 kernel: kauditd_printk_skb: 70 callbacks suppressed May 14 00:51:42.921044 kernel: audit: type=1400 audit(1747183901.156:81): avc: denied { associate } for pid=949 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 14 00:51:42.921055 kernel: audit: type=1300 audit(1747183901.156:81): arch=c00000b7 syscall=5 success=yes exit=0 a0=400024f672 a1=4000152ae0 a2=4000158a00 a3=32 items=0 ppid=932 pid=949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:51:42.921066 kernel: audit: type=1327 audit(1747183901.156:81): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:51:42.921076 kernel: audit: type=1400 audit(1747183901.159:82): avc: denied { associate } for pid=949 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 14 00:51:42.921086 kernel: audit: type=1300 audit(1747183901.159:82): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400024f749 a2=1ed a3=0 items=2 ppid=932 pid=949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:51:42.921115 kernel: audit: type=1307 audit(1747183901.159:82): cwd="/" May 14 00:51:42.921125 kernel: audit: type=1302 audit(1747183901.159:82): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 14 00:51:42.921135 kernel: audit: type=1302 audit(1747183901.159:82): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 14 00:51:42.921146 kernel: audit: type=1327 audit(1747183901.159:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:51:42.921157 systemd[1]: Populated /etc with preset unit settings. May 14 00:51:42.921169 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:51:42.921181 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
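[Editor's illustration] The two locksmithd.service warnings above flag the deprecated CPUShares= and MemoryLimit= directives and name their replacements. Since /usr is read-only on Flatcar, the usual remedy is a drop-in supplying the modern keys; the values below are placeholders, not taken from this log:

    # /etc/systemd/system/locksmithd.service.d/10-resource-keys.conf  (illustrative)
    [Service]
    # CPUWeight= replaces the deprecated CPUShares=;
    # MemoryMax= replaces the deprecated MemoryLimit=.
    CPUWeight=100
    MemoryMax=512M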
May 14 00:51:42.921193 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:51:42.921204 systemd[1]: Queued start job for default target multi-user.target. May 14 00:51:42.921214 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 14 00:51:42.921225 systemd[1]: Created slice system-addon\x2dconfig.slice. May 14 00:51:42.921239 systemd[1]: Created slice system-addon\x2drun.slice. May 14 00:51:42.921249 systemd[1]: Created slice system-getty.slice. May 14 00:51:42.921259 systemd[1]: Created slice system-modprobe.slice. May 14 00:51:42.921272 systemd[1]: Created slice system-serial\x2dgetty.slice. May 14 00:51:42.921283 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 14 00:51:42.921310 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 14 00:51:42.921321 systemd[1]: Created slice user.slice. May 14 00:51:42.921331 systemd[1]: Started systemd-ask-password-console.path. May 14 00:51:42.921341 systemd[1]: Started systemd-ask-password-wall.path. May 14 00:51:42.921352 systemd[1]: Set up automount boot.automount. May 14 00:51:42.921363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 14 00:51:42.921374 systemd[1]: Reached target integritysetup.target. May 14 00:51:42.921384 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:51:42.921394 systemd[1]: Reached target remote-fs.target. May 14 00:51:42.921404 systemd[1]: Reached target slices.target. May 14 00:51:42.921414 systemd[1]: Reached target swap.target. May 14 00:51:42.921424 systemd[1]: Reached target torcx.target. May 14 00:51:42.921434 systemd[1]: Reached target veritysetup.target. May 14 00:51:42.921451 systemd[1]: Listening on systemd-coredump.socket. May 14 00:51:42.921461 systemd[1]: Listening on systemd-initctl.socket. May 14 00:51:42.921473 kernel: audit: type=1400 audit(1747183902.841:83): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:51:42.921483 systemd[1]: Listening on systemd-journald-audit.socket. May 14 00:51:42.921494 systemd[1]: Listening on systemd-journald-dev-log.socket. May 14 00:51:42.921505 systemd[1]: Listening on systemd-journald.socket. May 14 00:51:42.921517 systemd[1]: Listening on systemd-networkd.socket. May 14 00:51:42.921527 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:51:42.921537 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:51:42.921547 systemd[1]: Listening on systemd-userdbd.socket. May 14 00:51:42.921558 systemd[1]: Mounting dev-hugepages.mount... May 14 00:51:42.921569 systemd[1]: Mounting dev-mqueue.mount... May 14 00:51:42.921579 systemd[1]: Mounting media.mount... May 14 00:51:42.921589 systemd[1]: Mounting sys-kernel-debug.mount... May 14 00:51:42.921600 systemd[1]: Mounting sys-kernel-tracing.mount... May 14 00:51:42.921610 systemd[1]: Mounting tmp.mount... May 14 00:51:42.921620 systemd[1]: Starting flatcar-tmpfiles.service... May 14 00:51:42.921630 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:51:42.921641 systemd[1]: Starting kmod-static-nodes.service... May 14 00:51:42.921652 systemd[1]: Starting modprobe@configfs.service... May 14 00:51:42.921662 systemd[1]: Starting modprobe@dm_mod.service... 
May 14 00:51:42.921672 systemd[1]: Starting modprobe@drm.service... May 14 00:51:42.921684 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:51:42.921694 systemd[1]: Starting modprobe@fuse.service... May 14 00:51:42.921704 systemd[1]: Starting modprobe@loop.service... May 14 00:51:42.921714 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:51:42.921725 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 14 00:51:42.921736 kernel: fuse: init (API version 7.34) May 14 00:51:42.921747 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 14 00:51:42.921757 systemd[1]: Starting systemd-journald.service... May 14 00:51:42.921767 kernel: loop: module loaded May 14 00:51:42.921875 systemd[1]: Starting systemd-modules-load.service... May 14 00:51:42.921889 systemd[1]: Starting systemd-network-generator.service... May 14 00:51:42.921900 systemd[1]: Starting systemd-remount-fs.service... May 14 00:51:42.921910 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:51:42.921921 systemd[1]: Mounted dev-hugepages.mount. May 14 00:51:42.921932 systemd[1]: Mounted dev-mqueue.mount. May 14 00:51:42.921942 systemd[1]: Mounted media.mount. May 14 00:51:42.921953 systemd[1]: Mounted sys-kernel-debug.mount. May 14 00:51:42.921966 systemd-journald[1032]: Journal started May 14 00:51:42.922006 systemd-journald[1032]: Runtime Journal (/run/log/journal/8bc3e99848c24c5dbfd8863b246d8eb4) is 6.0M, max 48.7M, 42.6M free. May 14 00:51:42.841000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:51:42.841000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 14 00:51:42.919000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 14 00:51:42.919000 audit[1032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd0f887e0 a2=4000 a3=1 items=0 ppid=1 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:51:42.919000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 14 00:51:42.925117 systemd[1]: Started systemd-journald.service. May 14 00:51:42.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.925560 systemd[1]: Mounted sys-kernel-tracing.mount. May 14 00:51:42.926506 systemd[1]: Mounted tmp.mount. May 14 00:51:42.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.927432 systemd[1]: Finished kmod-static-nodes.service. May 14 00:51:42.928471 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
May 14 00:51:42.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.928611 systemd[1]: Finished modprobe@configfs.service. May 14 00:51:42.929720 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:51:42.929858 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:51:42.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.930905 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:51:42.931040 systemd[1]: Finished modprobe@drm.service. May 14 00:51:42.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.932129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:51:42.932257 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:51:42.933475 systemd[1]: Finished flatcar-tmpfiles.service. May 14 00:51:42.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.934600 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:51:42.934739 systemd[1]: Finished modprobe@fuse.service. May 14 00:51:42.935848 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:51:42.936002 systemd[1]: Finished modprobe@loop.service. May 14 00:51:42.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:51:42.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.937234 systemd[1]: Finished systemd-modules-load.service. May 14 00:51:42.938370 systemd[1]: Finished systemd-network-generator.service. May 14 00:51:42.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.942365 systemd[1]: Finished systemd-remount-fs.service. May 14 00:51:42.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.943501 systemd[1]: Reached target network-pre.target. May 14 00:51:42.945414 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 14 00:51:42.947299 systemd[1]: Mounting sys-kernel-config.mount... May 14 00:51:42.948015 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:51:42.949768 systemd[1]: Starting systemd-hwdb-update.service... May 14 00:51:42.951695 systemd[1]: Starting systemd-journal-flush.service... May 14 00:51:42.952680 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:51:42.953655 systemd[1]: Starting systemd-random-seed.service... May 14 00:51:42.954539 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:51:42.955657 systemd[1]: Starting systemd-sysctl.service... May 14 00:51:42.963899 systemd-journald[1032]: Time spent on flushing to /var/log/journal/8bc3e99848c24c5dbfd8863b246d8eb4 is 17.192ms for 929 entries. May 14 00:51:42.963899 systemd-journald[1032]: System Journal (/var/log/journal/8bc3e99848c24c5dbfd8863b246d8eb4) is 8.0M, max 195.6M, 187.6M free. May 14 00:51:42.986405 systemd-journald[1032]: Received client request to flush runtime journal. May 14 00:51:42.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.971015 systemd[1]: Starting systemd-sysusers.service... May 14 00:51:42.974446 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 14 00:51:42.977278 systemd[1]: Mounted sys-kernel-config.mount. May 14 00:51:42.978449 systemd[1]: Finished systemd-random-seed.service. 
May 14 00:51:42.979678 systemd[1]: Finished systemd-sysctl.service. May 14 00:51:42.980760 systemd[1]: Reached target first-boot-complete.target. May 14 00:51:42.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.987649 systemd[1]: Finished systemd-journal-flush.service. May 14 00:51:42.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:42.992309 systemd[1]: Finished systemd-sysusers.service. May 14 00:51:42.994185 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 14 00:51:43.003693 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:51:43.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.005744 systemd[1]: Starting systemd-udev-settle.service... May 14 00:51:43.013384 udevadm[1085]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 00:51:43.013644 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 14 00:51:43.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.319565 systemd[1]: Finished systemd-hwdb-update.service. May 14 00:51:43.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.321701 systemd[1]: Starting systemd-udevd.service... May 14 00:51:43.340321 systemd-udevd[1088]: Using default interface naming scheme 'v252'. May 14 00:51:43.357664 systemd[1]: Started systemd-udevd.service. May 14 00:51:43.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.359912 systemd[1]: Starting systemd-networkd.service... May 14 00:51:43.371121 systemd[1]: Starting systemd-userdbd.service... May 14 00:51:43.382915 systemd[1]: Found device dev-ttyAMA0.device. May 14 00:51:43.416927 systemd[1]: Started systemd-userdbd.service. May 14 00:51:43.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.439962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:51:43.468516 systemd-networkd[1090]: lo: Link UP May 14 00:51:43.468761 systemd-networkd[1090]: lo: Gained carrier May 14 00:51:43.469300 systemd-networkd[1090]: Enumeration completed May 14 00:51:43.469519 systemd-networkd[1090]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:51:43.469539 systemd[1]: Finished systemd-udev-settle.service. 
May 14 00:51:43.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.470641 systemd[1]: Started systemd-networkd.service. May 14 00:51:43.471395 systemd-networkd[1090]: eth0: Link UP May 14 00:51:43.471497 systemd-networkd[1090]: eth0: Gained carrier May 14 00:51:43.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.472752 systemd[1]: Starting lvm2-activation-early.service... May 14 00:51:43.489069 lvm[1122]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:51:43.496215 systemd-networkd[1090]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:51:43.510909 systemd[1]: Finished lvm2-activation-early.service. May 14 00:51:43.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.511935 systemd[1]: Reached target cryptsetup.target. May 14 00:51:43.513811 systemd[1]: Starting lvm2-activation.service... May 14 00:51:43.517487 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:51:43.548890 systemd[1]: Finished lvm2-activation.service. May 14 00:51:43.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.549828 systemd[1]: Reached target local-fs-pre.target. May 14 00:51:43.550667 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:51:43.550698 systemd[1]: Reached target local-fs.target. May 14 00:51:43.551476 systemd[1]: Reached target machines.target. May 14 00:51:43.553412 systemd[1]: Starting ldconfig.service... May 14 00:51:43.554488 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:51:43.554540 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:51:43.555768 systemd[1]: Starting systemd-boot-update.service... May 14 00:51:43.557662 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 14 00:51:43.559935 systemd[1]: Starting systemd-machine-id-commit.service... May 14 00:51:43.562090 systemd[1]: Starting systemd-sysext.service... May 14 00:51:43.563541 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1127 (bootctl) May 14 00:51:43.564563 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 14 00:51:43.573674 systemd[1]: Unmounting usr-share-oem.mount... May 14 00:51:43.577212 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 14 00:51:43.577447 systemd[1]: Unmounted usr-share-oem.mount. May 14 00:51:43.579074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
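[Editor's illustration] In the lines above, eth0 is configured from /usr/lib/systemd/network/zz-default.network and then acquires 10.0.0.129/16 from 10.0.0.1 over DHCPv4. The shipped file is not printed in the log; a catch-all DHCP policy of that kind looks roughly like the following sketch (assumed contents):

    # /usr/lib/systemd/network/zz-default.network  (assumed contents)
    # Lowest-priority match: any interface not claimed by an earlier .network
    # file is brought up with DHCP, which is how eth0 got its lease above.
    [Match]
    Name=*

    [Network]
    DHCP=yes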
May 14 00:51:43.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.622126 kernel: loop0: detected capacity change from 0 to 194096 May 14 00:51:43.626304 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:51:43.627139 systemd[1]: Finished systemd-machine-id-commit.service. May 14 00:51:43.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.634172 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:51:43.653053 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31) May 14 00:51:43.653053 systemd-fsck[1140]: /dev/vda1: 236 files, 117310/258078 clusters May 14 00:51:43.656720 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 14 00:51:43.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.659114 kernel: loop1: detected capacity change from 0 to 194096 May 14 00:51:43.659775 systemd[1]: Mounting boot.mount... May 14 00:51:43.663774 (sd-sysext)[1143]: Using extensions 'kubernetes'. May 14 00:51:43.664395 (sd-sysext)[1143]: Merged extensions into '/usr'. May 14 00:51:43.673815 systemd[1]: Mounted boot.mount. May 14 00:51:43.678632 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:51:43.679789 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:51:43.681663 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:51:43.683582 systemd[1]: Starting modprobe@loop.service... May 14 00:51:43.684500 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:51:43.684621 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:51:43.685501 systemd[1]: Finished systemd-boot-update.service. May 14 00:51:43.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.686888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:51:43.687058 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:51:43.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.688486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:51:43.688622 systemd[1]: Finished modprobe@efi_pstore.service. 
May 14 00:51:43.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.690106 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:51:43.690255 systemd[1]: Finished modprobe@loop.service. May 14 00:51:43.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.691791 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:51:43.691889 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:51:43.749594 ldconfig[1126]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:51:43.753805 systemd[1]: Finished ldconfig.service. May 14 00:51:43.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.915779 systemd[1]: Mounting usr-share-oem.mount... May 14 00:51:43.920903 systemd[1]: Mounted usr-share-oem.mount. May 14 00:51:43.922717 systemd[1]: Finished systemd-sysext.service. May 14 00:51:43.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:43.924715 systemd[1]: Starting ensure-sysext.service... May 14 00:51:43.926414 systemd[1]: Starting systemd-tmpfiles-setup.service... May 14 00:51:43.930679 systemd[1]: Reloading. May 14 00:51:43.935253 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 14 00:51:43.935929 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:51:43.937197 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:51:43.968587 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2025-05-14T00:51:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:51:43.968896 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2025-05-14T00:51:43Z" level=info msg="torcx already run" May 14 00:51:44.020828 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 14 00:51:44.020851 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:51:44.035958 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:51:44.080951 systemd[1]: Finished systemd-tmpfiles-setup.service. May 14 00:51:44.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.084624 systemd[1]: Starting audit-rules.service... May 14 00:51:44.086393 systemd[1]: Starting clean-ca-certificates.service... May 14 00:51:44.088285 systemd[1]: Starting systemd-journal-catalog-update.service... May 14 00:51:44.090817 systemd[1]: Starting systemd-resolved.service... May 14 00:51:44.092890 systemd[1]: Starting systemd-timesyncd.service... May 14 00:51:44.094777 systemd[1]: Starting systemd-update-utmp.service... May 14 00:51:44.096147 systemd[1]: Finished clean-ca-certificates.service. May 14 00:51:44.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.099197 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:51:44.102000 audit[1242]: SYSTEM_BOOT pid=1242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 14 00:51:44.105402 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:51:44.106511 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:51:44.109451 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:51:44.111200 systemd[1]: Starting modprobe@loop.service... May 14 00:51:44.112005 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:51:44.112134 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:51:44.112228 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:51:44.112957 systemd[1]: Finished systemd-update-utmp.service. May 14 00:51:44.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.114475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:51:44.114613 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:51:44.116005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:51:44.116145 systemd[1]: Finished modprobe@efi_pstore.service. 
May 14 00:51:44.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.117372 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:51:44.117523 systemd[1]: Finished modprobe@loop.service. May 14 00:51:44.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.119345 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:51:44.119464 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:51:44.120857 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:51:44.121924 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:51:44.123727 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:51:44.125619 systemd[1]: Starting modprobe@loop.service... May 14 00:51:44.126413 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:51:44.126547 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:51:44.126641 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:51:44.127543 systemd[1]: Finished systemd-journal-catalog-update.service. May 14 00:51:44.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:51:44.129591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:51:44.129724 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:51:44.131063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:51:44.131309 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:51:44.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.132470 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:51:44.132610 systemd[1]: Finished modprobe@loop.service. May 14 00:51:44.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.135637 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:51:44.136695 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:51:44.138526 systemd[1]: Starting modprobe@drm.service... May 14 00:51:44.140274 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:51:44.142030 systemd[1]: Starting modprobe@loop.service... May 14 00:51:44.142854 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:51:44.142980 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:51:44.144150 systemd[1]: Starting systemd-networkd-wait-online.service... May 14 00:51:44.146240 systemd[1]: Starting systemd-update-done.service... May 14 00:51:44.147010 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:51:44.148274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:51:44.148424 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:51:44.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.149619 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:51:44.149753 systemd[1]: Finished modprobe@drm.service. May 14 00:51:44.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:51:44.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.150979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:51:44.151117 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:51:44.152312 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:51:44.152463 systemd[1]: Finished modprobe@loop.service. May 14 00:51:44.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.153759 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:51:44.153843 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:51:44.154921 systemd[1]: Finished ensure-sysext.service. May 14 00:51:44.163250 systemd[1]: Finished systemd-update-done.service. May 14 00:51:44.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:51:44.169000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 14 00:51:44.169000 audit[1280]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff4903100 a2=420 a3=0 items=0 ppid=1230 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:51:44.169752 augenrules[1280]: No rules May 14 00:51:44.169000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 14 00:51:44.170342 systemd[1]: Finished audit-rules.service. May 14 00:51:44.174360 systemd[1]: Started systemd-timesyncd.service. May 14 00:51:44.175085 systemd-timesyncd[1240]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:51:44.175188 systemd-timesyncd[1240]: Initial clock synchronization to Wed 2025-05-14 00:51:44.236154 UTC. May 14 00:51:44.175594 systemd[1]: Reached target time-set.target. 
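The PROCTITLE value in the auditctl record above is hex-encoded with NUL-separated arguments; a minimal way to read it back with the Python standard library:

    # Decode the hex PROCTITLE field from the audit record; argv entries are separated by NUL bytes.
    raw = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(bytes.fromhex(raw).replace(b"\x00", b" ").decode())
    # -> /sbin/auditctl -R /etc/audit/audit.rules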
May 14 00:51:44.178714 systemd-resolved[1234]: Positive Trust Anchors: May 14 00:51:44.178724 systemd-resolved[1234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:51:44.178753 systemd-resolved[1234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:51:44.190351 systemd-resolved[1234]: Defaulting to hostname 'linux'. May 14 00:51:44.191731 systemd[1]: Started systemd-resolved.service. May 14 00:51:44.192649 systemd[1]: Reached target network.target. May 14 00:51:44.193450 systemd[1]: Reached target nss-lookup.target. May 14 00:51:44.194270 systemd[1]: Reached target sysinit.target. May 14 00:51:44.195124 systemd[1]: Started motdgen.path. May 14 00:51:44.195851 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 14 00:51:44.197113 systemd[1]: Started logrotate.timer. May 14 00:51:44.197940 systemd[1]: Started mdadm.timer. May 14 00:51:44.198663 systemd[1]: Started systemd-tmpfiles-clean.timer. May 14 00:51:44.199549 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:51:44.199584 systemd[1]: Reached target paths.target. May 14 00:51:44.200327 systemd[1]: Reached target timers.target. May 14 00:51:44.201392 systemd[1]: Listening on dbus.socket. May 14 00:51:44.203253 systemd[1]: Starting docker.socket... May 14 00:51:44.205000 systemd[1]: Listening on sshd.socket. May 14 00:51:44.205884 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:51:44.206239 systemd[1]: Listening on docker.socket. May 14 00:51:44.207021 systemd[1]: Reached target sockets.target. May 14 00:51:44.207835 systemd[1]: Reached target basic.target. May 14 00:51:44.208770 systemd[1]: System is tainted: cgroupsv1 May 14 00:51:44.208819 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:51:44.208840 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:51:44.209875 systemd[1]: Starting containerd.service... May 14 00:51:44.211681 systemd[1]: Starting dbus.service... May 14 00:51:44.213423 systemd[1]: Starting enable-oem-cloudinit.service... May 14 00:51:44.215387 systemd[1]: Starting extend-filesystems.service... May 14 00:51:44.216318 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 14 00:51:44.217731 systemd[1]: Starting motdgen.service... May 14 00:51:44.219795 systemd[1]: Starting prepare-helm.service... May 14 00:51:44.221732 systemd[1]: Starting ssh-key-proc-cmdline.service... May 14 00:51:44.223845 systemd[1]: Starting sshd-keygen.service... May 14 00:51:44.226409 systemd[1]: Starting systemd-logind.service... 
May 14 00:51:44.227260 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:51:44.227338 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:51:44.227555 jq[1292]: false May 14 00:51:44.228488 systemd[1]: Starting update-engine.service... May 14 00:51:44.230989 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 14 00:51:44.233565 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:51:44.233787 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 14 00:51:44.234963 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:51:44.235217 systemd[1]: Finished ssh-key-proc-cmdline.service. May 14 00:51:44.241691 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:51:44.241920 systemd[1]: Finished motdgen.service. May 14 00:51:44.246164 jq[1311]: true May 14 00:51:44.247808 tar[1313]: linux-arm64/helm May 14 00:51:44.254103 extend-filesystems[1293]: Found loop1 May 14 00:51:44.255201 extend-filesystems[1293]: Found vda May 14 00:51:44.255201 extend-filesystems[1293]: Found vda1 May 14 00:51:44.255201 extend-filesystems[1293]: Found vda2 May 14 00:51:44.255201 extend-filesystems[1293]: Found vda3 May 14 00:51:44.255201 extend-filesystems[1293]: Found usr May 14 00:51:44.255201 extend-filesystems[1293]: Found vda4 May 14 00:51:44.255201 extend-filesystems[1293]: Found vda6 May 14 00:51:44.255201 extend-filesystems[1293]: Found vda7 May 14 00:51:44.255201 extend-filesystems[1293]: Found vda9 May 14 00:51:44.255201 extend-filesystems[1293]: Checking size of /dev/vda9 May 14 00:51:44.267262 jq[1322]: true May 14 00:51:44.282635 extend-filesystems[1293]: Resized partition /dev/vda9 May 14 00:51:44.294350 extend-filesystems[1335]: resize2fs 1.46.5 (30-Dec-2021) May 14 00:51:44.303215 systemd-logind[1303]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:51:44.308132 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:51:44.311507 systemd-logind[1303]: New seat seat0. May 14 00:51:44.311722 dbus-daemon[1291]: [system] SELinux support is enabled May 14 00:51:44.311887 systemd[1]: Started dbus.service. May 14 00:51:44.316795 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:51:44.316813 systemd[1]: Reached target system-config.target. May 14 00:51:44.319488 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:51:44.319513 systemd[1]: Reached target user-config.target. May 14 00:51:44.321319 systemd[1]: Started systemd-logind.service. May 14 00:51:44.330112 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:51:44.342276 update_engine[1307]: I0514 00:51:44.336818 1307 main.cc:92] Flatcar Update Engine starting May 14 00:51:44.342517 update_engine[1307]: I0514 00:51:44.342343 1307 update_check_scheduler.cc:74] Next update check in 2m16s May 14 00:51:44.342489 systemd[1]: Started update-engine.service. 
May 14 00:51:44.342589 extend-filesystems[1335]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:51:44.342589 extend-filesystems[1335]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:51:44.342589 extend-filesystems[1335]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:51:44.347242 extend-filesystems[1293]: Resized filesystem in /dev/vda9 May 14 00:51:44.345154 systemd[1]: Started locksmithd.service. May 14 00:51:44.346920 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:51:44.347269 systemd[1]: Finished extend-filesystems.service. May 14 00:51:44.351152 bash[1350]: Updated "/home/core/.ssh/authorized_keys" May 14 00:51:44.351923 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 14 00:51:44.385512 env[1318]: time="2025-05-14T00:51:44.385459160Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 14 00:51:44.401884 env[1318]: time="2025-05-14T00:51:44.401847680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 00:51:44.401995 env[1318]: time="2025-05-14T00:51:44.401976040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 00:51:44.403326 env[1318]: time="2025-05-14T00:51:44.403289520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 00:51:44.403363 env[1318]: time="2025-05-14T00:51:44.403325840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 00:51:44.403583 env[1318]: time="2025-05-14T00:51:44.403557440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:51:44.403627 env[1318]: time="2025-05-14T00:51:44.403581800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 00:51:44.403627 env[1318]: time="2025-05-14T00:51:44.403596800Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 14 00:51:44.403627 env[1318]: time="2025-05-14T00:51:44.403606920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 00:51:44.403697 env[1318]: time="2025-05-14T00:51:44.403681160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 00:51:44.404014 env[1318]: time="2025-05-14T00:51:44.403993120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 00:51:44.404182 env[1318]: time="2025-05-14T00:51:44.404160120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:51:44.404217 env[1318]: time="2025-05-14T00:51:44.404181360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 00:51:44.404252 env[1318]: time="2025-05-14T00:51:44.404235640Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 14 00:51:44.404252 env[1318]: time="2025-05-14T00:51:44.404251400Z" level=info msg="metadata content store policy set" policy=shared May 14 00:51:44.407799 env[1318]: time="2025-05-14T00:51:44.407770480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 00:51:44.407847 env[1318]: time="2025-05-14T00:51:44.407802600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 00:51:44.407847 env[1318]: time="2025-05-14T00:51:44.407814840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 00:51:44.407847 env[1318]: time="2025-05-14T00:51:44.407842480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 00:51:44.407902 env[1318]: time="2025-05-14T00:51:44.407856160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 00:51:44.407902 env[1318]: time="2025-05-14T00:51:44.407870000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 00:51:44.407902 env[1318]: time="2025-05-14T00:51:44.407882680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 00:51:44.408237 env[1318]: time="2025-05-14T00:51:44.408219680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 00:51:44.408267 env[1318]: time="2025-05-14T00:51:44.408242960Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 14 00:51:44.408267 env[1318]: time="2025-05-14T00:51:44.408255720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 00:51:44.408313 env[1318]: time="2025-05-14T00:51:44.408267760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 00:51:44.408313 env[1318]: time="2025-05-14T00:51:44.408281080Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 00:51:44.408409 env[1318]: time="2025-05-14T00:51:44.408389680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 00:51:44.408496 env[1318]: time="2025-05-14T00:51:44.408478480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 00:51:44.408809 env[1318]: time="2025-05-14T00:51:44.408768480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 00:51:44.408838 env[1318]: time="2025-05-14T00:51:44.408822880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 14 00:51:44.408862 env[1318]: time="2025-05-14T00:51:44.408838760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 00:51:44.408953 env[1318]: time="2025-05-14T00:51:44.408938960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 00:51:44.408982 env[1318]: time="2025-05-14T00:51:44.408955880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 00:51:44.408982 env[1318]: time="2025-05-14T00:51:44.408968520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 00:51:44.408982 env[1318]: time="2025-05-14T00:51:44.408979640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 00:51:44.409044 env[1318]: time="2025-05-14T00:51:44.408991760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 00:51:44.409044 env[1318]: time="2025-05-14T00:51:44.409003240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 00:51:44.409044 env[1318]: time="2025-05-14T00:51:44.409013320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 00:51:44.409044 env[1318]: time="2025-05-14T00:51:44.409024240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 00:51:44.409044 env[1318]: time="2025-05-14T00:51:44.409037600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 00:51:44.409211 env[1318]: time="2025-05-14T00:51:44.409191840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 00:51:44.409238 env[1318]: time="2025-05-14T00:51:44.409213920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 00:51:44.409238 env[1318]: time="2025-05-14T00:51:44.409227280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 00:51:44.409279 env[1318]: time="2025-05-14T00:51:44.409238440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 00:51:44.409279 env[1318]: time="2025-05-14T00:51:44.409252160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 14 00:51:44.409279 env[1318]: time="2025-05-14T00:51:44.409264000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 00:51:44.409376 env[1318]: time="2025-05-14T00:51:44.409280440Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 14 00:51:44.409376 env[1318]: time="2025-05-14T00:51:44.409311000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 14 00:51:44.409547 env[1318]: time="2025-05-14T00:51:44.409497480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 00:51:44.410118 env[1318]: time="2025-05-14T00:51:44.409553520Z" level=info msg="Connect containerd service" May 14 00:51:44.410118 env[1318]: time="2025-05-14T00:51:44.409584160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 00:51:44.410217 env[1318]: time="2025-05-14T00:51:44.410188240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:51:44.410339 env[1318]: time="2025-05-14T00:51:44.410307840Z" level=info msg="Start subscribing containerd event" May 14 00:51:44.410367 env[1318]: time="2025-05-14T00:51:44.410352920Z" level=info msg="Start recovering state" May 14 00:51:44.410430 env[1318]: time="2025-05-14T00:51:44.410409400Z" level=info msg="Start event monitor" May 14 00:51:44.410824 env[1318]: time="2025-05-14T00:51:44.410800600Z" level=info msg="Start snapshots syncer" May 14 00:51:44.410856 env[1318]: time="2025-05-14T00:51:44.410832960Z" level=info msg="Start cni network conf syncer for default" May 14 00:51:44.410856 env[1318]: time="2025-05-14T00:51:44.410844160Z" level=info msg="Start streaming server" May 14 00:51:44.410929 env[1318]: time="2025-05-14T00:51:44.410774480Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 14 00:51:44.410974 env[1318]: time="2025-05-14T00:51:44.410956200Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:51:44.411113 systemd[1]: Started containerd.service. May 14 00:51:44.412606 env[1318]: time="2025-05-14T00:51:44.412480800Z" level=info msg="containerd successfully booted in 0.029362s" May 14 00:51:44.417859 locksmithd[1353]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:51:44.680528 tar[1313]: linux-arm64/LICENSE May 14 00:51:44.680614 tar[1313]: linux-arm64/README.md May 14 00:51:44.684836 systemd[1]: Finished prepare-helm.service. May 14 00:51:45.053408 systemd-networkd[1090]: eth0: Gained IPv6LL May 14 00:51:45.055612 systemd[1]: Finished systemd-networkd-wait-online.service. May 14 00:51:45.056958 systemd[1]: Reached target network-online.target. May 14 00:51:45.059689 systemd[1]: Starting kubelet.service... May 14 00:51:45.562217 systemd[1]: Started kubelet.service. May 14 00:51:45.669865 sshd_keygen[1317]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:51:45.687595 systemd[1]: Finished sshd-keygen.service. May 14 00:51:45.689916 systemd[1]: Starting issuegen.service... May 14 00:51:45.694474 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:51:45.694669 systemd[1]: Finished issuegen.service. May 14 00:51:45.696816 systemd[1]: Starting systemd-user-sessions.service... May 14 00:51:45.702488 systemd[1]: Finished systemd-user-sessions.service. May 14 00:51:45.704675 systemd[1]: Started getty@tty1.service. May 14 00:51:45.706689 systemd[1]: Started serial-getty@ttyAMA0.service. May 14 00:51:45.707762 systemd[1]: Reached target getty.target. May 14 00:51:45.708666 systemd[1]: Reached target multi-user.target. May 14 00:51:45.710804 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 14 00:51:45.717193 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 14 00:51:45.717391 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 14 00:51:45.718590 systemd[1]: Startup finished in 4.973s (kernel) + 4.747s (userspace) = 9.720s. May 14 00:51:46.070082 kubelet[1378]: E0514 00:51:46.070029 1378 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:51:46.071913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:51:46.072073 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:51:49.039755 systemd[1]: Created slice system-sshd.slice. May 14 00:51:49.040864 systemd[1]: Started sshd@0-10.0.0.129:22-10.0.0.1:36350.service. May 14 00:51:49.083965 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 36350 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:51:49.086227 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:51:49.097258 systemd-logind[1303]: New session 1 of user core. May 14 00:51:49.098023 systemd[1]: Created slice user-500.slice. May 14 00:51:49.098992 systemd[1]: Starting user-runtime-dir@500.service... May 14 00:51:49.107625 systemd[1]: Finished user-runtime-dir@500.service. May 14 00:51:49.108777 systemd[1]: Starting user@500.service... 
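The CRI configuration dumped above shows runc running as io.containerd.runc.v2 with SystemdCgroup:false, which matches the cgroupfs cgroup driver the kubelet reports further down. In containerd 1.6 that setting lives in the CRI runc options stanza of config.toml (path assumed to be /etc/containerd/config.toml); a sketch of just that stanza:

    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        # false = cgroupfs driver, matching the kubelet's CgroupDriver below
        SystemdCgroup = false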
May 14 00:51:49.112093 (systemd)[1410]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:51:49.169438 systemd[1410]: Queued start job for default target default.target. May 14 00:51:49.169641 systemd[1410]: Reached target paths.target. May 14 00:51:49.169655 systemd[1410]: Reached target sockets.target. May 14 00:51:49.169666 systemd[1410]: Reached target timers.target. May 14 00:51:49.169675 systemd[1410]: Reached target basic.target. May 14 00:51:49.169714 systemd[1410]: Reached target default.target. May 14 00:51:49.169735 systemd[1410]: Startup finished in 52ms. May 14 00:51:49.170218 systemd[1]: Started user@500.service. May 14 00:51:49.171086 systemd[1]: Started session-1.scope. May 14 00:51:49.219814 systemd[1]: Started sshd@1-10.0.0.129:22-10.0.0.1:36352.service. May 14 00:51:49.261407 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 36352 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:51:49.262810 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:51:49.267033 systemd[1]: Started session-2.scope. May 14 00:51:49.267280 systemd-logind[1303]: New session 2 of user core. May 14 00:51:49.321105 sshd[1419]: pam_unix(sshd:session): session closed for user core May 14 00:51:49.322394 systemd[1]: Started sshd@2-10.0.0.129:22-10.0.0.1:36364.service. May 14 00:51:49.324152 systemd-logind[1303]: Session 2 logged out. Waiting for processes to exit. May 14 00:51:49.324325 systemd[1]: sshd@1-10.0.0.129:22-10.0.0.1:36352.service: Deactivated successfully. May 14 00:51:49.324988 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:51:49.325506 systemd-logind[1303]: Removed session 2. May 14 00:51:49.358545 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 36364 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:51:49.359827 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:51:49.364531 systemd[1]: Started session-3.scope. May 14 00:51:49.364867 systemd-logind[1303]: New session 3 of user core. May 14 00:51:49.416004 sshd[1424]: pam_unix(sshd:session): session closed for user core May 14 00:51:49.418095 systemd[1]: Started sshd@3-10.0.0.129:22-10.0.0.1:36374.service. May 14 00:51:49.418914 systemd[1]: sshd@2-10.0.0.129:22-10.0.0.1:36364.service: Deactivated successfully. May 14 00:51:49.420150 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:51:49.420533 systemd-logind[1303]: Session 3 logged out. Waiting for processes to exit. May 14 00:51:49.421196 systemd-logind[1303]: Removed session 3. May 14 00:51:49.455707 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 36374 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:51:49.456794 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:51:49.460271 systemd-logind[1303]: New session 4 of user core. May 14 00:51:49.461436 systemd[1]: Started session-4.scope. May 14 00:51:49.514980 sshd[1431]: pam_unix(sshd:session): session closed for user core May 14 00:51:49.517129 systemd[1]: Started sshd@4-10.0.0.129:22-10.0.0.1:36380.service. May 14 00:51:49.517697 systemd[1]: sshd@3-10.0.0.129:22-10.0.0.1:36374.service: Deactivated successfully. May 14 00:51:49.518727 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:51:49.519010 systemd-logind[1303]: Session 4 logged out. Waiting for processes to exit. 
May 14 00:51:49.519738 systemd-logind[1303]: Removed session 4. May 14 00:51:49.553291 sshd[1438]: Accepted publickey for core from 10.0.0.1 port 36380 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:51:49.554572 sshd[1438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:51:49.557563 systemd-logind[1303]: New session 5 of user core. May 14 00:51:49.558308 systemd[1]: Started session-5.scope. May 14 00:51:49.615231 sudo[1444]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:51:49.617531 sudo[1444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 14 00:51:49.669668 systemd[1]: Starting docker.service... May 14 00:51:49.749568 env[1456]: time="2025-05-14T00:51:49.749512542Z" level=info msg="Starting up" May 14 00:51:49.750934 env[1456]: time="2025-05-14T00:51:49.750900492Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:51:49.750934 env[1456]: time="2025-05-14T00:51:49.750925564Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:51:49.751030 env[1456]: time="2025-05-14T00:51:49.750947301Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:51:49.751030 env[1456]: time="2025-05-14T00:51:49.750958350Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:51:49.753461 env[1456]: time="2025-05-14T00:51:49.753253264Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:51:49.753461 env[1456]: time="2025-05-14T00:51:49.753438611Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:51:49.753461 env[1456]: time="2025-05-14T00:51:49.753456732Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:51:49.753592 env[1456]: time="2025-05-14T00:51:49.753466134Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:51:49.919525 env[1456]: time="2025-05-14T00:51:49.919432510Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 14 00:51:49.919525 env[1456]: time="2025-05-14T00:51:49.919458546Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 14 00:51:49.919810 env[1456]: time="2025-05-14T00:51:49.919583383Z" level=info msg="Loading containers: start." May 14 00:51:50.036121 kernel: Initializing XFRM netlink socket May 14 00:51:50.061949 env[1456]: time="2025-05-14T00:51:50.061912101Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 14 00:51:50.115653 systemd-networkd[1090]: docker0: Link UP May 14 00:51:50.134458 env[1456]: time="2025-05-14T00:51:50.134425293Z" level=info msg="Loading containers: done." May 14 00:51:50.151547 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1087574670-merged.mount: Deactivated successfully. 
May 14 00:51:50.156089 env[1456]: time="2025-05-14T00:51:50.156045826Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:51:50.156282 env[1456]: time="2025-05-14T00:51:50.156252754Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 14 00:51:50.156375 env[1456]: time="2025-05-14T00:51:50.156353748Z" level=info msg="Daemon has completed initialization" May 14 00:51:50.171213 systemd[1]: Started docker.service. May 14 00:51:50.175656 env[1456]: time="2025-05-14T00:51:50.175538009Z" level=info msg="API listen on /run/docker.sock" May 14 00:51:50.865052 env[1318]: time="2025-05-14T00:51:50.865013398Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 00:51:51.442200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount65927177.mount: Deactivated successfully. May 14 00:51:53.067889 env[1318]: time="2025-05-14T00:51:53.067832737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:53.069390 env[1318]: time="2025-05-14T00:51:53.069360338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:53.071755 env[1318]: time="2025-05-14T00:51:53.071716589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:53.073291 env[1318]: time="2025-05-14T00:51:53.073261916Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:53.074024 env[1318]: time="2025-05-14T00:51:53.073981280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 14 00:51:53.083756 env[1318]: time="2025-05-14T00:51:53.083731575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 00:51:54.728955 env[1318]: time="2025-05-14T00:51:54.728874446Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:54.731225 env[1318]: time="2025-05-14T00:51:54.731194644Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:54.732597 env[1318]: time="2025-05-14T00:51:54.732565868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:54.734709 env[1318]: time="2025-05-14T00:51:54.734685166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 14 00:51:54.735505 env[1318]: time="2025-05-14T00:51:54.735478504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 14 00:51:54.744808 env[1318]: time="2025-05-14T00:51:54.744772970Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 00:51:56.007794 env[1318]: time="2025-05-14T00:51:56.007735121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:56.009837 env[1318]: time="2025-05-14T00:51:56.009803473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:56.011404 env[1318]: time="2025-05-14T00:51:56.011368662Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:56.015883 env[1318]: time="2025-05-14T00:51:56.015825648Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:56.016301 env[1318]: time="2025-05-14T00:51:56.016260892Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 14 00:51:56.027885 env[1318]: time="2025-05-14T00:51:56.027851125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 00:51:56.323005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:51:56.323200 systemd[1]: Stopped kubelet.service. May 14 00:51:56.324579 systemd[1]: Starting kubelet.service... May 14 00:51:56.411350 systemd[1]: Started kubelet.service. May 14 00:51:56.466955 kubelet[1621]: E0514 00:51:56.466845 1621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:51:56.469505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:51:56.469643 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:51:57.174923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800655322.mount: Deactivated successfully. 
May 14 00:51:57.713866 env[1318]: time="2025-05-14T00:51:57.713818048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:57.715775 env[1318]: time="2025-05-14T00:51:57.715745250Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:57.717526 env[1318]: time="2025-05-14T00:51:57.717489571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:57.718784 env[1318]: time="2025-05-14T00:51:57.718748305Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:57.719226 env[1318]: time="2025-05-14T00:51:57.719201922Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 14 00:51:57.728259 env[1318]: time="2025-05-14T00:51:57.728217939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 00:51:58.261108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024865339.mount: Deactivated successfully. May 14 00:51:58.987258 env[1318]: time="2025-05-14T00:51:58.987205894Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:58.988607 env[1318]: time="2025-05-14T00:51:58.988539728Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:58.990854 env[1318]: time="2025-05-14T00:51:58.990329255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:58.993151 env[1318]: time="2025-05-14T00:51:58.992689509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:58.993470 env[1318]: time="2025-05-14T00:51:58.993438837Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 00:51:59.002620 env[1318]: time="2025-05-14T00:51:59.002533999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 00:51:59.462568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723723992.mount: Deactivated successfully. 
May 14 00:51:59.468079 env[1318]: time="2025-05-14T00:51:59.468037809Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:59.470233 env[1318]: time="2025-05-14T00:51:59.470197351Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:59.472250 env[1318]: time="2025-05-14T00:51:59.472214725Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:59.473585 env[1318]: time="2025-05-14T00:51:59.473552820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:51:59.474340 env[1318]: time="2025-05-14T00:51:59.474314196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 14 00:51:59.483698 env[1318]: time="2025-05-14T00:51:59.483658756Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 00:51:59.948363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272235436.mount: Deactivated successfully. May 14 00:52:02.846555 env[1318]: time="2025-05-14T00:52:02.846488310Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:02.848294 env[1318]: time="2025-05-14T00:52:02.848262390Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:02.850667 env[1318]: time="2025-05-14T00:52:02.850639945Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:02.852299 env[1318]: time="2025-05-14T00:52:02.852274675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:02.853156 env[1318]: time="2025-05-14T00:52:02.853126587Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 14 00:52:06.720444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 00:52:06.720608 systemd[1]: Stopped kubelet.service. May 14 00:52:06.722003 systemd[1]: Starting kubelet.service... May 14 00:52:06.805980 systemd[1]: Started kubelet.service. 
May 14 00:52:06.849586 kubelet[1733]: E0514 00:52:06.849535 1733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:52:06.851475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:52:06.851614 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:52:07.544450 systemd[1]: Stopped kubelet.service. May 14 00:52:07.546493 systemd[1]: Starting kubelet.service... May 14 00:52:07.563056 systemd[1]: Reloading. May 14 00:52:07.611413 /usr/lib/systemd/system-generators/torcx-generator[1769]: time="2025-05-14T00:52:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:52:07.611443 /usr/lib/systemd/system-generators/torcx-generator[1769]: time="2025-05-14T00:52:07Z" level=info msg="torcx already run" May 14 00:52:07.757313 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:52:07.757332 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:52:07.772592 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:52:07.832238 systemd[1]: Started kubelet.service. May 14 00:52:07.835180 systemd[1]: Stopping kubelet.service... May 14 00:52:07.835702 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:52:07.835927 systemd[1]: Stopped kubelet.service. May 14 00:52:07.837407 systemd[1]: Starting kubelet.service... May 14 00:52:07.914291 systemd[1]: Started kubelet.service. May 14 00:52:07.953889 kubelet[1829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:52:07.953889 kubelet[1829]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:52:07.953889 kubelet[1829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
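The three kubelet failures above (00:51:46, 00:51:56, 00:52:06) share one cause: /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-managed nodes it is written during kubeadm init/join, after which the kubelet start below gets past config loading. For orientation only, a minimal KubeletConfiguration consistent with what this kubelet later reports (cgroupfs driver, static pods under /etc/kubernetes/manifests) would look roughly like:

    # /var/lib/kubelet/config.yaml — minimal sketch, values taken from the log below
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests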
May 14 00:52:07.958288 kubelet[1829]: I0514 00:52:07.958222 1829 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:52:08.452381 kubelet[1829]: I0514 00:52:08.452342 1829 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:52:08.452381 kubelet[1829]: I0514 00:52:08.452368 1829 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:52:08.452570 kubelet[1829]: I0514 00:52:08.452558 1829 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:52:08.480171 kubelet[1829]: I0514 00:52:08.480135 1829 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:52:08.480283 kubelet[1829]: E0514 00:52:08.480147 1829 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:08.487046 kubelet[1829]: I0514 00:52:08.487022 1829 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:52:08.488378 kubelet[1829]: I0514 00:52:08.488340 1829 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:52:08.488534 kubelet[1829]: I0514 00:52:08.488376 1829 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:52:08.488616 kubelet[1829]: I0514 00:52:08.488598 1829 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:52:08.488616 kubelet[1829]: I0514 00:52:08.488608 1829 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:52:08.488873 kubelet[1829]: I0514 00:52:08.488848 1829 state_mem.go:36] "Initialized new in-memory state store" May 14 
00:52:08.489871 kubelet[1829]: I0514 00:52:08.489758 1829 kubelet.go:400] "Attempting to sync node with API server" May 14 00:52:08.489871 kubelet[1829]: I0514 00:52:08.489779 1829 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:52:08.489981 kubelet[1829]: I0514 00:52:08.489969 1829 kubelet.go:312] "Adding apiserver pod source" May 14 00:52:08.490064 kubelet[1829]: I0514 00:52:08.490052 1829 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:52:08.491105 kubelet[1829]: I0514 00:52:08.491021 1829 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:52:08.491436 kubelet[1829]: I0514 00:52:08.491415 1829 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:52:08.491482 kubelet[1829]: W0514 00:52:08.491412 1829 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:08.491482 kubelet[1829]: E0514 00:52:08.491460 1829 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:08.491530 kubelet[1829]: W0514 00:52:08.491511 1829 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:52:08.491696 kubelet[1829]: W0514 00:52:08.491663 1829 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:08.491782 kubelet[1829]: E0514 00:52:08.491771 1829 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:08.492224 kubelet[1829]: I0514 00:52:08.492198 1829 server.go:1264] "Started kubelet" May 14 00:52:08.493573 kubelet[1829]: I0514 00:52:08.493529 1829 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:52:08.497271 kubelet[1829]: E0514 00:52:08.497119 1829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3e86a70c3c28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:52:08.492178472 +0000 UTC m=+0.573975317,LastTimestamp:2025-05-14 00:52:08.492178472 +0000 UTC m=+0.573975317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:52:08.498775 kubelet[1829]: I0514 00:52:08.498725 1829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 May 14 00:52:08.498981 kubelet[1829]: I0514 00:52:08.498959 1829 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:52:08.500909 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 14 00:52:08.500979 kubelet[1829]: I0514 00:52:08.500521 1829 server.go:455] "Adding debug handlers to kubelet server" May 14 00:52:08.503548 kubelet[1829]: I0514 00:52:08.503519 1829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:52:08.505532 kubelet[1829]: I0514 00:52:08.505518 1829 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:52:08.505746 kubelet[1829]: I0514 00:52:08.505733 1829 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:52:08.505981 kubelet[1829]: I0514 00:52:08.505970 1829 reconciler.go:26] "Reconciler: start to sync state" May 14 00:52:08.506402 kubelet[1829]: W0514 00:52:08.506362 1829 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:08.506508 kubelet[1829]: E0514 00:52:08.506495 1829 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:08.507863 kubelet[1829]: E0514 00:52:08.507832 1829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="200ms" May 14 00:52:08.508863 kubelet[1829]: I0514 00:52:08.508821 1829 factory.go:221] Registration of the systemd container factory successfully May 14 00:52:08.508990 kubelet[1829]: I0514 00:52:08.508963 1829 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:52:08.510263 kubelet[1829]: E0514 00:52:08.510206 1829 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:52:08.510625 kubelet[1829]: I0514 00:52:08.510602 1829 factory.go:221] Registration of the containerd container factory successfully May 14 00:52:08.521249 kubelet[1829]: I0514 00:52:08.521203 1829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:52:08.521985 kubelet[1829]: I0514 00:52:08.521957 1829 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:52:08.522162 kubelet[1829]: I0514 00:52:08.522144 1829 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:52:08.522203 kubelet[1829]: I0514 00:52:08.522170 1829 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:52:08.522229 kubelet[1829]: E0514 00:52:08.522210 1829 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:52:08.527761 kubelet[1829]: W0514 00:52:08.527727 1829 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:08.527809 kubelet[1829]: E0514 00:52:08.527767 1829 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:08.528610 kubelet[1829]: I0514 00:52:08.528593 1829 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:52:08.528610 kubelet[1829]: I0514 00:52:08.528609 1829 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:52:08.528684 kubelet[1829]: I0514 00:52:08.528625 1829 state_mem.go:36] "Initialized new in-memory state store" May 14 00:52:08.607141 kubelet[1829]: I0514 00:52:08.607112 1829 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:52:08.607423 kubelet[1829]: E0514 00:52:08.607405 1829 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" May 14 00:52:08.608916 kubelet[1829]: I0514 00:52:08.608895 1829 policy_none.go:49] "None policy: Start" May 14 00:52:08.609450 kubelet[1829]: I0514 00:52:08.609437 1829 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:52:08.609529 kubelet[1829]: I0514 00:52:08.609457 1829 state_mem.go:35] "Initializing new in-memory state store" May 14 00:52:08.613220 kubelet[1829]: I0514 00:52:08.613196 1829 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:52:08.613366 kubelet[1829]: I0514 00:52:08.613327 1829 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:52:08.613430 kubelet[1829]: I0514 00:52:08.613417 1829 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:52:08.615452 kubelet[1829]: E0514 00:52:08.615437 1829 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:52:08.622650 kubelet[1829]: I0514 00:52:08.622619 1829 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 00:52:08.623454 kubelet[1829]: I0514 00:52:08.623433 1829 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 00:52:08.624209 kubelet[1829]: I0514 00:52:08.624189 1829 topology_manager.go:215] "Topology Admit Handler" podUID="0c0f2a44db473e9836f7e811b6069807" 
podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 00:52:08.707087 kubelet[1829]: I0514 00:52:08.706478 1829 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:08.707087 kubelet[1829]: I0514 00:52:08.706509 1829 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:08.707087 kubelet[1829]: I0514 00:52:08.706530 1829 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 00:52:08.707087 kubelet[1829]: I0514 00:52:08.706546 1829 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c0f2a44db473e9836f7e811b6069807-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c0f2a44db473e9836f7e811b6069807\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:08.707087 kubelet[1829]: I0514 00:52:08.706561 1829 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c0f2a44db473e9836f7e811b6069807-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c0f2a44db473e9836f7e811b6069807\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:08.707508 kubelet[1829]: I0514 00:52:08.706575 1829 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:08.707508 kubelet[1829]: I0514 00:52:08.706593 1829 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:08.707508 kubelet[1829]: I0514 00:52:08.706609 1829 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:08.707508 kubelet[1829]: I0514 00:52:08.706625 1829 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c0f2a44db473e9836f7e811b6069807-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"0c0f2a44db473e9836f7e811b6069807\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:08.708605 kubelet[1829]: E0514 00:52:08.708569 1829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="400ms" May 14 00:52:08.809175 kubelet[1829]: I0514 00:52:08.809143 1829 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:52:08.809451 kubelet[1829]: E0514 00:52:08.809431 1829 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" May 14 00:52:08.929561 kubelet[1829]: E0514 00:52:08.929533 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:08.930619 env[1318]: time="2025-05-14T00:52:08.930335061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 00:52:08.931014 kubelet[1829]: E0514 00:52:08.930846 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:08.931249 env[1318]: time="2025-05-14T00:52:08.931214413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c0f2a44db473e9836f7e811b6069807,Namespace:kube-system,Attempt:0,}" May 14 00:52:08.931401 kubelet[1829]: E0514 00:52:08.931237 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:08.931812 env[1318]: time="2025-05-14T00:52:08.931783094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 00:52:09.109519 kubelet[1829]: E0514 00:52:09.109417 1829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="800ms" May 14 00:52:09.210960 kubelet[1829]: I0514 00:52:09.210929 1829 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:52:09.211302 kubelet[1829]: E0514 00:52:09.211265 1829 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" May 14 00:52:09.389630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1696062252.mount: Deactivated successfully. 
May 14 00:52:09.392071 env[1318]: time="2025-05-14T00:52:09.392033166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.394988 env[1318]: time="2025-05-14T00:52:09.394962674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.396329 env[1318]: time="2025-05-14T00:52:09.396303849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.397151 env[1318]: time="2025-05-14T00:52:09.397121823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.398553 env[1318]: time="2025-05-14T00:52:09.398524417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.400017 env[1318]: time="2025-05-14T00:52:09.399990552Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.401324 env[1318]: time="2025-05-14T00:52:09.401298517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.401989 env[1318]: time="2025-05-14T00:52:09.401964444Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.406270 env[1318]: time="2025-05-14T00:52:09.406243130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.407977 env[1318]: time="2025-05-14T00:52:09.407946978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.409452 env[1318]: time="2025-05-14T00:52:09.409421235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.410814 env[1318]: time="2025-05-14T00:52:09.410788818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:09.420558 kubelet[1829]: W0514 00:52:09.420465 1829 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:09.420558 kubelet[1829]: E0514 00:52:09.420536 1829 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:09.441682 env[1318]: time="2025-05-14T00:52:09.441613652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:09.441682 env[1318]: time="2025-05-14T00:52:09.441652624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:09.441682 env[1318]: time="2025-05-14T00:52:09.441663227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:09.441965 env[1318]: time="2025-05-14T00:52:09.441858888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:09.441965 env[1318]: time="2025-05-14T00:52:09.441891298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:09.441965 env[1318]: time="2025-05-14T00:52:09.441902061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:09.442334 env[1318]: time="2025-05-14T00:52:09.442198313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:09.442334 env[1318]: time="2025-05-14T00:52:09.442226082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:09.442334 env[1318]: time="2025-05-14T00:52:09.442237085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:09.442334 env[1318]: time="2025-05-14T00:52:09.442072754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93ae694f87fe42b28f7997da40c19c8f923e8ef32a69bb5d030e258d2b3fdb37 pid=1885 runtime=io.containerd.runc.v2 May 14 00:52:09.442702 env[1318]: time="2025-05-14T00:52:09.442639090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2e127c831cdcc82fe64ddf24aa043a3bd14ef34a43c349ecc4d8946aadfa5ef pid=1883 runtime=io.containerd.runc.v2 May 14 00:52:09.443087 env[1318]: time="2025-05-14T00:52:09.443021208Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77c7d032ef8cb4d1dc8bcdf92a058d63abc6d1dd6bf5e01a8e5ef4caa383f064 pid=1886 runtime=io.containerd.runc.v2 May 14 00:52:09.523468 env[1318]: time="2025-05-14T00:52:09.523432610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"93ae694f87fe42b28f7997da40c19c8f923e8ef32a69bb5d030e258d2b3fdb37\"" May 14 00:52:09.524861 env[1318]: time="2025-05-14T00:52:09.524830683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2e127c831cdcc82fe64ddf24aa043a3bd14ef34a43c349ecc4d8946aadfa5ef\"" May 14 00:52:09.525581 kubelet[1829]: E0514 00:52:09.525558 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:09.525655 kubelet[1829]: E0514 00:52:09.525563 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:09.528147 env[1318]: time="2025-05-14T00:52:09.528121223Z" level=info msg="CreateContainer within sandbox \"93ae694f87fe42b28f7997da40c19c8f923e8ef32a69bb5d030e258d2b3fdb37\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:52:09.528803 env[1318]: time="2025-05-14T00:52:09.528773945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c0f2a44db473e9836f7e811b6069807,Namespace:kube-system,Attempt:0,} returns sandbox id \"77c7d032ef8cb4d1dc8bcdf92a058d63abc6d1dd6bf5e01a8e5ef4caa383f064\"" May 14 00:52:09.529731 env[1318]: time="2025-05-14T00:52:09.529699872Z" level=info msg="CreateContainer within sandbox \"a2e127c831cdcc82fe64ddf24aa043a3bd14ef34a43c349ecc4d8946aadfa5ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:52:09.530013 kubelet[1829]: E0514 00:52:09.529994 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:09.532030 env[1318]: time="2025-05-14T00:52:09.531993343Z" level=info msg="CreateContainer within sandbox \"77c7d032ef8cb4d1dc8bcdf92a058d63abc6d1dd6bf5e01a8e5ef4caa383f064\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:52:09.546660 env[1318]: time="2025-05-14T00:52:09.546623317Z" level=info msg="CreateContainer within sandbox \"93ae694f87fe42b28f7997da40c19c8f923e8ef32a69bb5d030e258d2b3fdb37\" 
for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a6e7ff0d161f3d169fd89e087c05ce205ded56783d317e8a2f86bfb3ef374bc9\"" May 14 00:52:09.547228 env[1318]: time="2025-05-14T00:52:09.547205858Z" level=info msg="StartContainer for \"a6e7ff0d161f3d169fd89e087c05ce205ded56783d317e8a2f86bfb3ef374bc9\"" May 14 00:52:09.548584 env[1318]: time="2025-05-14T00:52:09.548541432Z" level=info msg="CreateContainer within sandbox \"a2e127c831cdcc82fe64ddf24aa043a3bd14ef34a43c349ecc4d8946aadfa5ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f5d92bce29414f5dac7f930768143ea308e8fa5ed0c6b96d7128d8c79a5b6142\"" May 14 00:52:09.548938 env[1318]: time="2025-05-14T00:52:09.548911826Z" level=info msg="StartContainer for \"f5d92bce29414f5dac7f930768143ea308e8fa5ed0c6b96d7128d8c79a5b6142\"" May 14 00:52:09.551083 env[1318]: time="2025-05-14T00:52:09.551047128Z" level=info msg="CreateContainer within sandbox \"77c7d032ef8cb4d1dc8bcdf92a058d63abc6d1dd6bf5e01a8e5ef4caa383f064\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f7280e88aa5edd84cde610597de3664438f2acf6f0b2fbec039a13a93fc30ce\"" May 14 00:52:09.551612 env[1318]: time="2025-05-14T00:52:09.551581574Z" level=info msg="StartContainer for \"8f7280e88aa5edd84cde610597de3664438f2acf6f0b2fbec039a13a93fc30ce\"" May 14 00:52:09.599626 kubelet[1829]: W0514 00:52:09.595496 1829 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:09.599626 kubelet[1829]: E0514 00:52:09.595555 1829 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:09.607392 kubelet[1829]: W0514 00:52:09.607338 1829 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:09.607392 kubelet[1829]: E0514 00:52:09.607395 1829 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 14 00:52:09.679498 env[1318]: time="2025-05-14T00:52:09.673335829Z" level=info msg="StartContainer for \"8f7280e88aa5edd84cde610597de3664438f2acf6f0b2fbec039a13a93fc30ce\" returns successfully" May 14 00:52:09.679498 env[1318]: time="2025-05-14T00:52:09.673255484Z" level=info msg="StartContainer for \"f5d92bce29414f5dac7f930768143ea308e8fa5ed0c6b96d7128d8c79a5b6142\" returns successfully" May 14 00:52:09.679498 env[1318]: time="2025-05-14T00:52:09.673568461Z" level=info msg="StartContainer for \"a6e7ff0d161f3d169fd89e087c05ce205ded56783d317e8a2f86bfb3ef374bc9\" returns successfully" May 14 00:52:10.012710 kubelet[1829]: I0514 00:52:10.012617 1829 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:52:10.537283 kubelet[1829]: E0514 00:52:10.537254 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 14 00:52:10.539397 kubelet[1829]: E0514 00:52:10.539376 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:10.541693 kubelet[1829]: E0514 00:52:10.541532 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:11.555070 kubelet[1829]: E0514 00:52:11.555042 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:11.555978 kubelet[1829]: E0514 00:52:11.555957 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:11.568800 kubelet[1829]: E0514 00:52:11.568780 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:11.595828 kubelet[1829]: E0514 00:52:11.595787 1829 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 00:52:11.671972 kubelet[1829]: I0514 00:52:11.671933 1829 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 00:52:12.495684 kubelet[1829]: I0514 00:52:12.495645 1829 apiserver.go:52] "Watching apiserver" May 14 00:52:12.506087 kubelet[1829]: I0514 00:52:12.506053 1829 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:52:12.560887 kubelet[1829]: E0514 00:52:12.560838 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:12.562018 kubelet[1829]: E0514 00:52:12.561981 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:13.551361 kubelet[1829]: E0514 00:52:13.551324 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:13.551766 kubelet[1829]: E0514 00:52:13.551747 1829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:13.880270 systemd[1]: Reloading. May 14 00:52:13.944191 /usr/lib/systemd/system-generators/torcx-generator[2123]: time="2025-05-14T00:52:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:52:13.944214 /usr/lib/systemd/system-generators/torcx-generator[2123]: time="2025-05-14T00:52:13Z" level=info msg="torcx already run" May 14 00:52:14.020452 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 14 00:52:14.020599 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:52:14.036855 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:52:14.108893 kubelet[1829]: I0514 00:52:14.108851 1829 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:52:14.109282 systemd[1]: Stopping kubelet.service... May 14 00:52:14.123558 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:52:14.123944 systemd[1]: Stopped kubelet.service. May 14 00:52:14.125794 systemd[1]: Starting kubelet.service... May 14 00:52:14.216179 systemd[1]: Started kubelet.service. May 14 00:52:14.259875 kubelet[2176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:52:14.259875 kubelet[2176]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:52:14.259875 kubelet[2176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:52:14.260234 kubelet[2176]: I0514 00:52:14.259926 2176 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:52:14.265010 kubelet[2176]: I0514 00:52:14.264969 2176 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:52:14.265010 kubelet[2176]: I0514 00:52:14.265004 2176 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:52:14.265763 kubelet[2176]: I0514 00:52:14.265199 2176 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:52:14.266786 kubelet[2176]: I0514 00:52:14.266768 2176 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:52:14.270010 kubelet[2176]: I0514 00:52:14.269985 2176 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:52:14.276978 kubelet[2176]: I0514 00:52:14.276957 2176 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:52:14.277691 kubelet[2176]: I0514 00:52:14.277655 2176 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:52:14.277847 kubelet[2176]: I0514 00:52:14.277693 2176 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:52:14.277929 kubelet[2176]: I0514 00:52:14.277853 2176 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:52:14.277929 kubelet[2176]: I0514 00:52:14.277862 2176 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:52:14.277929 kubelet[2176]: I0514 00:52:14.277897 2176 state_mem.go:36] "Initialized new in-memory state store" May 14 00:52:14.278013 kubelet[2176]: I0514 00:52:14.277991 2176 kubelet.go:400] "Attempting to sync node with API server" May 14 00:52:14.278013 kubelet[2176]: I0514 00:52:14.278001 2176 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:52:14.278059 kubelet[2176]: I0514 00:52:14.278023 2176 kubelet.go:312] "Adding apiserver pod source" May 14 00:52:14.278059 kubelet[2176]: I0514 00:52:14.278039 2176 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:52:14.278875 kubelet[2176]: I0514 00:52:14.278847 2176 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:52:14.279020 kubelet[2176]: I0514 00:52:14.279001 2176 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:52:14.279387 kubelet[2176]: I0514 00:52:14.279366 2176 server.go:1264] "Started kubelet" May 14 00:52:14.281211 kubelet[2176]: I0514 00:52:14.281165 2176 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:52:14.281422 kubelet[2176]: I0514 00:52:14.281398 2176 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 
00:52:14.281510 kubelet[2176]: I0514 00:52:14.281435 2176 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:52:14.281588 kubelet[2176]: I0514 00:52:14.281567 2176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:52:14.282321 kubelet[2176]: I0514 00:52:14.282290 2176 server.go:455] "Adding debug handlers to kubelet server" May 14 00:52:14.322010 kubelet[2176]: I0514 00:52:14.321965 2176 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:52:14.323028 kubelet[2176]: I0514 00:52:14.322845 2176 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:52:14.323492 kubelet[2176]: E0514 00:52:14.323468 2176 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:52:14.324364 kubelet[2176]: I0514 00:52:14.324342 2176 reconciler.go:26] "Reconciler: start to sync state" May 14 00:52:14.324806 kubelet[2176]: I0514 00:52:14.324770 2176 factory.go:221] Registration of the systemd container factory successfully May 14 00:52:14.324870 kubelet[2176]: I0514 00:52:14.324850 2176 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:52:14.327747 kubelet[2176]: I0514 00:52:14.327715 2176 factory.go:221] Registration of the containerd container factory successfully May 14 00:52:14.329362 kubelet[2176]: I0514 00:52:14.329332 2176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:52:14.330490 kubelet[2176]: I0514 00:52:14.330460 2176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:52:14.330553 kubelet[2176]: I0514 00:52:14.330498 2176 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:52:14.330553 kubelet[2176]: I0514 00:52:14.330516 2176 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:52:14.330595 kubelet[2176]: E0514 00:52:14.330557 2176 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:52:14.372403 kubelet[2176]: I0514 00:52:14.372369 2176 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:52:14.372403 kubelet[2176]: I0514 00:52:14.372397 2176 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:52:14.372542 kubelet[2176]: I0514 00:52:14.372419 2176 state_mem.go:36] "Initialized new in-memory state store" May 14 00:52:14.372585 kubelet[2176]: I0514 00:52:14.372568 2176 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:52:14.372632 kubelet[2176]: I0514 00:52:14.372584 2176 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:52:14.372632 kubelet[2176]: I0514 00:52:14.372602 2176 policy_none.go:49] "None policy: Start" May 14 00:52:14.373183 kubelet[2176]: I0514 00:52:14.373163 2176 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:52:14.373244 kubelet[2176]: I0514 00:52:14.373189 2176 state_mem.go:35] "Initializing new in-memory state store" May 14 00:52:14.373386 kubelet[2176]: I0514 00:52:14.373348 2176 state_mem.go:75] "Updated machine memory state" May 14 00:52:14.375397 kubelet[2176]: I0514 00:52:14.375373 2176 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:52:14.375608 
kubelet[2176]: I0514 00:52:14.375568 2176 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:52:14.375689 kubelet[2176]: I0514 00:52:14.375671 2176 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:52:14.425495 kubelet[2176]: I0514 00:52:14.425466 2176 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:52:14.431235 kubelet[2176]: I0514 00:52:14.431203 2176 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 14 00:52:14.431332 kubelet[2176]: I0514 00:52:14.431276 2176 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 00:52:14.431389 kubelet[2176]: I0514 00:52:14.431360 2176 topology_manager.go:215] "Topology Admit Handler" podUID="0c0f2a44db473e9836f7e811b6069807" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 00:52:14.431925 kubelet[2176]: I0514 00:52:14.431445 2176 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 00:52:14.431925 kubelet[2176]: I0514 00:52:14.431484 2176 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 00:52:14.437384 kubelet[2176]: E0514 00:52:14.437356 2176 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 00:52:14.440738 kubelet[2176]: E0514 00:52:14.440604 2176 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 00:52:14.525407 kubelet[2176]: I0514 00:52:14.525296 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:14.525407 kubelet[2176]: I0514 00:52:14.525347 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:14.525407 kubelet[2176]: I0514 00:52:14.525368 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:14.525407 kubelet[2176]: I0514 00:52:14.525385 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:14.525407 kubelet[2176]: I0514 00:52:14.525404 2176 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 00:52:14.525617 kubelet[2176]: I0514 00:52:14.525420 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c0f2a44db473e9836f7e811b6069807-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c0f2a44db473e9836f7e811b6069807\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:14.525617 kubelet[2176]: I0514 00:52:14.525437 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c0f2a44db473e9836f7e811b6069807-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c0f2a44db473e9836f7e811b6069807\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:14.525617 kubelet[2176]: I0514 00:52:14.525452 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:52:14.525617 kubelet[2176]: I0514 00:52:14.525466 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c0f2a44db473e9836f7e811b6069807-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c0f2a44db473e9836f7e811b6069807\") " pod="kube-system/kube-apiserver-localhost" May 14 00:52:14.738311 kubelet[2176]: E0514 00:52:14.738275 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:14.741226 kubelet[2176]: E0514 00:52:14.741196 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:14.741452 kubelet[2176]: E0514 00:52:14.741436 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:15.278803 kubelet[2176]: I0514 00:52:15.278766 2176 apiserver.go:52] "Watching apiserver" May 14 00:52:15.323644 kubelet[2176]: I0514 00:52:15.322430 2176 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:52:15.344485 kubelet[2176]: E0514 00:52:15.344454 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:15.344780 kubelet[2176]: E0514 00:52:15.344756 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:15.351494 kubelet[2176]: E0514 00:52:15.351452 2176 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 00:52:15.351905 
kubelet[2176]: E0514 00:52:15.351878 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:15.369715 kubelet[2176]: I0514 00:52:15.369508 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.369492766 podStartE2EDuration="3.369492766s" podCreationTimestamp="2025-05-14 00:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:15.363275741 +0000 UTC m=+1.142674556" watchObservedRunningTime="2025-05-14 00:52:15.369492766 +0000 UTC m=+1.148891581" May 14 00:52:15.376377 kubelet[2176]: I0514 00:52:15.376317 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.376305114 podStartE2EDuration="3.376305114s" podCreationTimestamp="2025-05-14 00:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:15.369840094 +0000 UTC m=+1.149238909" watchObservedRunningTime="2025-05-14 00:52:15.376305114 +0000 UTC m=+1.155703929" May 14 00:52:15.384510 kubelet[2176]: I0514 00:52:15.384467 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.384444006 podStartE2EDuration="1.384444006s" podCreationTimestamp="2025-05-14 00:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:15.376623958 +0000 UTC m=+1.156022773" watchObservedRunningTime="2025-05-14 00:52:15.384444006 +0000 UTC m=+1.163842821" May 14 00:52:15.929452 sudo[1444]: pam_unix(sudo:session): session closed for user root May 14 00:52:15.931404 sshd[1438]: pam_unix(sshd:session): session closed for user core May 14 00:52:15.934387 systemd[1]: sshd@4-10.0.0.129:22-10.0.0.1:36380.service: Deactivated successfully. May 14 00:52:15.934711 systemd-logind[1303]: Session 5 logged out. Waiting for processes to exit. May 14 00:52:15.935173 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:52:15.935577 systemd-logind[1303]: Removed session 5. 
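
The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and observedRunningTime. A quick arithmetic check for the kube-scheduler-localhost entry, using the two timestamps copied from the log (nanoseconds truncated to microseconds so Python's datetime can parse them):

from datetime import datetime, timezone

# Timestamps copied from the pod_startup_latency_tracker entry for
# kube-scheduler-localhost above.
created = datetime(2025, 5, 14, 0, 52, 12, tzinfo=timezone.utc)  # podCreationTimestamp
observed = datetime.fromisoformat("2025-05-14 00:52:15.369492").replace(tzinfo=timezone.utc)  # observedRunningTime

# Prints 3.369492 -- matching the reported podStartSLOduration=3.369492766s
# up to the truncated nanoseconds.
print((observed - created).total_seconds())
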
May 14 00:52:16.346460 kubelet[2176]: E0514 00:52:16.346352 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:16.346460 kubelet[2176]: E0514 00:52:16.346435 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:17.505368 kubelet[2176]: E0514 00:52:17.505322 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:23.479104 kubelet[2176]: E0514 00:52:23.475759 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:24.359771 kubelet[2176]: E0514 00:52:24.359740 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:26.319446 kubelet[2176]: E0514 00:52:26.319396 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:27.222113 kubelet[2176]: I0514 00:52:27.222073 2176 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:52:27.222661 env[1318]: time="2025-05-14T00:52:27.222624254Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:52:27.223127 kubelet[2176]: I0514 00:52:27.223074 2176 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:52:27.512133 kubelet[2176]: E0514 00:52:27.511769 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:28.161804 kubelet[2176]: I0514 00:52:28.161721 2176 topology_manager.go:215] "Topology Admit Handler" podUID="0864c29d-4d79-4d5b-b960-9d6870525bd9" podNamespace="kube-system" podName="kube-proxy-c9v6b" May 14 00:52:28.167025 kubelet[2176]: I0514 00:52:28.166988 2176 topology_manager.go:215] "Topology Admit Handler" podUID="e08a858c-a450-423b-afa3-40abd72ff1be" podNamespace="kube-flannel" podName="kube-flannel-ds-cbbhw" May 14 00:52:28.226334 kubelet[2176]: I0514 00:52:28.226276 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e08a858c-a450-423b-afa3-40abd72ff1be-run\") pod \"kube-flannel-ds-cbbhw\" (UID: \"e08a858c-a450-423b-afa3-40abd72ff1be\") " pod="kube-flannel/kube-flannel-ds-cbbhw" May 14 00:52:28.226334 kubelet[2176]: I0514 00:52:28.226320 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/e08a858c-a450-423b-afa3-40abd72ff1be-cni\") pod \"kube-flannel-ds-cbbhw\" (UID: \"e08a858c-a450-423b-afa3-40abd72ff1be\") " pod="kube-flannel/kube-flannel-ds-cbbhw" May 14 00:52:28.226334 kubelet[2176]: I0514 00:52:28.226345 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: 
\"kubernetes.io/configmap/e08a858c-a450-423b-afa3-40abd72ff1be-flannel-cfg\") pod \"kube-flannel-ds-cbbhw\" (UID: \"e08a858c-a450-423b-afa3-40abd72ff1be\") " pod="kube-flannel/kube-flannel-ds-cbbhw" May 14 00:52:28.226532 kubelet[2176]: I0514 00:52:28.226365 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0864c29d-4d79-4d5b-b960-9d6870525bd9-xtables-lock\") pod \"kube-proxy-c9v6b\" (UID: \"0864c29d-4d79-4d5b-b960-9d6870525bd9\") " pod="kube-system/kube-proxy-c9v6b" May 14 00:52:28.226532 kubelet[2176]: I0514 00:52:28.226407 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/e08a858c-a450-423b-afa3-40abd72ff1be-cni-plugin\") pod \"kube-flannel-ds-cbbhw\" (UID: \"e08a858c-a450-423b-afa3-40abd72ff1be\") " pod="kube-flannel/kube-flannel-ds-cbbhw" May 14 00:52:28.226532 kubelet[2176]: I0514 00:52:28.226461 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e08a858c-a450-423b-afa3-40abd72ff1be-xtables-lock\") pod \"kube-flannel-ds-cbbhw\" (UID: \"e08a858c-a450-423b-afa3-40abd72ff1be\") " pod="kube-flannel/kube-flannel-ds-cbbhw" May 14 00:52:28.226532 kubelet[2176]: I0514 00:52:28.226481 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csrhm\" (UniqueName: \"kubernetes.io/projected/e08a858c-a450-423b-afa3-40abd72ff1be-kube-api-access-csrhm\") pod \"kube-flannel-ds-cbbhw\" (UID: \"e08a858c-a450-423b-afa3-40abd72ff1be\") " pod="kube-flannel/kube-flannel-ds-cbbhw" May 14 00:52:28.226532 kubelet[2176]: I0514 00:52:28.226512 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jb5n\" (UniqueName: \"kubernetes.io/projected/0864c29d-4d79-4d5b-b960-9d6870525bd9-kube-api-access-8jb5n\") pod \"kube-proxy-c9v6b\" (UID: \"0864c29d-4d79-4d5b-b960-9d6870525bd9\") " pod="kube-system/kube-proxy-c9v6b" May 14 00:52:28.226635 kubelet[2176]: I0514 00:52:28.226533 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0864c29d-4d79-4d5b-b960-9d6870525bd9-kube-proxy\") pod \"kube-proxy-c9v6b\" (UID: \"0864c29d-4d79-4d5b-b960-9d6870525bd9\") " pod="kube-system/kube-proxy-c9v6b" May 14 00:52:28.226635 kubelet[2176]: I0514 00:52:28.226550 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0864c29d-4d79-4d5b-b960-9d6870525bd9-lib-modules\") pod \"kube-proxy-c9v6b\" (UID: \"0864c29d-4d79-4d5b-b960-9d6870525bd9\") " pod="kube-system/kube-proxy-c9v6b" May 14 00:52:28.464682 kubelet[2176]: E0514 00:52:28.464165 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:28.465595 env[1318]: time="2025-05-14T00:52:28.465535126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9v6b,Uid:0864c29d-4d79-4d5b-b960-9d6870525bd9,Namespace:kube-system,Attempt:0,}" May 14 00:52:28.475226 kubelet[2176]: E0514 00:52:28.475202 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:28.475950 env[1318]: time="2025-05-14T00:52:28.475914994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cbbhw,Uid:e08a858c-a450-423b-afa3-40abd72ff1be,Namespace:kube-flannel,Attempt:0,}" May 14 00:52:28.479440 env[1318]: time="2025-05-14T00:52:28.479358749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:28.479440 env[1318]: time="2025-05-14T00:52:28.479397550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:28.479440 env[1318]: time="2025-05-14T00:52:28.479407791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:28.479754 env[1318]: time="2025-05-14T00:52:28.479676240Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe8969f272283c5626d29f8bdd71fbc8f1826947123992282976a284e14458dc pid=2247 runtime=io.containerd.runc.v2 May 14 00:52:28.496119 env[1318]: time="2025-05-14T00:52:28.494589019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:28.496119 env[1318]: time="2025-05-14T00:52:28.494629180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:28.496119 env[1318]: time="2025-05-14T00:52:28.494638941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:28.496119 env[1318]: time="2025-05-14T00:52:28.494774585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/779f98bbb746aedcdddcbbfdb0fd9a43cb2e3debf6053042293bf2a937987a94 pid=2275 runtime=io.containerd.runc.v2 May 14 00:52:28.528542 env[1318]: time="2025-05-14T00:52:28.528493474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9v6b,Uid:0864c29d-4d79-4d5b-b960-9d6870525bd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe8969f272283c5626d29f8bdd71fbc8f1826947123992282976a284e14458dc\"" May 14 00:52:28.529120 kubelet[2176]: E0514 00:52:28.529082 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:28.531597 env[1318]: time="2025-05-14T00:52:28.531549136Z" level=info msg="CreateContainer within sandbox \"fe8969f272283c5626d29f8bdd71fbc8f1826947123992282976a284e14458dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:52:28.542954 env[1318]: time="2025-05-14T00:52:28.542915957Z" level=info msg="CreateContainer within sandbox \"fe8969f272283c5626d29f8bdd71fbc8f1826947123992282976a284e14458dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d122f6d9e5d5ad6ae4188f323dd8063ba8add4ab37755294bb1e59ad5eae841\"" May 14 00:52:28.544953 env[1318]: time="2025-05-14T00:52:28.544511970Z" level=info msg="StartContainer for \"8d122f6d9e5d5ad6ae4188f323dd8063ba8add4ab37755294bb1e59ad5eae841\"" May 14 00:52:28.549537 env[1318]: time="2025-05-14T00:52:28.549506897Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-flannel-ds-cbbhw,Uid:e08a858c-a450-423b-afa3-40abd72ff1be,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"779f98bbb746aedcdddcbbfdb0fd9a43cb2e3debf6053042293bf2a937987a94\"" May 14 00:52:28.550136 kubelet[2176]: E0514 00:52:28.550115 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:28.551706 env[1318]: time="2025-05-14T00:52:28.551292357Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 14 00:52:28.609486 env[1318]: time="2025-05-14T00:52:28.609432823Z" level=info msg="StartContainer for \"8d122f6d9e5d5ad6ae4188f323dd8063ba8add4ab37755294bb1e59ad5eae841\" returns successfully" May 14 00:52:29.368600 kubelet[2176]: E0514 00:52:29.368313 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:29.656558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833885331.mount: Deactivated successfully. May 14 00:52:29.694176 env[1318]: time="2025-05-14T00:52:29.694135778Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:29.695329 env[1318]: time="2025-05-14T00:52:29.695303175Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:29.697860 env[1318]: time="2025-05-14T00:52:29.697825376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:29.700370 env[1318]: time="2025-05-14T00:52:29.700327335Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:29.700696 env[1318]: time="2025-05-14T00:52:29.700654506Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 14 00:52:29.705528 env[1318]: time="2025-05-14T00:52:29.705388016Z" level=info msg="CreateContainer within sandbox \"779f98bbb746aedcdddcbbfdb0fd9a43cb2e3debf6053042293bf2a937987a94\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 14 00:52:29.714236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843163814.mount: Deactivated successfully. 
May 14 00:52:29.714753 env[1318]: time="2025-05-14T00:52:29.714715073Z" level=info msg="CreateContainer within sandbox \"779f98bbb746aedcdddcbbfdb0fd9a43cb2e3debf6053042293bf2a937987a94\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"613a2e3c61a3af7f9c2527ddda368ed1071347c49afb4147b269a47ac8d39df6\"" May 14 00:52:29.715429 env[1318]: time="2025-05-14T00:52:29.715401535Z" level=info msg="StartContainer for \"613a2e3c61a3af7f9c2527ddda368ed1071347c49afb4147b269a47ac8d39df6\"" May 14 00:52:29.768382 env[1318]: time="2025-05-14T00:52:29.768329058Z" level=info msg="StartContainer for \"613a2e3c61a3af7f9c2527ddda368ed1071347c49afb4147b269a47ac8d39df6\" returns successfully" May 14 00:52:29.809215 env[1318]: time="2025-05-14T00:52:29.809162797Z" level=info msg="shim disconnected" id=613a2e3c61a3af7f9c2527ddda368ed1071347c49afb4147b269a47ac8d39df6 May 14 00:52:29.809215 env[1318]: time="2025-05-14T00:52:29.809216759Z" level=warning msg="cleaning up after shim disconnected" id=613a2e3c61a3af7f9c2527ddda368ed1071347c49afb4147b269a47ac8d39df6 namespace=k8s.io May 14 00:52:29.809446 env[1318]: time="2025-05-14T00:52:29.809228359Z" level=info msg="cleaning up dead shim" May 14 00:52:29.815965 env[1318]: time="2025-05-14T00:52:29.815866570Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:52:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2524 runtime=io.containerd.runc.v2\n" May 14 00:52:30.005402 update_engine[1307]: I0514 00:52:30.004568 1307 update_attempter.cc:509] Updating boot flags... May 14 00:52:30.372034 kubelet[2176]: E0514 00:52:30.371209 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:30.374131 env[1318]: time="2025-05-14T00:52:30.372617778Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 14 00:52:30.382643 kubelet[2176]: I0514 00:52:30.382573 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c9v6b" podStartSLOduration=2.382556799 podStartE2EDuration="2.382556799s" podCreationTimestamp="2025-05-14 00:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:29.376388912 +0000 UTC m=+15.155787727" watchObservedRunningTime="2025-05-14 00:52:30.382556799 +0000 UTC m=+16.161955614" May 14 00:52:31.538476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155754972.mount: Deactivated successfully. 
May 14 00:52:32.203526 env[1318]: time="2025-05-14T00:52:32.203471188Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:32.205770 env[1318]: time="2025-05-14T00:52:32.205739131Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:32.207865 env[1318]: time="2025-05-14T00:52:32.207830428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:32.209760 env[1318]: time="2025-05-14T00:52:32.209736680Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:52:32.211427 env[1318]: time="2025-05-14T00:52:32.211383085Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 14 00:52:32.214454 env[1318]: time="2025-05-14T00:52:32.214404328Z" level=info msg="CreateContainer within sandbox \"779f98bbb746aedcdddcbbfdb0fd9a43cb2e3debf6053042293bf2a937987a94\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 00:52:32.222642 env[1318]: time="2025-05-14T00:52:32.222607193Z" level=info msg="CreateContainer within sandbox \"779f98bbb746aedcdddcbbfdb0fd9a43cb2e3debf6053042293bf2a937987a94\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"80ec416b62b5a8c5114a5f070b33069357709d5c805b57ca650100cb9d6f0547\"" May 14 00:52:32.224251 env[1318]: time="2025-05-14T00:52:32.223230770Z" level=info msg="StartContainer for \"80ec416b62b5a8c5114a5f070b33069357709d5c805b57ca650100cb9d6f0547\"" May 14 00:52:32.279997 env[1318]: time="2025-05-14T00:52:32.279940284Z" level=info msg="StartContainer for \"80ec416b62b5a8c5114a5f070b33069357709d5c805b57ca650100cb9d6f0547\" returns successfully" May 14 00:52:32.329136 kubelet[2176]: I0514 00:52:32.328737 2176 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 00:52:32.370730 kubelet[2176]: I0514 00:52:32.370521 2176 topology_manager.go:215] "Topology Admit Handler" podUID="c5987e0a-5eb1-4788-8778-a3d097169564" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5km2r" May 14 00:52:32.375745 kubelet[2176]: E0514 00:52:32.375694 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:32.383452 kubelet[2176]: I0514 00:52:32.383398 2176 topology_manager.go:215] "Topology Admit Handler" podUID="246183cb-f20a-4636-9a2d-386a6d59471c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d7h76" May 14 00:52:32.398597 env[1318]: time="2025-05-14T00:52:32.398548134Z" level=info msg="shim disconnected" id=80ec416b62b5a8c5114a5f070b33069357709d5c805b57ca650100cb9d6f0547 May 14 00:52:32.398597 env[1318]: time="2025-05-14T00:52:32.398594375Z" level=warning msg="cleaning up after shim disconnected" id=80ec416b62b5a8c5114a5f070b33069357709d5c805b57ca650100cb9d6f0547 namespace=k8s.io May 14 00:52:32.398597 env[1318]: time="2025-05-14T00:52:32.398603856Z" 
level=info msg="cleaning up dead shim" May 14 00:52:32.406762 env[1318]: time="2025-05-14T00:52:32.406703038Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:52:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2591 runtime=io.containerd.runc.v2\n" May 14 00:52:32.427155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80ec416b62b5a8c5114a5f070b33069357709d5c805b57ca650100cb9d6f0547-rootfs.mount: Deactivated successfully. May 14 00:52:32.459601 kubelet[2176]: I0514 00:52:32.458967 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5987e0a-5eb1-4788-8778-a3d097169564-config-volume\") pod \"coredns-7db6d8ff4d-5km2r\" (UID: \"c5987e0a-5eb1-4788-8778-a3d097169564\") " pod="kube-system/coredns-7db6d8ff4d-5km2r" May 14 00:52:32.459601 kubelet[2176]: I0514 00:52:32.459012 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/246183cb-f20a-4636-9a2d-386a6d59471c-config-volume\") pod \"coredns-7db6d8ff4d-d7h76\" (UID: \"246183cb-f20a-4636-9a2d-386a6d59471c\") " pod="kube-system/coredns-7db6d8ff4d-d7h76" May 14 00:52:32.459601 kubelet[2176]: I0514 00:52:32.459031 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vgd6\" (UniqueName: \"kubernetes.io/projected/c5987e0a-5eb1-4788-8778-a3d097169564-kube-api-access-8vgd6\") pod \"coredns-7db6d8ff4d-5km2r\" (UID: \"c5987e0a-5eb1-4788-8778-a3d097169564\") " pod="kube-system/coredns-7db6d8ff4d-5km2r" May 14 00:52:32.459601 kubelet[2176]: I0514 00:52:32.459050 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl88v\" (UniqueName: \"kubernetes.io/projected/246183cb-f20a-4636-9a2d-386a6d59471c-kube-api-access-bl88v\") pod \"coredns-7db6d8ff4d-d7h76\" (UID: \"246183cb-f20a-4636-9a2d-386a6d59471c\") " pod="kube-system/coredns-7db6d8ff4d-d7h76" May 14 00:52:32.674373 kubelet[2176]: E0514 00:52:32.674334 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:32.675185 env[1318]: time="2025-05-14T00:52:32.675087232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5km2r,Uid:c5987e0a-5eb1-4788-8778-a3d097169564,Namespace:kube-system,Attempt:0,}" May 14 00:52:32.686326 kubelet[2176]: E0514 00:52:32.686269 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:32.687061 env[1318]: time="2025-05-14T00:52:32.687019919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7h76,Uid:246183cb-f20a-4636-9a2d-386a6d59471c,Namespace:kube-system,Attempt:0,}" May 14 00:52:32.708191 env[1318]: time="2025-05-14T00:52:32.708082656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5km2r,Uid:c5987e0a-5eb1-4788-8778-a3d097169564,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"806db5cd5c3e8ca102b098f0f64d302d443c499cec14c4a80eed7d74aa320e4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 00:52:32.708609 kubelet[2176]: E0514 
00:52:32.708569 2176 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806db5cd5c3e8ca102b098f0f64d302d443c499cec14c4a80eed7d74aa320e4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 00:52:32.708717 kubelet[2176]: E0514 00:52:32.708636 2176 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806db5cd5c3e8ca102b098f0f64d302d443c499cec14c4a80eed7d74aa320e4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-5km2r" May 14 00:52:32.708717 kubelet[2176]: E0514 00:52:32.708658 2176 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806db5cd5c3e8ca102b098f0f64d302d443c499cec14c4a80eed7d74aa320e4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-5km2r" May 14 00:52:32.708824 kubelet[2176]: E0514 00:52:32.708786 2176 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5km2r_kube-system(c5987e0a-5eb1-4788-8778-a3d097169564)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5km2r_kube-system(c5987e0a-5eb1-4788-8778-a3d097169564)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"806db5cd5c3e8ca102b098f0f64d302d443c499cec14c4a80eed7d74aa320e4c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-5km2r" podUID="c5987e0a-5eb1-4788-8778-a3d097169564" May 14 00:52:32.711027 env[1318]: time="2025-05-14T00:52:32.710493602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7h76,Uid:246183cb-f20a-4636-9a2d-386a6d59471c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70e7b14e305174564a06fe107f0be62f4260df30124ac781c3afda7a3ecd1174\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 00:52:32.711439 kubelet[2176]: E0514 00:52:32.711401 2176 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e7b14e305174564a06fe107f0be62f4260df30124ac781c3afda7a3ecd1174\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 00:52:32.711518 kubelet[2176]: E0514 00:52:32.711442 2176 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e7b14e305174564a06fe107f0be62f4260df30124ac781c3afda7a3ecd1174\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-d7h76" May 14 00:52:32.711518 kubelet[2176]: E0514 00:52:32.711459 2176 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"70e7b14e305174564a06fe107f0be62f4260df30124ac781c3afda7a3ecd1174\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-d7h76" May 14 00:52:32.711518 kubelet[2176]: E0514 00:52:32.711484 2176 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-d7h76_kube-system(246183cb-f20a-4636-9a2d-386a6d59471c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-d7h76_kube-system(246183cb-f20a-4636-9a2d-386a6d59471c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70e7b14e305174564a06fe107f0be62f4260df30124ac781c3afda7a3ecd1174\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-d7h76" podUID="246183cb-f20a-4636-9a2d-386a6d59471c" May 14 00:52:33.379403 kubelet[2176]: E0514 00:52:33.379362 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:33.382596 env[1318]: time="2025-05-14T00:52:33.382502566Z" level=info msg="CreateContainer within sandbox \"779f98bbb746aedcdddcbbfdb0fd9a43cb2e3debf6053042293bf2a937987a94\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 14 00:52:33.392817 env[1318]: time="2025-05-14T00:52:33.392779834Z" level=info msg="CreateContainer within sandbox \"779f98bbb746aedcdddcbbfdb0fd9a43cb2e3debf6053042293bf2a937987a94\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"70c72d792c84f1f363de9d720761fc0975b0aaa4cf8f27376e16fed259fe06d0\"" May 14 00:52:33.393493 env[1318]: time="2025-05-14T00:52:33.393467812Z" level=info msg="StartContainer for \"70c72d792c84f1f363de9d720761fc0975b0aaa4cf8f27376e16fed259fe06d0\"" May 14 00:52:33.427827 systemd[1]: run-netns-cni\x2db5cc183f\x2de45a\x2d9a3c\x2d684e\x2dac207e34fba6.mount: Deactivated successfully. May 14 00:52:33.427970 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-806db5cd5c3e8ca102b098f0f64d302d443c499cec14c4a80eed7d74aa320e4c-shm.mount: Deactivated successfully. May 14 00:52:33.449245 env[1318]: time="2025-05-14T00:52:33.449170467Z" level=info msg="StartContainer for \"70c72d792c84f1f363de9d720761fc0975b0aaa4cf8f27376e16fed259fe06d0\" returns successfully" May 14 00:52:34.382376 kubelet[2176]: E0514 00:52:34.382310 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:34.523889 systemd-networkd[1090]: flannel.1: Link UP May 14 00:52:34.523896 systemd-networkd[1090]: flannel.1: Gained carrier May 14 00:52:35.384012 kubelet[2176]: E0514 00:52:35.383972 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:36.125225 systemd-networkd[1090]: flannel.1: Gained IPv6LL May 14 00:52:40.340308 systemd[1]: Started sshd@5-10.0.0.129:22-10.0.0.1:35504.service. 
May 14 00:52:40.376909 sshd[2799]: Accepted publickey for core from 10.0.0.1 port 35504 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:40.378061 sshd[2799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:40.381334 systemd-logind[1303]: New session 6 of user core. May 14 00:52:40.382139 systemd[1]: Started session-6.scope. May 14 00:52:40.493559 sshd[2799]: pam_unix(sshd:session): session closed for user core May 14 00:52:40.495921 systemd[1]: sshd@5-10.0.0.129:22-10.0.0.1:35504.service: Deactivated successfully. May 14 00:52:40.496830 systemd-logind[1303]: Session 6 logged out. Waiting for processes to exit. May 14 00:52:40.496875 systemd[1]: session-6.scope: Deactivated successfully. May 14 00:52:40.497771 systemd-logind[1303]: Removed session 6. May 14 00:52:43.331682 kubelet[2176]: E0514 00:52:43.331621 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:43.332481 env[1318]: time="2025-05-14T00:52:43.332442535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7h76,Uid:246183cb-f20a-4636-9a2d-386a6d59471c,Namespace:kube-system,Attempt:0,}" May 14 00:52:43.348891 systemd-networkd[1090]: cni0: Link UP May 14 00:52:43.348898 systemd-networkd[1090]: cni0: Gained carrier May 14 00:52:43.351931 systemd-networkd[1090]: cni0: Lost carrier May 14 00:52:43.358421 systemd-networkd[1090]: vethf2117686: Link UP May 14 00:52:43.360705 kernel: cni0: port 1(vethf2117686) entered blocking state May 14 00:52:43.360793 kernel: cni0: port 1(vethf2117686) entered disabled state May 14 00:52:43.361907 kernel: device vethf2117686 entered promiscuous mode May 14 00:52:43.363683 kernel: cni0: port 1(vethf2117686) entered blocking state May 14 00:52:43.363753 kernel: cni0: port 1(vethf2117686) entered forwarding state May 14 00:52:43.364137 kernel: cni0: port 1(vethf2117686) entered disabled state May 14 00:52:43.376292 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf2117686: link becomes ready May 14 00:52:43.376379 kernel: cni0: port 1(vethf2117686) entered blocking state May 14 00:52:43.376395 kernel: cni0: port 1(vethf2117686) entered forwarding state May 14 00:52:43.377178 systemd-networkd[1090]: vethf2117686: Gained carrier May 14 00:52:43.377372 systemd-networkd[1090]: cni0: Gained carrier May 14 00:52:43.379269 env[1318]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} May 14 00:52:43.379269 env[1318]: delegateAdd: netconf sent to delegate plugin: May 14 00:52:43.388503 env[1318]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T00:52:43.388430762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:43.388503 env[1318]: time="2025-05-14T00:52:43.388478923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:43.388503 env[1318]: time="2025-05-14T00:52:43.388489443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:43.388699 env[1318]: time="2025-05-14T00:52:43.388663046Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94437f5a3ed65baf9e5c7a00e682ef9222394a470c38d567ff6f12a0f0e0724a pid=2861 runtime=io.containerd.runc.v2 May 14 00:52:43.429376 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:52:43.446398 env[1318]: time="2025-05-14T00:52:43.446340142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7h76,Uid:246183cb-f20a-4636-9a2d-386a6d59471c,Namespace:kube-system,Attempt:0,} returns sandbox id \"94437f5a3ed65baf9e5c7a00e682ef9222394a470c38d567ff6f12a0f0e0724a\"" May 14 00:52:43.447375 kubelet[2176]: E0514 00:52:43.447130 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:43.449749 env[1318]: time="2025-05-14T00:52:43.449218511Z" level=info msg="CreateContainer within sandbox \"94437f5a3ed65baf9e5c7a00e682ef9222394a470c38d567ff6f12a0f0e0724a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:52:43.461730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052937548.mount: Deactivated successfully. 
May 14 00:52:43.463271 env[1318]: time="2025-05-14T00:52:43.463206748Z" level=info msg="CreateContainer within sandbox \"94437f5a3ed65baf9e5c7a00e682ef9222394a470c38d567ff6f12a0f0e0724a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b3a74a414a98c151775c282b9103c8b7af620b222af88ad81349961cde938d4\"" May 14 00:52:43.464572 env[1318]: time="2025-05-14T00:52:43.463801958Z" level=info msg="StartContainer for \"1b3a74a414a98c151775c282b9103c8b7af620b222af88ad81349961cde938d4\"" May 14 00:52:43.525352 env[1318]: time="2025-05-14T00:52:43.524953873Z" level=info msg="StartContainer for \"1b3a74a414a98c151775c282b9103c8b7af620b222af88ad81349961cde938d4\" returns successfully" May 14 00:52:44.331980 kubelet[2176]: E0514 00:52:44.331942 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:44.332364 env[1318]: time="2025-05-14T00:52:44.332260885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5km2r,Uid:c5987e0a-5eb1-4788-8778-a3d097169564,Namespace:kube-system,Attempt:0,}" May 14 00:52:44.351849 systemd-networkd[1090]: veth245bad40: Link UP May 14 00:52:44.353350 kernel: cni0: port 2(veth245bad40) entered blocking state May 14 00:52:44.353393 kernel: cni0: port 2(veth245bad40) entered disabled state May 14 00:52:44.354118 kernel: device veth245bad40 entered promiscuous mode May 14 00:52:44.355541 kernel: cni0: port 2(veth245bad40) entered blocking state May 14 00:52:44.355571 kernel: cni0: port 2(veth245bad40) entered forwarding state May 14 00:52:44.356421 kernel: cni0: port 2(veth245bad40) entered disabled state May 14 00:52:44.362648 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:52:44.362713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth245bad40: link becomes ready May 14 00:52:44.362750 kernel: cni0: port 2(veth245bad40) entered blocking state May 14 00:52:44.362770 kernel: cni0: port 2(veth245bad40) entered forwarding state May 14 00:52:44.363368 systemd-networkd[1090]: veth245bad40: Gained carrier May 14 00:52:44.364705 env[1318]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018928), "name":"cbr0", "type":"bridge"} May 14 00:52:44.364705 env[1318]: delegateAdd: netconf sent to delegate plugin: May 14 00:52:44.373486 env[1318]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T00:52:44.373340274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:52:44.373486 env[1318]: time="2025-05-14T00:52:44.373378715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:52:44.373486 env[1318]: time="2025-05-14T00:52:44.373389595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:52:44.374099 env[1318]: time="2025-05-14T00:52:44.374053326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43035cf129cdf214b9076148e9fc101465f92aa06d7fa9e0a6afbc442235e5a4 pid=2970 runtime=io.containerd.runc.v2 May 14 00:52:44.403394 kubelet[2176]: E0514 00:52:44.402237 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:44.415777 kubelet[2176]: I0514 00:52:44.415217 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d7h76" podStartSLOduration=16.415199996 podStartE2EDuration="16.415199996s" podCreationTimestamp="2025-05-14 00:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:44.415085434 +0000 UTC m=+30.194484249" watchObservedRunningTime="2025-05-14 00:52:44.415199996 +0000 UTC m=+30.194598811" May 14 00:52:44.416164 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:52:44.418626 kubelet[2176]: I0514 00:52:44.418576 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-cbbhw" podStartSLOduration=12.75734509 podStartE2EDuration="16.418556571s" podCreationTimestamp="2025-05-14 00:52:28 +0000 UTC" firstStartedPulling="2025-05-14 00:52:28.550863783 +0000 UTC m=+14.330262598" lastFinishedPulling="2025-05-14 00:52:32.212075264 +0000 UTC m=+17.991474079" observedRunningTime="2025-05-14 00:52:34.391558325 +0000 UTC m=+20.170957180" watchObservedRunningTime="2025-05-14 00:52:44.418556571 +0000 UTC m=+30.197955426" May 14 00:52:44.441437 env[1318]: time="2025-05-14T00:52:44.441388943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5km2r,Uid:c5987e0a-5eb1-4788-8778-a3d097169564,Namespace:kube-system,Attempt:0,} returns sandbox id \"43035cf129cdf214b9076148e9fc101465f92aa06d7fa9e0a6afbc442235e5a4\"" May 14 00:52:44.443799 kubelet[2176]: E0514 00:52:44.443773 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:44.448464 env[1318]: time="2025-05-14T00:52:44.448300415Z" level=info msg="CreateContainer within sandbox \"43035cf129cdf214b9076148e9fc101465f92aa06d7fa9e0a6afbc442235e5a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:52:44.459528 env[1318]: time="2025-05-14T00:52:44.459482997Z" level=info msg="CreateContainer within sandbox \"43035cf129cdf214b9076148e9fc101465f92aa06d7fa9e0a6afbc442235e5a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5f0f8990532ffe1cf916541c3857851417f6dc1b80b4979052eee145ce90fb9\"" May 14 00:52:44.460198 env[1318]: time="2025-05-14T00:52:44.460167808Z" level=info msg="StartContainer for \"e5f0f8990532ffe1cf916541c3857851417f6dc1b80b4979052eee145ce90fb9\"" May 14 00:52:44.503593 env[1318]: time="2025-05-14T00:52:44.503537355Z" level=info msg="StartContainer for \"e5f0f8990532ffe1cf916541c3857851417f6dc1b80b4979052eee145ce90fb9\" returns successfully" May 14 00:52:44.765254 systemd-networkd[1090]: cni0: Gained IPv6LL May 14 00:52:45.149249 systemd-networkd[1090]: vethf2117686: Gained IPv6LL May 
14 00:52:45.341842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3359594688.mount: Deactivated successfully. May 14 00:52:45.405838 kubelet[2176]: E0514 00:52:45.405620 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:45.406652 kubelet[2176]: E0514 00:52:45.406627 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:45.417159 kubelet[2176]: I0514 00:52:45.417086 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5km2r" podStartSLOduration=17.417073309 podStartE2EDuration="17.417073309s" podCreationTimestamp="2025-05-14 00:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:52:45.416171054 +0000 UTC m=+31.195569909" watchObservedRunningTime="2025-05-14 00:52:45.417073309 +0000 UTC m=+31.196472124" May 14 00:52:45.496622 systemd[1]: Started sshd@6-10.0.0.129:22-10.0.0.1:35384.service. May 14 00:52:45.533208 sshd[3072]: Accepted publickey for core from 10.0.0.1 port 35384 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:45.535230 sshd[3072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:45.539533 systemd-logind[1303]: New session 7 of user core. May 14 00:52:45.540471 systemd[1]: Started session-7.scope. May 14 00:52:45.652489 sshd[3072]: pam_unix(sshd:session): session closed for user core May 14 00:52:45.654878 systemd[1]: sshd@6-10.0.0.129:22-10.0.0.1:35384.service: Deactivated successfully. May 14 00:52:45.655956 systemd[1]: session-7.scope: Deactivated successfully. May 14 00:52:45.656781 systemd-logind[1303]: Session 7 logged out. Waiting for processes to exit. May 14 00:52:45.657863 systemd-logind[1303]: Removed session 7. May 14 00:52:46.173214 systemd-networkd[1090]: veth245bad40: Gained IPv6LL May 14 00:52:46.406662 kubelet[2176]: E0514 00:52:46.406619 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:46.407388 kubelet[2176]: E0514 00:52:46.407363 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:47.408669 kubelet[2176]: E0514 00:52:47.408632 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:52:50.656234 systemd[1]: Started sshd@7-10.0.0.129:22-10.0.0.1:35390.service. May 14 00:52:50.693756 sshd[3109]: Accepted publickey for core from 10.0.0.1 port 35390 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:50.695344 sshd[3109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:50.699578 systemd-logind[1303]: New session 8 of user core. May 14 00:52:50.700417 systemd[1]: Started session-8.scope. May 14 00:52:50.825337 sshd[3109]: pam_unix(sshd:session): session closed for user core May 14 00:52:50.827647 systemd[1]: Started sshd@8-10.0.0.129:22-10.0.0.1:35396.service. 
May 14 00:52:50.829708 systemd-logind[1303]: Session 8 logged out. Waiting for processes to exit. May 14 00:52:50.829838 systemd[1]: sshd@7-10.0.0.129:22-10.0.0.1:35390.service: Deactivated successfully. May 14 00:52:50.830717 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:52:50.831201 systemd-logind[1303]: Removed session 8. May 14 00:52:50.867634 sshd[3122]: Accepted publickey for core from 10.0.0.1 port 35396 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:50.869389 sshd[3122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:50.873235 systemd-logind[1303]: New session 9 of user core. May 14 00:52:50.873795 systemd[1]: Started session-9.scope. May 14 00:52:51.018475 sshd[3122]: pam_unix(sshd:session): session closed for user core May 14 00:52:51.023582 systemd[1]: Started sshd@9-10.0.0.129:22-10.0.0.1:35408.service. May 14 00:52:51.024303 systemd[1]: sshd@8-10.0.0.129:22-10.0.0.1:35396.service: Deactivated successfully. May 14 00:52:51.029788 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:52:51.032640 systemd-logind[1303]: Session 9 logged out. Waiting for processes to exit. May 14 00:52:51.035643 systemd-logind[1303]: Removed session 9. May 14 00:52:51.064430 sshd[3135]: Accepted publickey for core from 10.0.0.1 port 35408 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:51.065848 sshd[3135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:51.069694 systemd-logind[1303]: New session 10 of user core. May 14 00:52:51.070051 systemd[1]: Started session-10.scope. May 14 00:52:51.178297 sshd[3135]: pam_unix(sshd:session): session closed for user core May 14 00:52:51.181457 systemd-logind[1303]: Session 10 logged out. Waiting for processes to exit. May 14 00:52:51.181648 systemd[1]: sshd@9-10.0.0.129:22-10.0.0.1:35408.service: Deactivated successfully. May 14 00:52:51.182558 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:52:51.182980 systemd-logind[1303]: Removed session 10. May 14 00:52:56.181324 systemd[1]: Started sshd@10-10.0.0.129:22-10.0.0.1:46484.service. May 14 00:52:56.217853 sshd[3172]: Accepted publickey for core from 10.0.0.1 port 46484 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:56.219693 sshd[3172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:56.225810 systemd-logind[1303]: New session 11 of user core. May 14 00:52:56.226301 systemd[1]: Started session-11.scope. May 14 00:52:56.353154 sshd[3172]: pam_unix(sshd:session): session closed for user core May 14 00:52:56.355611 systemd[1]: Started sshd@11-10.0.0.129:22-10.0.0.1:46498.service. May 14 00:52:56.356538 systemd[1]: sshd@10-10.0.0.129:22-10.0.0.1:46484.service: Deactivated successfully. May 14 00:52:56.358634 systemd-logind[1303]: Session 11 logged out. Waiting for processes to exit. May 14 00:52:56.359734 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:52:56.360959 systemd-logind[1303]: Removed session 11. May 14 00:52:56.399243 sshd[3185]: Accepted publickey for core from 10.0.0.1 port 46498 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:56.400541 sshd[3185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:56.404348 systemd-logind[1303]: New session 12 of user core. May 14 00:52:56.405271 systemd[1]: Started session-12.scope. 
May 14 00:52:56.584219 sshd[3185]: pam_unix(sshd:session): session closed for user core May 14 00:52:56.587766 systemd[1]: Started sshd@12-10.0.0.129:22-10.0.0.1:46504.service. May 14 00:52:56.588316 systemd[1]: sshd@11-10.0.0.129:22-10.0.0.1:46498.service: Deactivated successfully. May 14 00:52:56.592797 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:52:56.593949 systemd-logind[1303]: Session 12 logged out. Waiting for processes to exit. May 14 00:52:56.595010 systemd-logind[1303]: Removed session 12. May 14 00:52:56.625922 sshd[3198]: Accepted publickey for core from 10.0.0.1 port 46504 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:56.627246 sshd[3198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:56.630566 systemd-logind[1303]: New session 13 of user core. May 14 00:52:56.631338 systemd[1]: Started session-13.scope. May 14 00:52:57.960830 systemd[1]: Started sshd@13-10.0.0.129:22-10.0.0.1:46510.service. May 14 00:52:57.961437 sshd[3198]: pam_unix(sshd:session): session closed for user core May 14 00:52:57.966646 systemd[1]: sshd@12-10.0.0.129:22-10.0.0.1:46504.service: Deactivated successfully. May 14 00:52:57.969778 systemd-logind[1303]: Session 13 logged out. Waiting for processes to exit. May 14 00:52:57.969826 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:52:57.972754 systemd-logind[1303]: Removed session 13. May 14 00:52:58.003835 sshd[3217]: Accepted publickey for core from 10.0.0.1 port 46510 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:58.005164 sshd[3217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:58.008483 systemd-logind[1303]: New session 14 of user core. May 14 00:52:58.009299 systemd[1]: Started session-14.scope. May 14 00:52:58.213648 systemd[1]: Started sshd@14-10.0.0.129:22-10.0.0.1:46514.service. May 14 00:52:58.214088 sshd[3217]: pam_unix(sshd:session): session closed for user core May 14 00:52:58.216558 systemd[1]: sshd@13-10.0.0.129:22-10.0.0.1:46510.service: Deactivated successfully. May 14 00:52:58.218370 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:52:58.218418 systemd-logind[1303]: Session 14 logged out. Waiting for processes to exit. May 14 00:52:58.219660 systemd-logind[1303]: Removed session 14. May 14 00:52:58.251747 sshd[3230]: Accepted publickey for core from 10.0.0.1 port 46514 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:52:58.252968 sshd[3230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:52:58.256820 systemd-logind[1303]: New session 15 of user core. May 14 00:52:58.257266 systemd[1]: Started session-15.scope. May 14 00:52:58.364910 sshd[3230]: pam_unix(sshd:session): session closed for user core May 14 00:52:58.367483 systemd[1]: sshd@14-10.0.0.129:22-10.0.0.1:46514.service: Deactivated successfully. May 14 00:52:58.368430 systemd-logind[1303]: Session 15 logged out. Waiting for processes to exit. May 14 00:52:58.368485 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:52:58.369662 systemd-logind[1303]: Removed session 15. May 14 00:53:03.368374 systemd[1]: Started sshd@15-10.0.0.129:22-10.0.0.1:53752.service. 
May 14 00:53:03.404883 sshd[3272]: Accepted publickey for core from 10.0.0.1 port 53752 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:03.406447 sshd[3272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:03.409733 systemd-logind[1303]: New session 16 of user core. May 14 00:53:03.410617 systemd[1]: Started session-16.scope. May 14 00:53:03.521659 sshd[3272]: pam_unix(sshd:session): session closed for user core May 14 00:53:03.524948 systemd[1]: sshd@15-10.0.0.129:22-10.0.0.1:53752.service: Deactivated successfully. May 14 00:53:03.525883 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:53:03.525912 systemd-logind[1303]: Session 16 logged out. Waiting for processes to exit. May 14 00:53:03.526860 systemd-logind[1303]: Removed session 16. May 14 00:53:08.525207 systemd[1]: Started sshd@16-10.0.0.129:22-10.0.0.1:53764.service. May 14 00:53:08.562111 sshd[3307]: Accepted publickey for core from 10.0.0.1 port 53764 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:08.563634 sshd[3307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:08.567181 systemd-logind[1303]: New session 17 of user core. May 14 00:53:08.567598 systemd[1]: Started session-17.scope. May 14 00:53:08.673106 sshd[3307]: pam_unix(sshd:session): session closed for user core May 14 00:53:08.675618 systemd-logind[1303]: Session 17 logged out. Waiting for processes to exit. May 14 00:53:08.675820 systemd[1]: sshd@16-10.0.0.129:22-10.0.0.1:53764.service: Deactivated successfully. May 14 00:53:08.676636 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:53:08.677003 systemd-logind[1303]: Removed session 17. May 14 00:53:13.676694 systemd[1]: Started sshd@17-10.0.0.129:22-10.0.0.1:54288.service. May 14 00:53:13.718918 sshd[3343]: Accepted publickey for core from 10.0.0.1 port 54288 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:53:13.720654 sshd[3343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:53:13.725016 systemd-logind[1303]: New session 18 of user core. May 14 00:53:13.725929 systemd[1]: Started session-18.scope. May 14 00:53:13.839562 sshd[3343]: pam_unix(sshd:session): session closed for user core May 14 00:53:13.842809 systemd[1]: sshd@17-10.0.0.129:22-10.0.0.1:54288.service: Deactivated successfully. May 14 00:53:13.844087 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:53:13.844149 systemd-logind[1303]: Session 18 logged out. Waiting for processes to exit. May 14 00:53:13.845159 systemd-logind[1303]: Removed session 18.