May 14 00:43:51.754105 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 00:43:51.754123 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue May 13 23:17:31 -00 2025
May 14 00:43:51.754131 kernel: efi: EFI v2.70 by EDK II
May 14 00:43:51.754137 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 14 00:43:51.754142 kernel: random: crng init done
May 14 00:43:51.754147 kernel: ACPI: Early table checksum verification disabled
May 14 00:43:51.754154 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 14 00:43:51.754161 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 00:43:51.754166 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:43:51.754172 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:43:51.754177 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:43:51.754183 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:43:51.754188 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:43:51.754194 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:43:51.754202 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:43:51.754208 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:43:51.754214 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:43:51.754220 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 00:43:51.754225 kernel: NUMA: Failed to initialise from firmware
May 14 00:43:51.754231 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:43:51.754237 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 14 00:43:51.754243 kernel: Zone ranges:
May 14 00:43:51.754248 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:43:51.754255 kernel: DMA32 empty
May 14 00:43:51.754261 kernel: Normal empty
May 14 00:43:51.754277 kernel: Movable zone start for each node
May 14 00:43:51.754284 kernel: Early memory node ranges
May 14 00:43:51.754289 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 14 00:43:51.754295 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 14 00:43:51.754301 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 14 00:43:51.754307 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 14 00:43:51.754313 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 14 00:43:51.754318 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 14 00:43:51.754324 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 14 00:43:51.754329 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:43:51.754337 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 00:43:51.754343 kernel: psci: probing for conduit method from ACPI.
May 14 00:43:51.754349 kernel: psci: PSCIv1.1 detected in firmware.
May 14 00:43:51.754355 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 00:43:51.754360 kernel: psci: Trusted OS migration not required
May 14 00:43:51.754369 kernel: psci: SMC Calling Convention v1.1
May 14 00:43:51.754375 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 00:43:51.754382 kernel: ACPI: SRAT not present
May 14 00:43:51.754389 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 14 00:43:51.754395 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 14 00:43:51.754401 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 00:43:51.754407 kernel: Detected PIPT I-cache on CPU0
May 14 00:43:51.754414 kernel: CPU features: detected: GIC system register CPU interface
May 14 00:43:51.754420 kernel: CPU features: detected: Hardware dirty bit management
May 14 00:43:51.754426 kernel: CPU features: detected: Spectre-v4
May 14 00:43:51.754432 kernel: CPU features: detected: Spectre-BHB
May 14 00:43:51.754439 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 00:43:51.754445 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 00:43:51.754451 kernel: CPU features: detected: ARM erratum 1418040
May 14 00:43:51.754457 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 00:43:51.754464 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 14 00:43:51.754470 kernel: Policy zone: DMA
May 14 00:43:51.754477 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923
May 14 00:43:51.754484 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 00:43:51.754490 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 00:43:51.754496 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 00:43:51.754502 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 00:43:51.754510 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved)
May 14 00:43:51.754516 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 00:43:51.754522 kernel: trace event string verifier disabled
May 14 00:43:51.754528 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 00:43:51.754534 kernel: rcu: RCU event tracing is enabled.
May 14 00:43:51.754541 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 00:43:51.754547 kernel: Trampoline variant of Tasks RCU enabled.
May 14 00:43:51.754553 kernel: Tracing variant of Tasks RCU enabled.
May 14 00:43:51.754559 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 00:43:51.754565 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 00:43:51.754582 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 00:43:51.754589 kernel: GICv3: 256 SPIs implemented
May 14 00:43:51.754595 kernel: GICv3: 0 Extended SPIs implemented
May 14 00:43:51.754601 kernel: GICv3: Distributor has no Range Selector support
May 14 00:43:51.754607 kernel: Root IRQ handler: gic_handle_irq
May 14 00:43:51.754613 kernel: GICv3: 16 PPIs implemented
May 14 00:43:51.754619 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 00:43:51.754625 kernel: ACPI: SRAT not present
May 14 00:43:51.754631 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 00:43:51.754637 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 14 00:43:51.754643 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 14 00:43:51.754650 kernel: GICv3: using LPI property table @0x00000000400d0000
May 14 00:43:51.754656 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 14 00:43:51.754663 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:43:51.754669 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 00:43:51.754676 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 00:43:51.754682 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 00:43:51.754688 kernel: arm-pv: using stolen time PV
May 14 00:43:51.754694 kernel: Console: colour dummy device 80x25
May 14 00:43:51.754700 kernel: ACPI: Core revision 20210730
May 14 00:43:51.754707 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 00:43:51.754713 kernel: pid_max: default: 32768 minimum: 301
May 14 00:43:51.754719 kernel: LSM: Security Framework initializing
May 14 00:43:51.754739 kernel: SELinux: Initializing.
May 14 00:43:51.754745 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:43:51.754752 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:43:51.754758 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 14 00:43:51.754764 kernel: rcu: Hierarchical SRCU implementation.
May 14 00:43:51.754770 kernel: Platform MSI: ITS@0x8080000 domain created
May 14 00:43:51.754777 kernel: PCI/MSI: ITS@0x8080000 domain created
May 14 00:43:51.754783 kernel: Remapping and enabling EFI services.
May 14 00:43:51.754789 kernel: smp: Bringing up secondary CPUs ...
May 14 00:43:51.754796 kernel: Detected PIPT I-cache on CPU1
May 14 00:43:51.754803 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 00:43:51.754809 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 14 00:43:51.754816 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:43:51.754822 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 00:43:51.754828 kernel: Detected PIPT I-cache on CPU2
May 14 00:43:51.754834 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 00:43:51.754841 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 14 00:43:51.754847 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:43:51.754853 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 00:43:51.754861 kernel: Detected PIPT I-cache on CPU3
May 14 00:43:51.754867 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 00:43:51.754874 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 14 00:43:51.754880 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:43:51.754891 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 00:43:51.754898 kernel: smp: Brought up 1 node, 4 CPUs
May 14 00:43:51.754905 kernel: SMP: Total of 4 processors activated.
May 14 00:43:51.754911 kernel: CPU features: detected: 32-bit EL0 Support
May 14 00:43:51.754918 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 00:43:51.754925 kernel: CPU features: detected: Common not Private translations
May 14 00:43:51.754931 kernel: CPU features: detected: CRC32 instructions
May 14 00:43:51.754938 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 00:43:51.754946 kernel: CPU features: detected: LSE atomic instructions
May 14 00:43:51.754953 kernel: CPU features: detected: Privileged Access Never
May 14 00:43:51.754959 kernel: CPU features: detected: RAS Extension Support
May 14 00:43:51.754966 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 00:43:51.754973 kernel: CPU: All CPU(s) started at EL1
May 14 00:43:51.754980 kernel: alternatives: patching kernel code
May 14 00:43:51.754987 kernel: devtmpfs: initialized
May 14 00:43:51.754993 kernel: KASLR enabled
May 14 00:43:51.755000 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 00:43:51.755007 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 00:43:51.755013 kernel: pinctrl core: initialized pinctrl subsystem
May 14 00:43:51.755020 kernel: SMBIOS 3.0.0 present.
May 14 00:43:51.755027 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 14 00:43:51.755033 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 00:43:51.755041 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 00:43:51.755048 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 00:43:51.755055 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 00:43:51.755061 kernel: audit: initializing netlink subsys (disabled)
May 14 00:43:51.755068 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
May 14 00:43:51.755075 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 00:43:51.755081 kernel: cpuidle: using governor menu
May 14 00:43:51.755088 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 00:43:51.755095 kernel: ASID allocator initialised with 32768 entries
May 14 00:43:51.755102 kernel: ACPI: bus type PCI registered
May 14 00:43:51.755109 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 00:43:51.755115 kernel: Serial: AMBA PL011 UART driver
May 14 00:43:51.755122 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 14 00:43:51.755128 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 14 00:43:51.755135 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 14 00:43:51.755142 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 14 00:43:51.755148 kernel: cryptd: max_cpu_qlen set to 1000
May 14 00:43:51.755155 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 00:43:51.755162 kernel: ACPI: Added _OSI(Module Device)
May 14 00:43:51.755169 kernel: ACPI: Added _OSI(Processor Device)
May 14 00:43:51.755176 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 00:43:51.755182 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 00:43:51.755189 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 14 00:43:51.755195 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 14 00:43:51.755201 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 14 00:43:51.755208 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 00:43:51.755215 kernel: ACPI: Interpreter enabled
May 14 00:43:51.755222 kernel: ACPI: Using GIC for interrupt routing
May 14 00:43:51.755229 kernel: ACPI: MCFG table detected, 1 entries
May 14 00:43:51.755235 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 00:43:51.755242 kernel: printk: console [ttyAMA0] enabled
May 14 00:43:51.755249 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 00:43:51.755374 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 00:43:51.755438 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 00:43:51.755514 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 00:43:51.755574 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 00:43:51.755632 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 00:43:51.755641 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 00:43:51.755647 kernel: PCI host bridge to bus 0000:00
May 14 00:43:51.755719 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 00:43:51.755781 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 00:43:51.755834 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 00:43:51.755889 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 00:43:51.755961 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 14 00:43:51.756032 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 14 00:43:51.756093 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 14 00:43:51.756155 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 14 00:43:51.756283 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 00:43:51.756357 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 00:43:51.756468 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 14 00:43:51.756533 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 14 00:43:51.756589 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 00:43:51.756641 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 00:43:51.756694 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 00:43:51.756703 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 00:43:51.756710 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 00:43:51.756720 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 00:43:51.756742 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 00:43:51.756750 kernel: iommu: Default domain type: Translated
May 14 00:43:51.756757 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 00:43:51.756763 kernel: vgaarb: loaded
May 14 00:43:51.756770 kernel: pps_core: LinuxPPS API ver. 1 registered
May 14 00:43:51.756776 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 14 00:43:51.756783 kernel: PTP clock support registered
May 14 00:43:51.756789 kernel: Registered efivars operations
May 14 00:43:51.756798 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 00:43:51.756805 kernel: VFS: Disk quotas dquot_6.6.0
May 14 00:43:51.756811 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 00:43:51.756818 kernel: pnp: PnP ACPI init
May 14 00:43:51.756895 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 00:43:51.756905 kernel: pnp: PnP ACPI: found 1 devices
May 14 00:43:51.756911 kernel: NET: Registered PF_INET protocol family
May 14 00:43:51.756919 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 00:43:51.756928 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 00:43:51.756935 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 00:43:51.756942 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 00:43:51.756949 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 14 00:43:51.756956 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 00:43:51.756963 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:43:51.756969 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:43:51.756976 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 00:43:51.756982 kernel: PCI: CLS 0 bytes, default 64
May 14 00:43:51.756990 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 14 00:43:51.756997 kernel: kvm [1]: HYP mode not available
May 14 00:43:51.757004 kernel: Initialise system trusted keyrings
May 14 00:43:51.757010 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 00:43:51.757016 kernel: Key type asymmetric registered
May 14 00:43:51.757023 kernel: Asymmetric key parser 'x509' registered
May 14 00:43:51.757029 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 14 00:43:51.757036 kernel: io scheduler mq-deadline registered
May 14 00:43:51.757042 kernel: io scheduler kyber registered
May 14 00:43:51.757050 kernel: io scheduler bfq registered
May 14 00:43:51.757057 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 00:43:51.757063 kernel: ACPI: button: Power Button [PWRB]
May 14 00:43:51.757070 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 00:43:51.757176 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 00:43:51.757186 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 00:43:51.757193 kernel: thunder_xcv, ver 1.0
May 14 00:43:51.757199 kernel: thunder_bgx, ver 1.0
May 14 00:43:51.757206 kernel: nicpf, ver 1.0
May 14 00:43:51.757214 kernel: nicvf, ver 1.0
May 14 00:43:51.757287 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 00:43:51.757348 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:43:51 UTC (1747183431)
May 14 00:43:51.757357 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 00:43:51.757364 kernel: NET: Registered PF_INET6 protocol family
May 14 00:43:51.757370 kernel: Segment Routing with IPv6
May 14 00:43:51.757377 kernel: In-situ OAM (IOAM) with IPv6
May 14 00:43:51.757383 kernel: NET: Registered PF_PACKET protocol family
May 14 00:43:51.757392 kernel: Key type dns_resolver registered
May 14 00:43:51.757398 kernel: registered taskstats version 1
May 14 00:43:51.757405 kernel: Loading compiled-in X.509 certificates
May 14 00:43:51.757411 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 7727f4e7680a5b8534f3d5e7bb84b1f695e8c34b'
May 14 00:43:51.757418 kernel: Key type .fscrypt registered
May 14 00:43:51.757424 kernel: Key type fscrypt-provisioning registered
May 14 00:43:51.757431 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 00:43:51.757437 kernel: ima: Allocated hash algorithm: sha1
May 14 00:43:51.757444 kernel: ima: No architecture policies found
May 14 00:43:51.757451 kernel: clk: Disabling unused clocks
May 14 00:43:51.757458 kernel: Freeing unused kernel memory: 36480K
May 14 00:43:51.757464 kernel: Run /init as init process
May 14 00:43:51.757471 kernel: with arguments:
May 14 00:43:51.757477 kernel: /init
May 14 00:43:51.757484 kernel: with environment:
May 14 00:43:51.757490 kernel: HOME=/
May 14 00:43:51.757496 kernel: TERM=linux
May 14 00:43:51.757502 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 00:43:51.757512 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 14 00:43:51.757521 systemd[1]: Detected virtualization kvm.
May 14 00:43:51.757528 systemd[1]: Detected architecture arm64.
May 14 00:43:51.757535 systemd[1]: Running in initrd.
May 14 00:43:51.757542 systemd[1]: No hostname configured, using default hostname.
May 14 00:43:51.757549 systemd[1]: Hostname set to .
May 14 00:43:51.757557 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:43:51.757565 systemd[1]: Queued start job for default target initrd.target.
May 14 00:43:51.757572 systemd[1]: Started systemd-ask-password-console.path.
May 14 00:43:51.757579 systemd[1]: Reached target cryptsetup.target.
May 14 00:43:51.757585 systemd[1]: Reached target paths.target.
May 14 00:43:51.757593 systemd[1]: Reached target slices.target.
May 14 00:43:51.757600 systemd[1]: Reached target swap.target.
May 14 00:43:51.757607 systemd[1]: Reached target timers.target.
May 14 00:43:51.757614 systemd[1]: Listening on iscsid.socket.
May 14 00:43:51.757622 systemd[1]: Listening on iscsiuio.socket.
May 14 00:43:51.757630 systemd[1]: Listening on systemd-journald-audit.socket.
May 14 00:43:51.757637 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 14 00:43:51.757644 systemd[1]: Listening on systemd-journald.socket.
May 14 00:43:51.757651 systemd[1]: Listening on systemd-networkd.socket.
May 14 00:43:51.757658 systemd[1]: Listening on systemd-udevd-control.socket.
May 14 00:43:51.757665 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 14 00:43:51.757672 systemd[1]: Reached target sockets.target.
May 14 00:43:51.757681 systemd[1]: Starting kmod-static-nodes.service...
May 14 00:43:51.757688 systemd[1]: Finished network-cleanup.service.
May 14 00:43:51.757695 systemd[1]: Starting systemd-fsck-usr.service...
May 14 00:43:51.757702 systemd[1]: Starting systemd-journald.service...
May 14 00:43:51.757709 systemd[1]: Starting systemd-modules-load.service...
May 14 00:43:51.757716 systemd[1]: Starting systemd-resolved.service...
May 14 00:43:51.757735 systemd[1]: Starting systemd-vconsole-setup.service...
May 14 00:43:51.757744 systemd[1]: Finished kmod-static-nodes.service.
May 14 00:43:51.757751 systemd[1]: Finished systemd-fsck-usr.service.
May 14 00:43:51.757760 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 14 00:43:51.757767 systemd[1]: Finished systemd-vconsole-setup.service.
May 14 00:43:51.757775 kernel: audit: type=1130 audit(1747183431.752:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.757783 systemd[1]: Starting dracut-cmdline-ask.service...
May 14 00:43:51.757790 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 14 00:43:51.757800 systemd-journald[289]: Journal started
May 14 00:43:51.757842 systemd-journald[289]: Runtime Journal (/run/log/journal/31389bfcb45f4f97b4f8507e18519ffb) is 6.0M, max 48.7M, 42.6M free.
May 14 00:43:51.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.750086 systemd-modules-load[290]: Inserted module 'overlay'
May 14 00:43:51.762560 kernel: audit: type=1130 audit(1747183431.758:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.762579 systemd[1]: Started systemd-journald.service.
May 14 00:43:51.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.765903 kernel: audit: type=1130 audit(1747183431.763:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.775208 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 00:43:51.775925 systemd[1]: Finished dracut-cmdline-ask.service.
May 14 00:43:51.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.777563 systemd[1]: Starting dracut-cmdline.service...
May 14 00:43:51.782213 kernel: audit: type=1130 audit(1747183431.775:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.782232 kernel: Bridge firewalling registered
May 14 00:43:51.782187 systemd-modules-load[290]: Inserted module 'br_netfilter'
May 14 00:43:51.786160 systemd-resolved[291]: Positive Trust Anchors:
May 14 00:43:51.786174 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:43:51.786202 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 14 00:43:51.790376 systemd-resolved[291]: Defaulting to hostname 'linux'.
May 14 00:43:51.796344 dracut-cmdline[308]: dracut-dracut-053
May 14 00:43:51.800348 kernel: SCSI subsystem initialized
May 14 00:43:51.800367 kernel: audit: type=1130 audit(1747183431.796:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.791117 systemd[1]: Started systemd-resolved.service.
May 14 00:43:51.801687 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923
May 14 00:43:51.797518 systemd[1]: Reached target nss-lookup.target.
May 14 00:43:51.809284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 00:43:51.809312 kernel: device-mapper: uevent: version 1.0.3
May 14 00:43:51.810735 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 14 00:43:51.812899 systemd-modules-load[290]: Inserted module 'dm_multipath'
May 14 00:43:51.813665 systemd[1]: Finished systemd-modules-load.service.
May 14 00:43:51.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.815259 systemd[1]: Starting systemd-sysctl.service...
May 14 00:43:51.818889 kernel: audit: type=1130 audit(1747183431.813:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.823712 systemd[1]: Finished systemd-sysctl.service.
May 14 00:43:51.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.827746 kernel: audit: type=1130 audit(1747183431.823:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.864745 kernel: Loading iSCSI transport class v2.0-870.
May 14 00:43:51.876757 kernel: iscsi: registered transport (tcp)
May 14 00:43:51.893764 kernel: iscsi: registered transport (qla4xxx)
May 14 00:43:51.893808 kernel: QLogic iSCSI HBA Driver
May 14 00:43:51.927883 systemd[1]: Finished dracut-cmdline.service.
May 14 00:43:51.931801 kernel: audit: type=1130 audit(1747183431.927:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:51.929540 systemd[1]: Starting dracut-pre-udev.service...
May 14 00:43:51.972757 kernel: raid6: neonx8 gen() 13669 MB/s
May 14 00:43:51.989744 kernel: raid6: neonx8 xor() 10721 MB/s
May 14 00:43:52.006748 kernel: raid6: neonx4 gen() 13497 MB/s
May 14 00:43:52.023747 kernel: raid6: neonx4 xor() 11069 MB/s
May 14 00:43:52.040742 kernel: raid6: neonx2 gen() 12895 MB/s
May 14 00:43:52.057747 kernel: raid6: neonx2 xor() 10384 MB/s
May 14 00:43:52.074746 kernel: raid6: neonx1 gen() 10545 MB/s
May 14 00:43:52.091750 kernel: raid6: neonx1 xor() 8744 MB/s
May 14 00:43:52.108745 kernel: raid6: int64x8 gen() 6241 MB/s
May 14 00:43:52.125748 kernel: raid6: int64x8 xor() 3529 MB/s
May 14 00:43:52.142749 kernel: raid6: int64x4 gen() 7177 MB/s
May 14 00:43:52.159746 kernel: raid6: int64x4 xor() 3845 MB/s
May 14 00:43:52.176759 kernel: raid6: int64x2 gen() 6109 MB/s
May 14 00:43:52.193745 kernel: raid6: int64x2 xor() 3314 MB/s
May 14 00:43:52.210748 kernel: raid6: int64x1 gen() 5040 MB/s
May 14 00:43:52.227916 kernel: raid6: int64x1 xor() 2641 MB/s
May 14 00:43:52.227929 kernel: raid6: using algorithm neonx8 gen() 13669 MB/s
May 14 00:43:52.227937 kernel: raid6: .... xor() 10721 MB/s, rmw enabled
May 14 00:43:52.229009 kernel: raid6: using neon recovery algorithm
May 14 00:43:52.240271 kernel: xor: measuring software checksum speed
May 14 00:43:52.240295 kernel: 8regs : 17246 MB/sec
May 14 00:43:52.240311 kernel: 32regs : 20707 MB/sec
May 14 00:43:52.240884 kernel: arm64_neon : 26130 MB/sec
May 14 00:43:52.240895 kernel: xor: using function: arm64_neon (26130 MB/sec)
May 14 00:43:52.299744 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 14 00:43:52.310145 systemd[1]: Finished dracut-pre-udev.service.
May 14 00:43:52.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:52.313000 audit: BPF prog-id=7 op=LOAD
May 14 00:43:52.313000 audit: BPF prog-id=8 op=LOAD
May 14 00:43:52.314429 systemd[1]: Starting systemd-udevd.service...
May 14 00:43:52.315868 kernel: audit: type=1130 audit(1747183432.310:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:52.328190 systemd-udevd[492]: Using default interface naming scheme 'v252'.
May 14 00:43:52.331481 systemd[1]: Started systemd-udevd.service.
May 14 00:43:52.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:52.334319 systemd[1]: Starting dracut-pre-trigger.service...
May 14 00:43:52.346019 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation
May 14 00:43:52.372312 systemd[1]: Finished dracut-pre-trigger.service.
May 14 00:43:52.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:52.373934 systemd[1]: Starting systemd-udev-trigger.service...
May 14 00:43:52.411341 systemd[1]: Finished systemd-udev-trigger.service.
May 14 00:43:52.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:52.442415 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 00:43:52.447735 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 00:43:52.447751 kernel: GPT:9289727 != 19775487
May 14 00:43:52.447760 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 00:43:52.447769 kernel: GPT:9289727 != 19775487 May 14 00:43:52.447783 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 00:43:52.447792 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:43:52.459751 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (543) May 14 00:43:52.460745 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 14 00:43:52.462269 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 14 00:43:52.467083 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 14 00:43:52.472291 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 14 00:43:52.475814 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:43:52.477460 systemd[1]: Starting disk-uuid.service... May 14 00:43:52.483360 disk-uuid[562]: Primary Header is updated. May 14 00:43:52.483360 disk-uuid[562]: Secondary Entries is updated. May 14 00:43:52.483360 disk-uuid[562]: Secondary Header is updated. May 14 00:43:52.487765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:43:53.496494 disk-uuid[563]: The operation has completed successfully. May 14 00:43:53.497625 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:43:53.522817 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 00:43:53.523880 systemd[1]: Finished disk-uuid.service. May 14 00:43:53.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.525423 systemd[1]: Starting verity-setup.service... 
May 14 00:43:53.540749 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 00:43:53.562341 systemd[1]: Found device dev-mapper-usr.device. May 14 00:43:53.564619 systemd[1]: Mounting sysusr-usr.mount... May 14 00:43:53.566488 systemd[1]: Finished verity-setup.service. May 14 00:43:53.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.612417 systemd[1]: Mounted sysusr-usr.mount. May 14 00:43:53.613821 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 14 00:43:53.613341 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 14 00:43:53.614070 systemd[1]: Starting ignition-setup.service... May 14 00:43:53.616403 systemd[1]: Starting parse-ip-for-networkd.service... May 14 00:43:53.623736 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:43:53.623770 kernel: BTRFS info (device vda6): using free space tree May 14 00:43:53.623780 kernel: BTRFS info (device vda6): has skinny extents May 14 00:43:53.630556 systemd[1]: mnt-oem.mount: Deactivated successfully. May 14 00:43:53.636347 systemd[1]: Finished ignition-setup.service. May 14 00:43:53.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.637960 systemd[1]: Starting ignition-fetch-offline.service... May 14 00:43:53.703703 systemd[1]: Finished parse-ip-for-networkd.service. May 14 00:43:53.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:43:53.704000 audit: BPF prog-id=9 op=LOAD May 14 00:43:53.705969 systemd[1]: Starting systemd-networkd.service... May 14 00:43:53.715540 ignition[647]: Ignition 2.14.0 May 14 00:43:53.715550 ignition[647]: Stage: fetch-offline May 14 00:43:53.715589 ignition[647]: no configs at "/usr/lib/ignition/base.d" May 14 00:43:53.715598 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:43:53.715743 ignition[647]: parsed url from cmdline: "" May 14 00:43:53.715746 ignition[647]: no config URL provided May 14 00:43:53.715750 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:43:53.715758 ignition[647]: no config at "/usr/lib/ignition/user.ign" May 14 00:43:53.715777 ignition[647]: op(1): [started] loading QEMU firmware config module May 14 00:43:53.715782 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 00:43:53.722975 ignition[647]: op(1): [finished] loading QEMU firmware config module May 14 00:43:53.728187 systemd-networkd[739]: lo: Link UP May 14 00:43:53.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.728201 systemd-networkd[739]: lo: Gained carrier May 14 00:43:53.728813 systemd-networkd[739]: Enumeration completed May 14 00:43:53.729047 systemd[1]: Started systemd-networkd.service. May 14 00:43:53.729162 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:43:53.730238 systemd[1]: Reached target network.target. May 14 00:43:53.730541 systemd-networkd[739]: eth0: Link UP May 14 00:43:53.730544 systemd-networkd[739]: eth0: Gained carrier May 14 00:43:53.731778 systemd[1]: Starting iscsiuio.service... May 14 00:43:53.740666 systemd[1]: Started iscsiuio.service. 
May 14 00:43:53.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.742226 systemd[1]: Starting iscsid.service... May 14 00:43:53.745448 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 14 00:43:53.745448 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 14 00:43:53.745448 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 14 00:43:53.745448 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. May 14 00:43:53.745448 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 14 00:43:53.745448 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 14 00:43:53.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.748297 systemd[1]: Started iscsid.service. May 14 00:43:53.752517 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:43:53.754502 systemd[1]: Starting dracut-initqueue.service... May 14 00:43:53.764502 systemd[1]: Finished dracut-initqueue.service. May 14 00:43:53.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:43:53.765471 systemd[1]: Reached target remote-fs-pre.target. May 14 00:43:53.764635 ignition[647]: parsing config with SHA512: 96a0fc3c1846ab4af8279270980cfa18c0c02d0bc0865956f2c9f09ad8611c6c01b5c784e7cbc95017d83e655ee62ce97539d9fe0e6d87695a60f5d9f2d6d693 May 14 00:43:53.766957 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:43:53.768930 systemd[1]: Reached target remote-fs.target. May 14 00:43:53.772189 systemd[1]: Starting dracut-pre-mount.service... May 14 00:43:53.774858 unknown[647]: fetched base config from "system" May 14 00:43:53.774872 unknown[647]: fetched user config from "qemu" May 14 00:43:53.775364 ignition[647]: fetch-offline: fetch-offline passed May 14 00:43:53.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.776497 systemd[1]: Finished ignition-fetch-offline.service. May 14 00:43:53.775423 ignition[647]: Ignition finished successfully May 14 00:43:53.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.777544 systemd-resolved[291]: Detected conflict on linux IN A 10.0.0.78 May 14 00:43:53.777553 systemd-resolved[291]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. May 14 00:43:53.778179 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 00:43:53.778871 systemd[1]: Starting ignition-kargs.service... May 14 00:43:53.780090 systemd[1]: Finished dracut-pre-mount.service. 
May 14 00:43:53.787331 ignition[760]: Ignition 2.14.0 May 14 00:43:53.787344 ignition[760]: Stage: kargs May 14 00:43:53.787428 ignition[760]: no configs at "/usr/lib/ignition/base.d" May 14 00:43:53.787438 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:43:53.789585 systemd[1]: Finished ignition-kargs.service. May 14 00:43:53.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.788557 ignition[760]: kargs: kargs passed May 14 00:43:53.788599 ignition[760]: Ignition finished successfully May 14 00:43:53.791848 systemd[1]: Starting ignition-disks.service... May 14 00:43:53.797634 ignition[766]: Ignition 2.14.0 May 14 00:43:53.797644 ignition[766]: Stage: disks May 14 00:43:53.797747 ignition[766]: no configs at "/usr/lib/ignition/base.d" May 14 00:43:53.799798 systemd[1]: Finished ignition-disks.service. May 14 00:43:53.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.797757 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:43:53.801324 systemd[1]: Reached target initrd-root-device.target. May 14 00:43:53.798761 ignition[766]: disks: disks passed May 14 00:43:53.802630 systemd[1]: Reached target local-fs-pre.target. May 14 00:43:53.798805 ignition[766]: Ignition finished successfully May 14 00:43:53.804219 systemd[1]: Reached target local-fs.target. May 14 00:43:53.805522 systemd[1]: Reached target sysinit.target. May 14 00:43:53.806628 systemd[1]: Reached target basic.target. May 14 00:43:53.808655 systemd[1]: Starting systemd-fsck-root.service... 
May 14 00:43:53.819053 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 14 00:43:53.822572 systemd[1]: Finished systemd-fsck-root.service. May 14 00:43:53.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.824089 systemd[1]: Mounting sysroot.mount... May 14 00:43:53.831421 systemd[1]: Mounted sysroot.mount. May 14 00:43:53.832585 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 14 00:43:53.832116 systemd[1]: Reached target initrd-root-fs.target. May 14 00:43:53.834221 systemd[1]: Mounting sysroot-usr.mount... May 14 00:43:53.835047 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 14 00:43:53.835084 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 00:43:53.835107 systemd[1]: Reached target ignition-diskful.target. May 14 00:43:53.836918 systemd[1]: Mounted sysroot-usr.mount. May 14 00:43:53.838791 systemd[1]: Starting initrd-setup-root.service... May 14 00:43:53.842799 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory May 14 00:43:53.846606 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory May 14 00:43:53.850548 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory May 14 00:43:53.854374 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory May 14 00:43:53.879517 systemd[1]: Finished initrd-setup-root.service. May 14 00:43:53.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:43:53.881004 systemd[1]: Starting ignition-mount.service... May 14 00:43:53.882250 systemd[1]: Starting sysroot-boot.service... May 14 00:43:53.886205 bash[825]: umount: /sysroot/usr/share/oem: not mounted. May 14 00:43:53.894974 ignition[827]: INFO : Ignition 2.14.0 May 14 00:43:53.894974 ignition[827]: INFO : Stage: mount May 14 00:43:53.897124 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:43:53.897124 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:43:53.897124 ignition[827]: INFO : mount: mount passed May 14 00:43:53.897124 ignition[827]: INFO : Ignition finished successfully May 14 00:43:53.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:53.899246 systemd[1]: Finished ignition-mount.service. May 14 00:43:53.902549 systemd[1]: Finished sysroot-boot.service. May 14 00:43:53.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:54.572998 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 14 00:43:54.578739 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) May 14 00:43:54.580975 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:43:54.580989 kernel: BTRFS info (device vda6): using free space tree May 14 00:43:54.581004 kernel: BTRFS info (device vda6): has skinny extents May 14 00:43:54.584299 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 14 00:43:54.586701 systemd[1]: Starting ignition-files.service... 
May 14 00:43:54.600064 ignition[856]: INFO : Ignition 2.14.0 May 14 00:43:54.600064 ignition[856]: INFO : Stage: files May 14 00:43:54.601682 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:43:54.601682 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:43:54.601682 ignition[856]: DEBUG : files: compiled without relabeling support, skipping May 14 00:43:54.606940 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:43:54.606940 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:43:54.609822 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:43:54.609822 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:43:54.609822 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:43:54.609822 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 00:43:54.609327 unknown[856]: wrote ssh authorized keys file for user: core May 14 00:43:54.616645 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 00:43:54.653334 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 00:43:54.849454 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 00:43:54.849454 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 
00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 00:43:54.853474 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 14 00:43:55.191119 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 00:43:55.547950 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 00:43:55.547950 ignition[856]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 14 00:43:55.551969 ignition[856]: INFO : files: op(10): op(11): [started] removing enablement 
symlink(s) for "coreos-metadata.service" May 14 00:43:55.566194 systemd-networkd[739]: eth0: Gained IPv6LL May 14 00:43:55.580175 ignition[856]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:43:55.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.583742 ignition[856]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 14 00:43:55.583742 ignition[856]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:43:55.583742 ignition[856]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:43:55.583742 ignition[856]: INFO : files: files passed May 14 00:43:55.583742 ignition[856]: INFO : Ignition finished successfully May 14 00:43:55.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.582681 systemd[1]: Finished ignition-files.service. May 14 00:43:55.584420 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
May 14 00:43:55.585881 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 14 00:43:55.598954 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 14 00:43:55.586539 systemd[1]: Starting ignition-quench.service... May 14 00:43:55.601266 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:43:55.590196 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:43:55.590302 systemd[1]: Finished ignition-quench.service. May 14 00:43:55.592953 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 14 00:43:55.593972 systemd[1]: Reached target ignition-complete.target. May 14 00:43:55.595431 systemd[1]: Starting initrd-parse-etc.service... May 14 00:43:55.607654 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:43:55.607750 systemd[1]: Finished initrd-parse-etc.service. May 14 00:43:55.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.609477 systemd[1]: Reached target initrd-fs.target. May 14 00:43:55.610770 systemd[1]: Reached target initrd.target. May 14 00:43:55.612103 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 14 00:43:55.612807 systemd[1]: Starting dracut-pre-pivot.service... May 14 00:43:55.623349 systemd[1]: Finished dracut-pre-pivot.service. 
May 14 00:43:55.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.624869 systemd[1]: Starting initrd-cleanup.service... May 14 00:43:55.632304 systemd[1]: Stopped target nss-lookup.target. May 14 00:43:55.633171 systemd[1]: Stopped target remote-cryptsetup.target. May 14 00:43:55.634577 systemd[1]: Stopped target timers.target. May 14 00:43:55.635962 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:43:55.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.636061 systemd[1]: Stopped dracut-pre-pivot.service. May 14 00:43:55.637357 systemd[1]: Stopped target initrd.target. May 14 00:43:55.638780 systemd[1]: Stopped target basic.target. May 14 00:43:55.640065 systemd[1]: Stopped target ignition-complete.target. May 14 00:43:55.641435 systemd[1]: Stopped target ignition-diskful.target. May 14 00:43:55.642754 systemd[1]: Stopped target initrd-root-device.target. May 14 00:43:55.644274 systemd[1]: Stopped target remote-fs.target. May 14 00:43:55.645604 systemd[1]: Stopped target remote-fs-pre.target. May 14 00:43:55.647029 systemd[1]: Stopped target sysinit.target. May 14 00:43:55.648286 systemd[1]: Stopped target local-fs.target. May 14 00:43:55.649600 systemd[1]: Stopped target local-fs-pre.target. May 14 00:43:55.650887 systemd[1]: Stopped target swap.target. May 14 00:43:55.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.652073 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
May 14 00:43:55.652178 systemd[1]: Stopped dracut-pre-mount.service. May 14 00:43:55.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.653526 systemd[1]: Stopped target cryptsetup.target. May 14 00:43:55.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.654706 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:43:55.654824 systemd[1]: Stopped dracut-initqueue.service. May 14 00:43:55.656285 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:43:55.656380 systemd[1]: Stopped ignition-fetch-offline.service. May 14 00:43:55.657686 systemd[1]: Stopped target paths.target. May 14 00:43:55.658905 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:43:55.662875 systemd[1]: Stopped systemd-ask-password-console.path. May 14 00:43:55.663777 systemd[1]: Stopped target slices.target. May 14 00:43:55.665323 systemd[1]: Stopped target sockets.target. May 14 00:43:55.666693 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:43:55.666791 systemd[1]: Closed iscsid.socket. May 14 00:43:55.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.667909 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:43:55.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.667971 systemd[1]: Closed iscsiuio.socket. 
May 14 00:43:55.669087 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:43:55.669184 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 14 00:43:55.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.670634 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:43:55.670740 systemd[1]: Stopped ignition-files.service. May 14 00:43:55.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.680449 ignition[896]: INFO : Ignition 2.14.0 May 14 00:43:55.680449 ignition[896]: INFO : Stage: umount May 14 00:43:55.680449 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:43:55.680449 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:43:55.680449 ignition[896]: INFO : umount: umount passed May 14 00:43:55.680449 ignition[896]: INFO : Ignition finished successfully May 14 00:43:55.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:55.672549 systemd[1]: Stopping ignition-mount.service... 
May 14 00:43:55.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.673699 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 00:43:55.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.673836 systemd[1]: Stopped kmod-static-nodes.service.
May 14 00:43:55.676124 systemd[1]: Stopping sysroot-boot.service...
May 14 00:43:55.678027 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 00:43:55.678147 systemd[1]: Stopped systemd-udev-trigger.service.
May 14 00:43:55.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.679690 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 00:43:55.679800 systemd[1]: Stopped dracut-pre-trigger.service.
May 14 00:43:55.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.682512 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 00:43:55.682590 systemd[1]: Stopped ignition-mount.service.
May 14 00:43:55.684564 systemd[1]: Stopped target network.target.
May 14 00:43:55.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.685972 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 00:43:55.686023 systemd[1]: Stopped ignition-disks.service.
May 14 00:43:55.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.688607 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 00:43:55.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.688647 systemd[1]: Stopped ignition-kargs.service.
May 14 00:43:55.690186 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 00:43:55.690228 systemd[1]: Stopped ignition-setup.service.
May 14 00:43:55.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.691668 systemd[1]: Stopping systemd-networkd.service...
May 14 00:43:55.694020 systemd[1]: Stopping systemd-resolved.service...
May 14 00:43:55.718000 audit: BPF prog-id=6 op=UNLOAD
May 14 00:43:55.696085 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 00:43:55.696549 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 00:43:55.696626 systemd[1]: Finished initrd-cleanup.service.
May 14 00:43:55.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.699848 systemd-networkd[739]: eth0: DHCPv6 lease lost
May 14 00:43:55.722000 audit: BPF prog-id=9 op=UNLOAD
May 14 00:43:55.701685 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 00:43:55.701789 systemd[1]: Stopped systemd-networkd.service.
May 14 00:43:55.703586 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 00:43:55.703615 systemd[1]: Closed systemd-networkd.socket.
May 14 00:43:55.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.705493 systemd[1]: Stopping network-cleanup.service...
May 14 00:43:55.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.706177 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 00:43:55.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.706233 systemd[1]: Stopped parse-ip-for-networkd.service.
May 14 00:43:55.707627 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 00:43:55.707668 systemd[1]: Stopped systemd-sysctl.service.
May 14 00:43:55.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.711215 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 00:43:55.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.711266 systemd[1]: Stopped systemd-modules-load.service.
May 14 00:43:55.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.712258 systemd[1]: Stopping systemd-udevd.service...
May 14 00:43:55.714337 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 00:43:55.714774 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 00:43:55.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.714857 systemd[1]: Stopped systemd-resolved.service.
May 14 00:43:55.720457 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 00:43:55.720550 systemd[1]: Stopped network-cleanup.service.
May 14 00:43:55.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.725752 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 00:43:55.725832 systemd[1]: Stopped sysroot-boot.service.
May 14 00:43:55.726978 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 00:43:55.727015 systemd[1]: Stopped initrd-setup-root.service.
May 14 00:43:55.728383 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 00:43:55.728490 systemd[1]: Stopped systemd-udevd.service.
May 14 00:43:55.729655 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 00:43:55.729686 systemd[1]: Closed systemd-udevd-control.socket.
May 14 00:43:55.731126 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 00:43:55.731156 systemd[1]: Closed systemd-udevd-kernel.socket.
May 14 00:43:55.732430 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 00:43:55.732469 systemd[1]: Stopped dracut-pre-udev.service.
May 14 00:43:55.733867 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 00:43:55.733905 systemd[1]: Stopped dracut-cmdline.service.
May 14 00:43:55.735268 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:43:55.735306 systemd[1]: Stopped dracut-cmdline-ask.service.
May 14 00:43:55.737343 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 14 00:43:55.738676 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:43:55.738775 systemd[1]: Stopped systemd-vconsole-setup.service.
May 14 00:43:55.742502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 00:43:55.742579 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 14 00:43:55.743743 systemd[1]: Reached target initrd-switch-root.target.
May 14 00:43:55.745519 systemd[1]: Starting initrd-switch-root.service...
May 14 00:43:55.751502 systemd[1]: Switching root.
May 14 00:43:55.769262 iscsid[746]: iscsid shutting down.
May 14 00:43:55.769880 systemd-journald[289]: Received SIGTERM from PID 1 (systemd).
May 14 00:43:55.769919 systemd-journald[289]: Journal stopped
May 14 00:43:57.808665 kernel: SELinux: Class mctp_socket not defined in policy.
May 14 00:43:57.808714 kernel: SELinux: Class anon_inode not defined in policy.
May 14 00:43:57.808744 kernel: SELinux: the above unknown classes and permissions will be allowed
May 14 00:43:57.808759 kernel: SELinux: policy capability network_peer_controls=1
May 14 00:43:57.808770 kernel: SELinux: policy capability open_perms=1
May 14 00:43:57.808779 kernel: SELinux: policy capability extended_socket_class=1
May 14 00:43:57.808789 kernel: SELinux: policy capability always_check_network=0
May 14 00:43:57.808799 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 00:43:57.808814 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 00:43:57.808829 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 00:43:57.808839 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 00:43:57.808851 systemd[1]: Successfully loaded SELinux policy in 37.222ms.
May 14 00:43:57.808872 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.038ms.
May 14 00:43:57.808884 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 14 00:43:57.808896 systemd[1]: Detected virtualization kvm.
May 14 00:43:57.808907 systemd[1]: Detected architecture arm64.
May 14 00:43:57.808917 systemd[1]: Detected first boot.
May 14 00:43:57.808930 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:43:57.808942 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 14 00:43:57.808956 systemd[1]: Populated /etc with preset unit settings.
May 14 00:43:57.808967 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 14 00:43:57.808979 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 14 00:43:57.808992 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:43:57.809003 systemd[1]: iscsiuio.service: Deactivated successfully.
May 14 00:43:57.809014 systemd[1]: Stopped iscsiuio.service.
May 14 00:43:57.809026 kernel: kauditd_printk_skb: 78 callbacks suppressed
May 14 00:43:57.809036 kernel: audit: type=1334 audit(1747183437.638:82): prog-id=12 op=LOAD
May 14 00:43:57.809046 kernel: audit: type=1334 audit(1747183437.638:83): prog-id=3 op=UNLOAD
May 14 00:43:57.809056 kernel: audit: type=1334 audit(1747183437.638:84): prog-id=13 op=LOAD
May 14 00:43:57.809066 kernel: audit: type=1334 audit(1747183437.638:85): prog-id=14 op=LOAD
May 14 00:43:57.809077 kernel: audit: type=1334 audit(1747183437.638:86): prog-id=4 op=UNLOAD
May 14 00:43:57.809087 kernel: audit: type=1334 audit(1747183437.638:87): prog-id=5 op=UNLOAD
May 14 00:43:57.809097 kernel: audit: type=1131 audit(1747183437.640:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.809109 systemd[1]: iscsid.service: Deactivated successfully.
May 14 00:43:57.809120 kernel: audit: type=1131 audit(1747183437.647:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.809131 systemd[1]: Stopped iscsid.service.
May 14 00:43:57.809142 kernel: audit: type=1131 audit(1747183437.654:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.809155 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 00:43:57.809168 systemd[1]: Stopped initrd-switch-root.service.
May 14 00:43:57.809178 kernel: audit: type=1334 audit(1747183437.659:91): prog-id=12 op=UNLOAD
May 14 00:43:57.809190 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 00:43:57.809202 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 14 00:43:57.809213 systemd[1]: Created slice system-addon\x2drun.slice.
May 14 00:43:57.809224 systemd[1]: Created slice system-getty.slice.
May 14 00:43:57.809241 systemd[1]: Created slice system-modprobe.slice.
May 14 00:43:57.809253 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 14 00:43:57.809270 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 14 00:43:57.809281 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 14 00:43:57.809293 systemd[1]: Created slice user.slice.
May 14 00:43:57.809303 systemd[1]: Started systemd-ask-password-console.path.
May 14 00:43:57.809316 systemd[1]: Started systemd-ask-password-wall.path.
May 14 00:43:57.809327 systemd[1]: Set up automount boot.automount.
May 14 00:43:57.809339 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 14 00:43:57.809350 systemd[1]: Stopped target initrd-switch-root.target.
May 14 00:43:57.809361 systemd[1]: Stopped target initrd-fs.target.
May 14 00:43:57.809374 systemd[1]: Stopped target initrd-root-fs.target.
May 14 00:43:57.809385 systemd[1]: Reached target integritysetup.target.
May 14 00:43:57.809397 systemd[1]: Reached target remote-cryptsetup.target.
May 14 00:43:57.809408 systemd[1]: Reached target remote-fs.target.
May 14 00:43:57.809420 systemd[1]: Reached target slices.target.
May 14 00:43:57.809431 systemd[1]: Reached target swap.target.
May 14 00:43:57.809442 systemd[1]: Reached target torcx.target.
May 14 00:43:57.809453 systemd[1]: Reached target veritysetup.target.
May 14 00:43:57.809464 systemd[1]: Listening on systemd-coredump.socket.
May 14 00:43:57.809475 systemd[1]: Listening on systemd-initctl.socket.
May 14 00:43:57.809488 systemd[1]: Listening on systemd-networkd.socket.
May 14 00:43:57.809500 systemd[1]: Listening on systemd-udevd-control.socket.
May 14 00:43:57.809511 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 14 00:43:57.809522 systemd[1]: Listening on systemd-userdbd.socket.
May 14 00:43:57.809534 systemd[1]: Mounting dev-hugepages.mount...
May 14 00:43:57.809545 systemd[1]: Mounting dev-mqueue.mount...
May 14 00:43:57.809557 systemd[1]: Mounting media.mount...
May 14 00:43:57.809569 systemd[1]: Mounting sys-kernel-debug.mount...
May 14 00:43:57.809580 systemd[1]: Mounting sys-kernel-tracing.mount...
May 14 00:43:57.809592 systemd[1]: Mounting tmp.mount...
May 14 00:43:57.809603 systemd[1]: Starting flatcar-tmpfiles.service...
May 14 00:43:57.809615 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 14 00:43:57.809626 systemd[1]: Starting kmod-static-nodes.service...
May 14 00:43:57.809638 systemd[1]: Starting modprobe@configfs.service...
May 14 00:43:57.809650 systemd[1]: Starting modprobe@dm_mod.service...
May 14 00:43:57.809663 systemd[1]: Starting modprobe@drm.service...
May 14 00:43:57.809674 systemd[1]: Starting modprobe@efi_pstore.service...
May 14 00:43:57.809685 systemd[1]: Starting modprobe@fuse.service...
May 14 00:43:57.809697 systemd[1]: Starting modprobe@loop.service...
May 14 00:43:57.809709 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 00:43:57.809720 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 00:43:57.809737 systemd[1]: Stopped systemd-fsck-root.service.
May 14 00:43:57.809749 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 00:43:57.809760 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 00:43:57.809770 systemd[1]: Stopped systemd-journald.service.
May 14 00:43:57.809782 kernel: fuse: init (API version 7.34)
May 14 00:43:57.809794 systemd[1]: Starting systemd-journald.service...
May 14 00:43:57.809806 kernel: loop: module loaded
May 14 00:43:57.809818 systemd[1]: Starting systemd-modules-load.service...
May 14 00:43:57.809829 systemd[1]: Starting systemd-network-generator.service...
May 14 00:43:57.809839 systemd[1]: Starting systemd-remount-fs.service...
May 14 00:43:57.809850 systemd[1]: Starting systemd-udev-trigger.service...
May 14 00:43:57.809861 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 00:43:57.809873 systemd[1]: Stopped verity-setup.service.
May 14 00:43:57.809885 systemd[1]: Mounted dev-hugepages.mount.
May 14 00:43:57.809895 systemd[1]: Mounted dev-mqueue.mount.
May 14 00:43:57.809908 systemd[1]: Mounted media.mount.
May 14 00:43:57.809919 systemd[1]: Mounted sys-kernel-debug.mount.
May 14 00:43:57.809930 systemd[1]: Mounted sys-kernel-tracing.mount.
May 14 00:43:57.809941 systemd[1]: Mounted tmp.mount.
May 14 00:43:57.809952 systemd[1]: Finished kmod-static-nodes.service.
May 14 00:43:57.809963 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 00:43:57.809976 systemd[1]: Finished modprobe@configfs.service.
May 14 00:43:57.809987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:43:57.810000 systemd-journald[992]: Journal started
May 14 00:43:57.810043 systemd-journald[992]: Runtime Journal (/run/log/journal/31389bfcb45f4f97b4f8507e18519ffb) is 6.0M, max 48.7M, 42.6M free.
May 14 00:43:55.839000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 00:43:55.926000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 14 00:43:55.926000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 14 00:43:55.926000 audit: BPF prog-id=10 op=LOAD
May 14 00:43:55.926000 audit: BPF prog-id=10 op=UNLOAD
May 14 00:43:55.926000 audit: BPF prog-id=11 op=LOAD
May 14 00:43:55.926000 audit: BPF prog-id=11 op=UNLOAD
May 14 00:43:55.966000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 14 00:43:55.966000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58a2 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 14 00:43:55.966000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 14 00:43:55.967000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 14 00:43:55.967000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5979 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 14 00:43:55.967000 audit: CWD cwd="/"
May 14 00:43:55.967000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 14 00:43:55.967000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 14 00:43:55.967000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 14 00:43:57.638000 audit: BPF prog-id=12 op=LOAD
May 14 00:43:57.638000 audit: BPF prog-id=3 op=UNLOAD
May 14 00:43:57.638000 audit: BPF prog-id=13 op=LOAD
May 14 00:43:57.638000 audit: BPF prog-id=14 op=LOAD
May 14 00:43:57.638000 audit: BPF prog-id=4 op=UNLOAD
May 14 00:43:57.638000 audit: BPF prog-id=5 op=UNLOAD
May 14 00:43:57.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.659000 audit: BPF prog-id=12 op=UNLOAD
May 14 00:43:57.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.773000 audit: BPF prog-id=15 op=LOAD
May 14 00:43:57.774000 audit: BPF prog-id=16 op=LOAD
May 14 00:43:57.774000 audit: BPF prog-id=17 op=LOAD
May 14 00:43:57.774000 audit: BPF prog-id=13 op=UNLOAD
May 14 00:43:57.774000 audit: BPF prog-id=14 op=UNLOAD
May 14 00:43:57.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.806000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 14 00:43:57.806000 audit[992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffebe35900 a2=4000 a3=1 items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 14 00:43:57.806000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 14 00:43:57.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:55.965585 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 14 00:43:57.637120 systemd[1]: Queued start job for default target multi-user.target.
May 14 00:43:55.965865 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 14 00:43:57.637133 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 14 00:43:55.965884 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 14 00:43:57.640603 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 00:43:55.965913 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 14 00:43:55.965922 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 14 00:43:55.965950 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 14 00:43:55.965961 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 14 00:43:55.966279 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 14 00:43:55.966316 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 14 00:43:55.966328 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 14 00:43:55.966773 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 14 00:43:55.966811 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 14 00:43:55.966829 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 14 00:43:55.966843 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 14 00:43:55.966860 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 14 00:43:57.812405 systemd[1]: Finished modprobe@dm_mod.service.
May 14 00:43:55.966873 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 14 00:43:57.390136 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 14 00:43:57.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.390407 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:57Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 14 00:43:57.390511 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 14 00:43:57.390676 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 14 00:43:57.390750 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 14 00:43:57.390813 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:43:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 14 00:43:57.814242 systemd[1]: Started systemd-journald.service.
May 14 00:43:57.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.815032 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 00:43:57.815218 systemd[1]: Finished modprobe@drm.service.
May 14 00:43:57.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.816425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:43:57.816567 systemd[1]: Finished modprobe@efi_pstore.service.
May 14 00:43:57.817744 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 00:43:57.817901 systemd[1]: Finished modprobe@fuse.service.
May 14 00:43:57.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.819024 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:43:57.819213 systemd[1]: Finished modprobe@loop.service.
May 14 00:43:57.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.820472 systemd[1]: Finished systemd-modules-load.service.
May 14 00:43:57.821671 systemd[1]: Finished systemd-network-generator.service.
May 14 00:43:57.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.823095 systemd[1]: Finished flatcar-tmpfiles.service.
May 14 00:43:57.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.824268 systemd[1]: Finished systemd-remount-fs.service.
May 14 00:43:57.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:43:57.825760 systemd[1]: Reached target network-pre.target.
May 14 00:43:57.828026 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 14 00:43:57.830390 systemd[1]: Mounting sys-kernel-config.mount... May 14 00:43:57.831249 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:43:57.833444 systemd[1]: Starting systemd-hwdb-update.service... May 14 00:43:57.835759 systemd[1]: Starting systemd-journal-flush.service... May 14 00:43:57.836809 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:43:57.838033 systemd[1]: Starting systemd-random-seed.service... May 14 00:43:57.838982 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:43:57.840192 systemd[1]: Starting systemd-sysctl.service... May 14 00:43:57.842384 systemd[1]: Starting systemd-sysusers.service... May 14 00:43:57.846837 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:43:57.847978 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 14 00:43:57.848917 systemd[1]: Mounted sys-kernel-config.mount. May 14 00:43:57.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:57.852586 systemd[1]: Starting systemd-udev-settle.service... May 14 00:43:57.853813 systemd-journald[992]: Time spent on flushing to /var/log/journal/31389bfcb45f4f97b4f8507e18519ffb is 13.122ms for 993 entries. May 14 00:43:57.853813 systemd-journald[992]: System Journal (/var/log/journal/31389bfcb45f4f97b4f8507e18519ffb) is 8.0M, max 195.6M, 187.6M free. May 14 00:43:57.878262 systemd-journald[992]: Received client request to flush runtime journal. 
May 14 00:43:57.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:57.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:57.853926 systemd[1]: Finished systemd-random-seed.service. May 14 00:43:57.878651 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 00:43:57.856035 systemd[1]: Reached target first-boot-complete.target. May 14 00:43:57.868211 systemd[1]: Finished systemd-sysctl.service. May 14 00:43:57.881109 systemd[1]: Finished systemd-journal-flush.service. May 14 00:43:57.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:57.882416 systemd[1]: Finished systemd-sysusers.service. May 14 00:43:57.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.192764 systemd[1]: Finished systemd-hwdb-update.service. May 14 00:43:58.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:43:58.193000 audit: BPF prog-id=18 op=LOAD May 14 00:43:58.193000 audit: BPF prog-id=19 op=LOAD May 14 00:43:58.193000 audit: BPF prog-id=7 op=UNLOAD May 14 00:43:58.193000 audit: BPF prog-id=8 op=UNLOAD May 14 00:43:58.195037 systemd[1]: Starting systemd-udevd.service... May 14 00:43:58.211569 systemd-udevd[1033]: Using default interface naming scheme 'v252'. May 14 00:43:58.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.224445 systemd[1]: Started systemd-udevd.service. May 14 00:43:58.226000 audit: BPF prog-id=20 op=LOAD May 14 00:43:58.226833 systemd[1]: Starting systemd-networkd.service... May 14 00:43:58.230000 audit: BPF prog-id=21 op=LOAD May 14 00:43:58.230000 audit: BPF prog-id=22 op=LOAD May 14 00:43:58.230000 audit: BPF prog-id=23 op=LOAD May 14 00:43:58.231783 systemd[1]: Starting systemd-userdbd.service... May 14 00:43:58.251663 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 14 00:43:58.263049 systemd[1]: Started systemd-userdbd.service. May 14 00:43:58.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.287073 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:43:58.320820 systemd-networkd[1035]: lo: Link UP May 14 00:43:58.320833 systemd-networkd[1035]: lo: Gained carrier May 14 00:43:58.321171 systemd-networkd[1035]: Enumeration completed May 14 00:43:58.321278 systemd[1]: Started systemd-networkd.service. 
May 14 00:43:58.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.321868 systemd-networkd[1035]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:43:58.324859 systemd-networkd[1035]: eth0: Link UP May 14 00:43:58.324865 systemd-networkd[1035]: eth0: Gained carrier May 14 00:43:58.333181 systemd[1]: Finished systemd-udev-settle.service. May 14 00:43:58.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.335242 systemd[1]: Starting lvm2-activation-early.service... May 14 00:43:58.344912 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:43:58.347859 systemd-networkd[1035]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:43:58.375538 systemd[1]: Finished lvm2-activation-early.service. May 14 00:43:58.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.376585 systemd[1]: Reached target cryptsetup.target. May 14 00:43:58.378499 systemd[1]: Starting lvm2-activation.service... May 14 00:43:58.382285 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:43:58.424653 systemd[1]: Finished lvm2-activation.service. May 14 00:43:58.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:43:58.425683 systemd[1]: Reached target local-fs-pre.target. May 14 00:43:58.426573 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:43:58.426611 systemd[1]: Reached target local-fs.target. May 14 00:43:58.427435 systemd[1]: Reached target machines.target. May 14 00:43:58.429499 systemd[1]: Starting ldconfig.service... May 14 00:43:58.430648 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:43:58.430744 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:43:58.432169 systemd[1]: Starting systemd-boot-update.service... May 14 00:43:58.435293 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 14 00:43:58.437874 systemd[1]: Starting systemd-machine-id-commit.service... May 14 00:43:58.440410 systemd[1]: Starting systemd-sysext.service... May 14 00:43:58.441844 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl) May 14 00:43:58.445662 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 14 00:43:58.456131 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 14 00:43:58.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.463914 systemd[1]: Unmounting usr-share-oem.mount... May 14 00:43:58.512657 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 14 00:43:58.512873 systemd[1]: Unmounted usr-share-oem.mount. May 14 00:43:58.525670 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
May 14 00:43:58.526191 systemd[1]: Finished systemd-machine-id-commit.service. May 14 00:43:58.526757 kernel: loop0: detected capacity change from 0 to 189592 May 14 00:43:58.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.537743 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:43:58.537842 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31) May 14 00:43:58.537842 systemd-fsck[1077]: /dev/vda1: 236 files, 117310/258078 clusters May 14 00:43:58.541916 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 14 00:43:58.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.548648 systemd[1]: Mounting boot.mount... May 14 00:43:58.555429 systemd[1]: Mounted boot.mount. May 14 00:43:58.564750 kernel: loop1: detected capacity change from 0 to 189592 May 14 00:43:58.570374 systemd[1]: Finished systemd-boot-update.service. May 14 00:43:58.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.572882 (sd-sysext)[1084]: Using extensions 'kubernetes'. May 14 00:43:58.573298 (sd-sysext)[1084]: Merged extensions into '/usr'. May 14 00:43:58.590173 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:43:58.591439 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:43:58.593543 systemd[1]: Starting modprobe@efi_pstore.service... 
May 14 00:43:58.595496 systemd[1]: Starting modprobe@loop.service... May 14 00:43:58.596410 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:43:58.596542 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:43:58.597357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:43:58.597485 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:43:58.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.598912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:43:58.599026 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:43:58.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.600519 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:43:58.600633 systemd[1]: Finished modprobe@loop.service. 
May 14 00:43:58.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.602162 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:43:58.602355 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:43:58.649582 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:43:58.653407 systemd[1]: Finished ldconfig.service. May 14 00:43:58.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.795717 systemd[1]: Mounting usr-share-oem.mount... May 14 00:43:58.800673 systemd[1]: Mounted usr-share-oem.mount. May 14 00:43:58.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.802498 systemd[1]: Finished systemd-sysext.service. May 14 00:43:58.804466 systemd[1]: Starting ensure-sysext.service... May 14 00:43:58.806168 systemd[1]: Starting systemd-tmpfiles-setup.service... May 14 00:43:58.810436 systemd[1]: Reloading. May 14 00:43:58.818757 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
May 14 00:43:58.820320 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:43:58.822749 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:43:58.849877 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-05-14T00:43:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:43:58.850283 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-05-14T00:43:58Z" level=info msg="torcx already run" May 14 00:43:58.905947 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:43:58.905966 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:43:58.922587 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 14 00:43:58.964000 audit: BPF prog-id=24 op=LOAD May 14 00:43:58.964000 audit: BPF prog-id=20 op=UNLOAD May 14 00:43:58.965000 audit: BPF prog-id=25 op=LOAD May 14 00:43:58.965000 audit: BPF prog-id=15 op=UNLOAD May 14 00:43:58.965000 audit: BPF prog-id=26 op=LOAD May 14 00:43:58.965000 audit: BPF prog-id=27 op=LOAD May 14 00:43:58.965000 audit: BPF prog-id=16 op=UNLOAD May 14 00:43:58.965000 audit: BPF prog-id=17 op=UNLOAD May 14 00:43:58.967000 audit: BPF prog-id=28 op=LOAD May 14 00:43:58.967000 audit: BPF prog-id=29 op=LOAD May 14 00:43:58.967000 audit: BPF prog-id=18 op=UNLOAD May 14 00:43:58.967000 audit: BPF prog-id=19 op=UNLOAD May 14 00:43:58.968000 audit: BPF prog-id=30 op=LOAD May 14 00:43:58.968000 audit: BPF prog-id=21 op=UNLOAD May 14 00:43:58.968000 audit: BPF prog-id=31 op=LOAD May 14 00:43:58.968000 audit: BPF prog-id=32 op=LOAD May 14 00:43:58.968000 audit: BPF prog-id=22 op=UNLOAD May 14 00:43:58.968000 audit: BPF prog-id=23 op=UNLOAD May 14 00:43:58.971006 systemd[1]: Finished systemd-tmpfiles-setup.service. May 14 00:43:58.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:58.975204 systemd[1]: Starting audit-rules.service... May 14 00:43:58.977067 systemd[1]: Starting clean-ca-certificates.service... May 14 00:43:58.979009 systemd[1]: Starting systemd-journal-catalog-update.service... May 14 00:43:58.982000 audit: BPF prog-id=33 op=LOAD May 14 00:43:58.984156 systemd[1]: Starting systemd-resolved.service... May 14 00:43:58.985000 audit: BPF prog-id=34 op=LOAD May 14 00:43:58.987422 systemd[1]: Starting systemd-timesyncd.service... May 14 00:43:58.989246 systemd[1]: Starting systemd-update-utmp.service... 
May 14 00:43:58.994000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 14 00:43:58.993660 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:43:58.994887 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:43:58.996749 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:43:58.999543 systemd[1]: Starting modprobe@loop.service... May 14 00:43:59.000343 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:43:59.000471 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:43:59.001468 systemd[1]: Finished clean-ca-certificates.service. May 14 00:43:59.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.002779 systemd[1]: Finished systemd-journal-catalog-update.service. May 14 00:43:59.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.004213 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:43:59.004331 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:43:59.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:43:59.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.005526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:43:59.005634 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:43:59.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.006965 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:43:59.007076 systemd[1]: Finished modprobe@loop.service. May 14 00:43:59.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.009940 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:43:59.010085 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:43:59.011402 systemd[1]: Starting systemd-update-done.service... 
May 14 00:43:59.012347 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:43:59.014597 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:43:59.016127 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:43:59.017940 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:43:59.019747 systemd[1]: Starting modprobe@loop.service... May 14 00:43:59.020473 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:43:59.020600 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:43:59.020688 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:43:59.021581 systemd[1]: Finished systemd-update-utmp.service. May 14 00:43:59.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.022976 systemd[1]: Finished systemd-update-done.service. May 14 00:43:59.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:43:59.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.024323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:43:59.024435 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:43:59.025618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:43:59.025738 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:43:59.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.027031 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:43:59.027136 systemd[1]: Finished modprobe@loop.service. May 14 00:43:59.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.029002 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:43:59.029098 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
May 14 00:43:59.031452 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:43:59.032782 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:43:59.034722 systemd[1]: Starting modprobe@drm.service... May 14 00:43:59.036487 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:43:59.038468 systemd[1]: Starting modprobe@loop.service... May 14 00:43:59.039464 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:43:59.039606 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:43:59.040813 systemd[1]: Starting systemd-networkd-wait-online.service... May 14 00:43:59.041794 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:43:59.042805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:43:59.042916 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:43:59.043276 systemd-resolved[1154]: Positive Trust Anchors: May 14 00:43:59.043521 systemd-resolved[1154]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:43:59.043598 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:43:59.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:43:59.043000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 14 00:43:59.043000 audit[1177]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe9965670 a2=420 a3=0 items=0 ppid=1150 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:43:59.043000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 14 00:43:59.044595 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:43:59.044700 systemd[1]: Finished modprobe@drm.service. May 14 00:43:59.045022 augenrules[1177]: No rules May 14 00:43:59.045714 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
May 14 00:43:59.045774 systemd-timesyncd[1158]: Initial clock synchronization to Wed 2025-05-14 00:43:59.206953 UTC. May 14 00:43:59.045949 systemd[1]: Started systemd-timesyncd.service. May 14 00:43:59.047388 systemd[1]: Finished audit-rules.service. May 14 00:43:59.048549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:43:59.048661 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:43:59.049874 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:43:59.049980 systemd[1]: Finished modprobe@loop.service. May 14 00:43:59.051579 systemd[1]: Reached target time-set.target. May 14 00:43:59.052395 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:43:59.052432 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:43:59.052695 systemd[1]: Finished ensure-sysext.service. May 14 00:43:59.058871 systemd-resolved[1154]: Defaulting to hostname 'linux'. May 14 00:43:59.060182 systemd[1]: Started systemd-resolved.service. May 14 00:43:59.061248 systemd[1]: Reached target network.target. May 14 00:43:59.061982 systemd[1]: Reached target nss-lookup.target. May 14 00:43:59.062756 systemd[1]: Reached target sysinit.target. May 14 00:43:59.063564 systemd[1]: Started motdgen.path. May 14 00:43:59.064305 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 14 00:43:59.065505 systemd[1]: Started logrotate.timer. May 14 00:43:59.066320 systemd[1]: Started mdadm.timer. May 14 00:43:59.066975 systemd[1]: Started systemd-tmpfiles-clean.timer. May 14 00:43:59.067883 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:43:59.067913 systemd[1]: Reached target paths.target. May 14 00:43:59.068621 systemd[1]: Reached target timers.target. May 14 00:43:59.069673 systemd[1]: Listening on dbus.socket. 
May 14 00:43:59.071298 systemd[1]: Starting docker.socket... May 14 00:43:59.074286 systemd[1]: Listening on sshd.socket. May 14 00:43:59.075142 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:43:59.075552 systemd[1]: Listening on docker.socket. May 14 00:43:59.076396 systemd[1]: Reached target sockets.target. May 14 00:43:59.077369 systemd[1]: Reached target basic.target. May 14 00:43:59.078138 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:43:59.078169 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:43:59.079160 systemd[1]: Starting containerd.service... May 14 00:43:59.080822 systemd[1]: Starting dbus.service... May 14 00:43:59.082412 systemd[1]: Starting enable-oem-cloudinit.service... May 14 00:43:59.084362 systemd[1]: Starting extend-filesystems.service... May 14 00:43:59.085339 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 14 00:43:59.086432 systemd[1]: Starting motdgen.service... May 14 00:43:59.087512 jq[1193]: false May 14 00:43:59.088256 systemd[1]: Starting prepare-helm.service... May 14 00:43:59.090101 systemd[1]: Starting ssh-key-proc-cmdline.service... May 14 00:43:59.091924 systemd[1]: Starting sshd-keygen.service... May 14 00:43:59.095525 systemd[1]: Starting systemd-logind.service... May 14 00:43:59.096521 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:43:59.096590 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 14 00:43:59.096993 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:43:59.098188 systemd[1]: Starting update-engine.service... May 14 00:43:59.099961 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 14 00:43:59.102863 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:43:59.103069 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 14 00:43:59.104282 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:43:59.104446 systemd[1]: Finished ssh-key-proc-cmdline.service. May 14 00:43:59.107715 dbus-daemon[1192]: [system] SELinux support is enabled May 14 00:43:59.109280 jq[1211]: true May 14 00:43:59.109505 systemd[1]: Started dbus.service. May 14 00:43:59.113409 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:43:59.113440 systemd[1]: Reached target system-config.target. May 14 00:43:59.114351 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:43:59.114367 systemd[1]: Reached target user-config.target. 
May 14 00:43:59.117583 tar[1213]: linux-arm64/helm May 14 00:43:59.120780 jq[1217]: true May 14 00:43:59.121885 extend-filesystems[1194]: Found loop1 May 14 00:43:59.121885 extend-filesystems[1194]: Found vda May 14 00:43:59.123510 extend-filesystems[1194]: Found vda1 May 14 00:43:59.123510 extend-filesystems[1194]: Found vda2 May 14 00:43:59.125234 extend-filesystems[1194]: Found vda3 May 14 00:43:59.125234 extend-filesystems[1194]: Found usr May 14 00:43:59.125234 extend-filesystems[1194]: Found vda4 May 14 00:43:59.125234 extend-filesystems[1194]: Found vda6 May 14 00:43:59.125234 extend-filesystems[1194]: Found vda7 May 14 00:43:59.125234 extend-filesystems[1194]: Found vda9 May 14 00:43:59.125234 extend-filesystems[1194]: Checking size of /dev/vda9 May 14 00:43:59.131713 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:43:59.131896 systemd[1]: Finished motdgen.service. May 14 00:43:59.172338 extend-filesystems[1194]: Resized partition /dev/vda9 May 14 00:43:59.173733 extend-filesystems[1242]: resize2fs 1.46.5 (30-Dec-2021) May 14 00:43:59.177859 bash[1239]: Updated "/home/core/.ssh/authorized_keys" May 14 00:43:59.179131 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 14 00:43:59.182074 systemd-logind[1205]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:43:59.182370 systemd-logind[1205]: New seat seat0. May 14 00:43:59.186882 systemd[1]: Started systemd-logind.service. May 14 00:43:59.190791 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:43:59.202910 update_engine[1209]: I0514 00:43:59.200573 1209 main.cc:92] Flatcar Update Engine starting May 14 00:43:59.206356 update_engine[1209]: I0514 00:43:59.206290 1209 update_check_scheduler.cc:74] Next update check in 8m24s May 14 00:43:59.207282 systemd[1]: Started update-engine.service. May 14 00:43:59.211507 systemd[1]: Started locksmithd.service. 
May 14 00:43:59.217742 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:43:59.228912 extend-filesystems[1242]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:43:59.228912 extend-filesystems[1242]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:43:59.228912 extend-filesystems[1242]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:43:59.233223 extend-filesystems[1194]: Resized filesystem in /dev/vda9 May 14 00:43:59.232138 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:43:59.234966 env[1215]: time="2025-05-14T00:43:59.233444880Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 14 00:43:59.232308 systemd[1]: Finished extend-filesystems.service. May 14 00:43:59.252111 env[1215]: time="2025-05-14T00:43:59.252069000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 00:43:59.252322 env[1215]: time="2025-05-14T00:43:59.252217240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 00:43:59.256490 env[1215]: time="2025-05-14T00:43:59.256449120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 00:43:59.256490 env[1215]: time="2025-05-14T00:43:59.256485920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 00:43:59.256769 env[1215]: time="2025-05-14T00:43:59.256739760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:43:59.256769 env[1215]: time="2025-05-14T00:43:59.256765120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 00:43:59.256845 env[1215]: time="2025-05-14T00:43:59.256778240Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 14 00:43:59.256845 env[1215]: time="2025-05-14T00:43:59.256788640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 00:43:59.256883 env[1215]: time="2025-05-14T00:43:59.256867360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 00:43:59.257116 env[1215]: time="2025-05-14T00:43:59.257087520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 00:43:59.257268 env[1215]: time="2025-05-14T00:43:59.257241720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:43:59.257268 env[1215]: time="2025-05-14T00:43:59.257264680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 14 00:43:59.257341 env[1215]: time="2025-05-14T00:43:59.257322800Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 14 00:43:59.257341 env[1215]: time="2025-05-14T00:43:59.257339560Z" level=info msg="metadata content store policy set" policy=shared May 14 00:43:59.266303 env[1215]: time="2025-05-14T00:43:59.266214600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 00:43:59.266303 env[1215]: time="2025-05-14T00:43:59.266256520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 00:43:59.266303 env[1215]: time="2025-05-14T00:43:59.266270120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 00:43:59.266414 env[1215]: time="2025-05-14T00:43:59.266317800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 00:43:59.266414 env[1215]: time="2025-05-14T00:43:59.266333280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 00:43:59.266414 env[1215]: time="2025-05-14T00:43:59.266346720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 00:43:59.266414 env[1215]: time="2025-05-14T00:43:59.266358840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 00:43:59.266908 env[1215]: time="2025-05-14T00:43:59.266717840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 00:43:59.266908 env[1215]: time="2025-05-14T00:43:59.266758040Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 May 14 00:43:59.266908 env[1215]: time="2025-05-14T00:43:59.266773240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 00:43:59.266908 env[1215]: time="2025-05-14T00:43:59.266786200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 00:43:59.266908 env[1215]: time="2025-05-14T00:43:59.266799120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 00:43:59.267138 env[1215]: time="2025-05-14T00:43:59.266914880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 00:43:59.267138 env[1215]: time="2025-05-14T00:43:59.266990120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 00:43:59.267269 env[1215]: time="2025-05-14T00:43:59.267242480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 00:43:59.267306 env[1215]: time="2025-05-14T00:43:59.267273280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267306 env[1215]: time="2025-05-14T00:43:59.267288200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 00:43:59.267460 env[1215]: time="2025-05-14T00:43:59.267445880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267496 env[1215]: time="2025-05-14T00:43:59.267462440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267496 env[1215]: time="2025-05-14T00:43:59.267475640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 May 14 00:43:59.267496 env[1215]: time="2025-05-14T00:43:59.267487200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267552 env[1215]: time="2025-05-14T00:43:59.267499640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267552 env[1215]: time="2025-05-14T00:43:59.267511920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267552 env[1215]: time="2025-05-14T00:43:59.267523840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267552 env[1215]: time="2025-05-14T00:43:59.267535960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267552 env[1215]: time="2025-05-14T00:43:59.267548280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 00:43:59.267686 env[1215]: time="2025-05-14T00:43:59.267665880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267720 env[1215]: time="2025-05-14T00:43:59.267687880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267720 env[1215]: time="2025-05-14T00:43:59.267701040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 00:43:59.267720 env[1215]: time="2025-05-14T00:43:59.267712640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 00:43:59.267798 env[1215]: time="2025-05-14T00:43:59.267760080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 14 00:43:59.267798 env[1215]: time="2025-05-14T00:43:59.267773720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 00:43:59.267798 env[1215]: time="2025-05-14T00:43:59.267791040Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 14 00:43:59.267856 env[1215]: time="2025-05-14T00:43:59.267824720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 00:43:59.268086 env[1215]: time="2025-05-14T00:43:59.268027240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.268097440Z" level=info msg="Connect containerd service" May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.268129840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.268890040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.269283720Z" level=info msg="Start subscribing containerd event" May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.269319840Z" level=info msg="Start recovering state" May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.269371720Z" level=info msg="Start event monitor" May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.269390320Z" level=info msg="Start snapshots syncer" May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.269399560Z" level=info msg="Start cni network conf syncer for default" May 14 00:43:59.270986 env[1215]: 
time="2025-05-14T00:43:59.269407320Z" level=info msg="Start streaming server" May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.269869600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:43:59.270986 env[1215]: time="2025-05-14T00:43:59.269922760Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:43:59.270046 systemd[1]: Started containerd.service. May 14 00:43:59.271567 env[1215]: time="2025-05-14T00:43:59.271156800Z" level=info msg="containerd successfully booted in 0.044195s" May 14 00:43:59.282496 locksmithd[1244]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:43:59.524548 tar[1213]: linux-arm64/LICENSE May 14 00:43:59.524817 tar[1213]: linux-arm64/README.md May 14 00:43:59.529146 systemd[1]: Finished prepare-helm.service. May 14 00:44:00.108036 systemd-networkd[1035]: eth0: Gained IPv6LL May 14 00:44:00.109824 systemd[1]: Finished systemd-networkd-wait-online.service. May 14 00:44:00.111091 systemd[1]: Reached target network-online.target. May 14 00:44:00.113550 systemd[1]: Starting kubelet.service... May 14 00:44:00.502011 sshd_keygen[1214]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:44:00.519401 systemd[1]: Finished sshd-keygen.service. May 14 00:44:00.521861 systemd[1]: Starting issuegen.service... May 14 00:44:00.526309 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:44:00.526458 systemd[1]: Finished issuegen.service. May 14 00:44:00.528543 systemd[1]: Starting systemd-user-sessions.service... May 14 00:44:00.534851 systemd[1]: Finished systemd-user-sessions.service. May 14 00:44:00.537032 systemd[1]: Started getty@tty1.service. May 14 00:44:00.539047 systemd[1]: Started serial-getty@ttyAMA0.service. May 14 00:44:00.540142 systemd[1]: Reached target getty.target. May 14 00:44:00.644895 systemd[1]: Started kubelet.service. May 14 00:44:00.646149 systemd[1]: Reached target multi-user.target. 
May 14 00:44:00.648265 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 14 00:44:00.655020 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 14 00:44:00.655197 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 14 00:44:00.656366 systemd[1]: Startup finished in 600ms (kernel) + 4.206s (initrd) + 4.860s (userspace) = 9.667s. May 14 00:44:01.098668 kubelet[1274]: E0514 00:44:01.098621 1274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:44:01.100732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:44:01.100875 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:44:04.150799 systemd[1]: Created slice system-sshd.slice. May 14 00:44:04.151917 systemd[1]: Started sshd@0-10.0.0.78:22-10.0.0.1:45896.service. May 14 00:44:04.196758 sshd[1284]: Accepted publickey for core from 10.0.0.1 port 45896 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:44:04.198868 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:44:04.209682 systemd-logind[1205]: New session 1 of user core. May 14 00:44:04.210628 systemd[1]: Created slice user-500.slice. May 14 00:44:04.211802 systemd[1]: Starting user-runtime-dir@500.service... May 14 00:44:04.219973 systemd[1]: Finished user-runtime-dir@500.service. May 14 00:44:04.221315 systemd[1]: Starting user@500.service... May 14 00:44:04.224068 (systemd)[1287]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:44:04.284086 systemd[1287]: Queued start job for default target default.target. May 14 00:44:04.284613 systemd[1287]: Reached target paths.target. 
May 14 00:44:04.284645 systemd[1287]: Reached target sockets.target. May 14 00:44:04.284656 systemd[1287]: Reached target timers.target. May 14 00:44:04.284666 systemd[1287]: Reached target basic.target. May 14 00:44:04.284782 systemd[1]: Started user@500.service. May 14 00:44:04.285659 systemd[1]: Started session-1.scope. May 14 00:44:04.285924 systemd[1287]: Reached target default.target. May 14 00:44:04.286057 systemd[1287]: Startup finished in 56ms. May 14 00:44:04.338810 systemd[1]: Started sshd@1-10.0.0.78:22-10.0.0.1:45904.service. May 14 00:44:04.386596 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 45904 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:44:04.388537 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:44:04.396992 systemd-logind[1205]: New session 2 of user core. May 14 00:44:04.397313 systemd[1]: Started session-2.scope. May 14 00:44:04.453853 sshd[1296]: pam_unix(sshd:session): session closed for user core May 14 00:44:04.457400 systemd[1]: Started sshd@2-10.0.0.78:22-10.0.0.1:45916.service. May 14 00:44:04.457909 systemd[1]: sshd@1-10.0.0.78:22-10.0.0.1:45904.service: Deactivated successfully. May 14 00:44:04.458516 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:44:04.459067 systemd-logind[1205]: Session 2 logged out. Waiting for processes to exit. May 14 00:44:04.460018 systemd-logind[1205]: Removed session 2. May 14 00:44:04.496976 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 45916 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:44:04.498374 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:44:04.501568 systemd-logind[1205]: New session 3 of user core. May 14 00:44:04.502375 systemd[1]: Started session-3.scope. 
May 14 00:44:04.552074 sshd[1301]: pam_unix(sshd:session): session closed for user core May 14 00:44:04.555095 systemd[1]: sshd@2-10.0.0.78:22-10.0.0.1:45916.service: Deactivated successfully. May 14 00:44:04.555653 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:44:04.556216 systemd-logind[1205]: Session 3 logged out. Waiting for processes to exit. May 14 00:44:04.557254 systemd[1]: Started sshd@3-10.0.0.78:22-10.0.0.1:45918.service. May 14 00:44:04.558334 systemd-logind[1205]: Removed session 3. May 14 00:44:04.596785 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 45918 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:44:04.598228 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:44:04.601426 systemd-logind[1205]: New session 4 of user core. May 14 00:44:04.602264 systemd[1]: Started session-4.scope. May 14 00:44:04.655684 sshd[1308]: pam_unix(sshd:session): session closed for user core May 14 00:44:04.659413 systemd[1]: sshd@3-10.0.0.78:22-10.0.0.1:45918.service: Deactivated successfully. May 14 00:44:04.659986 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:44:04.660468 systemd-logind[1205]: Session 4 logged out. Waiting for processes to exit. May 14 00:44:04.661483 systemd[1]: Started sshd@4-10.0.0.78:22-10.0.0.1:45924.service. May 14 00:44:04.662350 systemd-logind[1205]: Removed session 4. May 14 00:44:04.701292 sshd[1314]: Accepted publickey for core from 10.0.0.1 port 45924 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:44:04.702409 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:44:04.705762 systemd-logind[1205]: New session 5 of user core. May 14 00:44:04.706831 systemd[1]: Started session-5.scope. 
May 14 00:44:04.765602 sudo[1317]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:44:04.765850 sudo[1317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 14 00:44:04.824381 systemd[1]: Starting docker.service... May 14 00:44:04.905484 env[1329]: time="2025-05-14T00:44:04.905431285Z" level=info msg="Starting up" May 14 00:44:04.906968 env[1329]: time="2025-05-14T00:44:04.906945210Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:44:04.906968 env[1329]: time="2025-05-14T00:44:04.906964353Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:44:04.907070 env[1329]: time="2025-05-14T00:44:04.906983740Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:44:04.907070 env[1329]: time="2025-05-14T00:44:04.906993737Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:44:04.911265 env[1329]: time="2025-05-14T00:44:04.911233505Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:44:04.911265 env[1329]: time="2025-05-14T00:44:04.911260419Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:44:04.911381 env[1329]: time="2025-05-14T00:44:04.911278389Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:44:04.911381 env[1329]: time="2025-05-14T00:44:04.911288588Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:44:04.915731 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1563934749-merged.mount: Deactivated successfully. May 14 00:44:05.022876 env[1329]: time="2025-05-14T00:44:05.022784946Z" level=info msg="Loading containers: start." 
May 14 00:44:05.140769 kernel: Initializing XFRM netlink socket May 14 00:44:05.163754 env[1329]: time="2025-05-14T00:44:05.163697840Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 14 00:44:05.213793 systemd-networkd[1035]: docker0: Link UP May 14 00:44:05.231850 env[1329]: time="2025-05-14T00:44:05.231813586Z" level=info msg="Loading containers: done." May 14 00:44:05.252684 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck165048585-merged.mount: Deactivated successfully. May 14 00:44:05.253932 env[1329]: time="2025-05-14T00:44:05.253893729Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:44:05.254072 env[1329]: time="2025-05-14T00:44:05.254045887Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 14 00:44:05.254157 env[1329]: time="2025-05-14T00:44:05.254141950Z" level=info msg="Daemon has completed initialization" May 14 00:44:05.270359 systemd[1]: Started docker.service. May 14 00:44:05.277781 env[1329]: time="2025-05-14T00:44:05.277674360Z" level=info msg="API listen on /run/docker.sock" May 14 00:44:05.978137 env[1215]: time="2025-05-14T00:44:05.977869826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 00:44:06.529766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350962719.mount: Deactivated successfully. 
May 14 00:44:07.661716 env[1215]: time="2025-05-14T00:44:07.661667806Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:07.662992 env[1215]: time="2025-05-14T00:44:07.662964956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:07.664817 env[1215]: time="2025-05-14T00:44:07.664784975Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:07.666972 env[1215]: time="2025-05-14T00:44:07.666942001Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:07.667780 env[1215]: time="2025-05-14T00:44:07.667754665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 14 00:44:07.668995 env[1215]: time="2025-05-14T00:44:07.668969408Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 14 00:44:09.234360 env[1215]: time="2025-05-14T00:44:09.234314206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:09.237581 env[1215]: time="2025-05-14T00:44:09.237549261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:09.242123 env[1215]: time="2025-05-14T00:44:09.241660595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:09.244176 env[1215]: time="2025-05-14T00:44:09.244135992Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:09.244481 env[1215]: time="2025-05-14T00:44:09.244444692Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 14 00:44:09.245932 env[1215]: time="2025-05-14T00:44:09.245684242Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 14 00:44:10.469390 env[1215]: time="2025-05-14T00:44:10.469311210Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:10.471523 env[1215]: time="2025-05-14T00:44:10.471494048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:10.474797 env[1215]: time="2025-05-14T00:44:10.474375935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:10.475929 env[1215]: time="2025-05-14T00:44:10.475902269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:10.476698 env[1215]: time="2025-05-14T00:44:10.476654719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 14 00:44:10.477801 env[1215]: time="2025-05-14T00:44:10.477765541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 14 00:44:11.186824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 00:44:11.186995 systemd[1]: Stopped kubelet.service.
May 14 00:44:11.188583 systemd[1]: Starting kubelet.service...
May 14 00:44:11.302228 systemd[1]: Started kubelet.service.
May 14 00:44:11.361216 kubelet[1462]: E0514 00:44:11.361148 1462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:44:11.363976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:44:11.364102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:44:11.683389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount56424753.mount: Deactivated successfully.
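The kubelet failure above (repeated at restart counter 2 below) is driven by a missing /var/lib/kubelet/config.yaml, which is normally written by `kubeadm init`/`kubeadm join`, so the unit crash-loops until that file appears. A minimal sketch of pulling the failing path out of such a journal line, assuming this exact `run.go:72` message format (the sample line is abridged from the log above):

```python
import re

# One of the kubelet failure lines from the journal above, truncated to the
# relevant part; the "path: ..." regex is an assumption about this message.
line = ('kubelet[1462]: E0514 00:44:11.361148 1462 run.go:72] "command failed" '
        'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, ...')

m = re.search(r'failed to load kubelet config file, path: (\S+?),', line)
if m:
    print(m.group(1))  # /var/lib/kubelet/config.yaml
```

Once the path is known, checking whether the file exists on the node distinguishes "kubeadm has not run yet" from a genuinely broken unit.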
May 14 00:44:12.271571 env[1215]: time="2025-05-14T00:44:12.271522384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:12.274682 env[1215]: time="2025-05-14T00:44:12.274654775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:12.276382 env[1215]: time="2025-05-14T00:44:12.276344811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:12.283707 env[1215]: time="2025-05-14T00:44:12.283681126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:12.284106 env[1215]: time="2025-05-14T00:44:12.284079738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 14 00:44:12.284520 env[1215]: time="2025-05-14T00:44:12.284498753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 00:44:12.901614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671239609.mount: Deactivated successfully.
May 14 00:44:13.690401 env[1215]: time="2025-05-14T00:44:13.690355879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:13.691612 env[1215]: time="2025-05-14T00:44:13.691583265Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:13.694443 env[1215]: time="2025-05-14T00:44:13.694414731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:13.696416 env[1215]: time="2025-05-14T00:44:13.696377964Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:13.697106 env[1215]: time="2025-05-14T00:44:13.697071901Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 14 00:44:13.697544 env[1215]: time="2025-05-14T00:44:13.697510253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 14 00:44:14.149807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710878358.mount: Deactivated successfully.
May 14 00:44:14.153056 env[1215]: time="2025-05-14T00:44:14.153006237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:14.154406 env[1215]: time="2025-05-14T00:44:14.154378450Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:14.155869 env[1215]: time="2025-05-14T00:44:14.155838737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:14.157651 env[1215]: time="2025-05-14T00:44:14.157620902Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:14.158189 env[1215]: time="2025-05-14T00:44:14.158157044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 14 00:44:14.159143 env[1215]: time="2025-05-14T00:44:14.159115535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 14 00:44:14.688255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150624796.mount: Deactivated successfully.
May 14 00:44:16.761741 env[1215]: time="2025-05-14T00:44:16.761678561Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:16.903806 env[1215]: time="2025-05-14T00:44:16.903741570Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:16.906452 env[1215]: time="2025-05-14T00:44:16.906423138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:16.909152 env[1215]: time="2025-05-14T00:44:16.909119623Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:44:16.910214 env[1215]: time="2025-05-14T00:44:16.910185514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 14 00:44:21.435194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 14 00:44:21.435372 systemd[1]: Stopped kubelet.service.
May 14 00:44:21.436749 systemd[1]: Starting kubelet.service...
May 14 00:44:21.525039 systemd[1]: Started kubelet.service.
May 14 00:44:21.558466 kubelet[1494]: E0514 00:44:21.558414 1494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:44:21.560513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:44:21.560641 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:44:21.769669 systemd[1]: Stopped kubelet.service.
May 14 00:44:21.771724 systemd[1]: Starting kubelet.service...
May 14 00:44:21.792247 systemd[1]: Reloading.
May 14 00:44:21.835623 /usr/lib/systemd/system-generators/torcx-generator[1528]: time="2025-05-14T00:44:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 14 00:44:21.835650 /usr/lib/systemd/system-generators/torcx-generator[1528]: time="2025-05-14T00:44:21Z" level=info msg="torcx already run"
May 14 00:44:21.926506 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 14 00:44:21.926681 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 14 00:44:21.942909 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:44:22.010110 systemd[1]: Started kubelet.service.
May 14 00:44:22.011417 systemd[1]: Stopping kubelet.service...
May 14 00:44:22.011646 systemd[1]: kubelet.service: Deactivated successfully.
May 14 00:44:22.011872 systemd[1]: Stopped kubelet.service.
May 14 00:44:22.013386 systemd[1]: Starting kubelet.service...
May 14 00:44:22.099839 systemd[1]: Started kubelet.service.
May 14 00:44:22.132276 kubelet[1573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:44:22.132276 kubelet[1573]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 00:44:22.132276 kubelet[1573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:44:22.132635 kubelet[1573]: I0514 00:44:22.132457 1573 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 00:44:23.427030 kubelet[1573]: I0514 00:44:23.426985 1573 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 14 00:44:23.427030 kubelet[1573]: I0514 00:44:23.427017 1573 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 00:44:23.427405 kubelet[1573]: I0514 00:44:23.427250 1573 server.go:929] "Client rotation is on, will bootstrap in background"
May 14 00:44:23.460833 kubelet[1573]: E0514 00:44:23.460804 1573 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
May 14 00:44:23.463019 kubelet[1573]: I0514 00:44:23.463000 1573 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 00:44:23.473913 kubelet[1573]: E0514 00:44:23.473888 1573 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 14 00:44:23.473913 kubelet[1573]: I0514 00:44:23.473915 1573 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 14 00:44:23.477888 kubelet[1573]: I0514 00:44:23.477865 1573 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 00:44:23.478787 kubelet[1573]: I0514 00:44:23.478765 1573 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 14 00:44:23.478941 kubelet[1573]: I0514 00:44:23.478912 1573 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 00:44:23.479098 kubelet[1573]: I0514 00:44:23.478942 1573 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 00:44:23.479243 kubelet[1573]: I0514 00:44:23.479232 1573 topology_manager.go:138] "Creating topology manager with none policy"
May 14 00:44:23.479243 kubelet[1573]: I0514 00:44:23.479244 1573 container_manager_linux.go:300] "Creating device plugin manager"
May 14 00:44:23.479425 kubelet[1573]: I0514 00:44:23.479413 1573 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:44:23.480851 kubelet[1573]: I0514 00:44:23.480831 1573 kubelet.go:408] "Attempting to sync node with API server"
May 14 00:44:23.481036 kubelet[1573]: I0514 00:44:23.480925 1573 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 00:44:23.481075 kubelet[1573]: I0514 00:44:23.481054 1573 kubelet.go:314] "Adding apiserver pod source"
May 14 00:44:23.481075 kubelet[1573]: I0514 00:44:23.481066 1573 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 00:44:23.494906 kubelet[1573]: I0514 00:44:23.494879 1573 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 14 00:44:23.494993 kubelet[1573]: W0514 00:44:23.494918 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused
May 14 00:44:23.494993 kubelet[1573]: E0514 00:44:23.494979 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
May 14 00:44:23.495090 kubelet[1573]: W0514 00:44:23.495014 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused
May 14 00:44:23.495090 kubelet[1573]: E0514 00:44:23.495049 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
May 14 00:44:23.498461 kubelet[1573]: I0514 00:44:23.498437 1573 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 00:44:23.501130 kubelet[1573]: W0514 00:44:23.501103 1573 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 00:44:23.502390 kubelet[1573]: I0514 00:44:23.502373 1573 server.go:1269] "Started kubelet"
May 14 00:44:23.502590 kubelet[1573]: I0514 00:44:23.502559 1573 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 00:44:23.504365 kubelet[1573]: I0514 00:44:23.504340 1573 server.go:460] "Adding debug handlers to kubelet server"
May 14 00:44:23.504436 kubelet[1573]: I0514 00:44:23.504342 1573 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 00:44:23.504597 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 14 00:44:23.504656 kubelet[1573]: I0514 00:44:23.504609 1573 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 00:44:23.504748 kubelet[1573]: I0514 00:44:23.504716 1573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 00:44:23.505402 kubelet[1573]: I0514 00:44:23.505339 1573 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 00:44:23.505612 kubelet[1573]: E0514 00:44:23.504191 1573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.78:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.78:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3e1a63760d27 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:44:23.501794599 +0000 UTC m=+1.398857396,LastTimestamp:2025-05-14 00:44:23.501794599 +0000 UTC m=+1.398857396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 14 00:44:23.505912 kubelet[1573]: E0514 00:44:23.505894 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 00:44:23.506062 kubelet[1573]: I0514 00:44:23.506051 1573 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 14 00:44:23.506139 kubelet[1573]: E0514 00:44:23.506108 1573 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 00:44:23.506352 kubelet[1573]: I0514 00:44:23.506330 1573 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 14 00:44:23.506503 kubelet[1573]: I0514 00:44:23.506490 1573 reconciler.go:26] "Reconciler: start to sync state"
May 14 00:44:23.506781 kubelet[1573]: I0514 00:44:23.506755 1573 factory.go:221] Registration of the systemd container factory successfully
May 14 00:44:23.506866 kubelet[1573]: I0514 00:44:23.506846 1573 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 00:44:23.507085 kubelet[1573]: W0514 00:44:23.507049 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused
May 14 00:44:23.507214 kubelet[1573]: E0514 00:44:23.507193 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
May 14 00:44:23.507423 kubelet[1573]: E0514 00:44:23.507393 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="200ms"
May 14 00:44:23.508390 kubelet[1573]: I0514 00:44:23.508367 1573 factory.go:221] Registration of the containerd container factory successfully
May 14 00:44:23.518183 kubelet[1573]: I0514 00:44:23.518132 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 00:44:23.519434 kubelet[1573]: I0514 00:44:23.519392 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 00:44:23.519575 kubelet[1573]: I0514 00:44:23.519563 1573 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 00:44:23.519660 kubelet[1573]: I0514 00:44:23.519650 1573 kubelet.go:2321] "Starting kubelet main sync loop"
May 14 00:44:23.519790 kubelet[1573]: E0514 00:44:23.519768 1573 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 00:44:23.520610 kubelet[1573]: W0514 00:44:23.520565 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused
May 14 00:44:23.520793 kubelet[1573]: E0514 00:44:23.520769 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
May 14 00:44:23.520880 kubelet[1573]: I0514 00:44:23.520606 1573 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 00:44:23.520939 kubelet[1573]: I0514 00:44:23.520928 1573 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 00:44:23.520995 kubelet[1573]: I0514 00:44:23.520986 1573 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:44:23.606162 kubelet[1573]: E0514 00:44:23.606124 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 00:44:23.620145 kubelet[1573]: E0514 00:44:23.620121 1573 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 14 00:44:23.636695 kubelet[1573]: I0514 00:44:23.636671 1573 policy_none.go:49] "None policy: Start"
May 14 00:44:23.637535 kubelet[1573]: I0514 00:44:23.637511 1573 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 00:44:23.637614 kubelet[1573]: I0514 00:44:23.637546 1573 state_mem.go:35] "Initializing new in-memory state store"
May 14 00:44:23.645284 systemd[1]: Created slice kubepods.slice.
May 14 00:44:23.649648 systemd[1]: Created slice kubepods-burstable.slice.
May 14 00:44:23.652226 systemd[1]: Created slice kubepods-besteffort.slice.
May 14 00:44:23.659394 kubelet[1573]: I0514 00:44:23.659352 1573 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 00:44:23.659576 kubelet[1573]: I0514 00:44:23.659501 1573 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 00:44:23.659576 kubelet[1573]: I0514 00:44:23.659518 1573 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 00:44:23.659767 kubelet[1573]: I0514 00:44:23.659747 1573 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 00:44:23.661093 kubelet[1573]: E0514 00:44:23.661041 1573 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 14 00:44:23.708781 kubelet[1573]: E0514 00:44:23.708722 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="400ms"
May 14 00:44:23.761035 kubelet[1573]: I0514 00:44:23.761009 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 00:44:23.761543 kubelet[1573]: E0514 00:44:23.761515 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost"
May 14 00:44:23.829667 systemd[1]: Created slice kubepods-burstable-pod3946f030beebab0ab0c61776b66525df.slice.
May 14 00:44:23.839564 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 14 00:44:23.842615 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 14 00:44:23.907966 kubelet[1573]: I0514 00:44:23.907926 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3946f030beebab0ab0c61776b66525df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3946f030beebab0ab0c61776b66525df\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:44:23.907966 kubelet[1573]: I0514 00:44:23.907968 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3946f030beebab0ab0c61776b66525df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3946f030beebab0ab0c61776b66525df\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:44:23.908112 kubelet[1573]: I0514 00:44:23.907998 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:44:23.908112 kubelet[1573]: I0514 00:44:23.908016 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3946f030beebab0ab0c61776b66525df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3946f030beebab0ab0c61776b66525df\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:44:23.908112 kubelet[1573]: I0514 00:44:23.908030 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:44:23.908112 kubelet[1573]: I0514 00:44:23.908043 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:44:23.908112 kubelet[1573]: I0514 00:44:23.908086 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:44:23.908227 kubelet[1573]: I0514 00:44:23.908105 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 14 00:44:23.908227 kubelet[1573]: I0514 00:44:23.908119 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:44:23.963084 kubelet[1573]: I0514 00:44:23.962995 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 00:44:23.963613 kubelet[1573]: E0514 00:44:23.963572 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost"
May 14 00:44:24.109999 kubelet[1573]: E0514 00:44:24.109962 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="800ms"
May 14 00:44:24.138211 kubelet[1573]: E0514 00:44:24.138180 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:24.138944 env[1215]: time="2025-05-14T00:44:24.138902585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3946f030beebab0ab0c61776b66525df,Namespace:kube-system,Attempt:0,}"
May 14 00:44:24.141747 kubelet[1573]: E0514 00:44:24.141717 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:24.142471 env[1215]: time="2025-05-14T00:44:24.142224861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 14 00:44:24.145034 kubelet[1573]: E0514 00:44:24.144833 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:24.145220 env[1215]: time="2025-05-14T00:44:24.145158979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 14 00:44:24.365380 kubelet[1573]: I0514 00:44:24.365294 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 00:44:24.365594 kubelet[1573]: E0514 00:44:24.365573 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost"
May 14 00:44:24.387141 kubelet[1573]: W0514 00:44:24.387070 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused
May 14 00:44:24.387141 kubelet[1573]: E0514 00:44:24.387136 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
May 14 00:44:24.421061 kubelet[1573]: W0514 00:44:24.420999 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused
May 14 00:44:24.421112 kubelet[1573]: E0514 00:44:24.421067 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
May 14 00:44:24.482064 kubelet[1573]: W0514 00:44:24.482032 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused
May 14 00:44:24.482416 kubelet[1573]: E0514 00:44:24.482393 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
May 14 00:44:24.688941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848902582.mount: Deactivated successfully.
May 14 00:44:24.693207 env[1215]: time="2025-05-14T00:44:24.693110016Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.694936 env[1215]: time="2025-05-14T00:44:24.694906125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.695822 env[1215]: time="2025-05-14T00:44:24.695794290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.696675 env[1215]: time="2025-05-14T00:44:24.696649670Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.697890 env[1215]: time="2025-05-14T00:44:24.697862221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.699062 env[1215]: time="2025-05-14T00:44:24.699012441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.700949 env[1215]: time="2025-05-14T00:44:24.700898182Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.702675 env[1215]: time="2025-05-14T00:44:24.702353011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 14 00:44:24.704675 env[1215]: time="2025-05-14T00:44:24.704645005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.705931 env[1215]: time="2025-05-14T00:44:24.705902232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.706690 env[1215]: time="2025-05-14T00:44:24.706599442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.708144 env[1215]: time="2025-05-14T00:44:24.708116082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:24.739083 env[1215]: time="2025-05-14T00:44:24.739011455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:44:24.739083 env[1215]: time="2025-05-14T00:44:24.739051728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:44:24.739261 env[1215]: time="2025-05-14T00:44:24.739061856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:44:24.739333 env[1215]: time="2025-05-14T00:44:24.739280114Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e341fa238bc2de1b1e15fd6a463c90a734f6da3f31cae46e9f986159624c734 pid=1627 runtime=io.containerd.runc.v2 May 14 00:44:24.739928 env[1215]: time="2025-05-14T00:44:24.739874560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:44:24.739928 env[1215]: time="2025-05-14T00:44:24.739908348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:44:24.740030 env[1215]: time="2025-05-14T00:44:24.739918596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:44:24.740085 env[1215]: time="2025-05-14T00:44:24.740048903Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee88335d3b5d1c3a71ed05965d7053ec30618bbd564e6d340a5b44e056481343 pid=1628 runtime=io.containerd.runc.v2 May 14 00:44:24.742218 env[1215]: time="2025-05-14T00:44:24.742135929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:44:24.742218 env[1215]: time="2025-05-14T00:44:24.742171998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:44:24.742218 env[1215]: time="2025-05-14T00:44:24.742181646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:44:24.742467 env[1215]: time="2025-05-14T00:44:24.742418520Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/701acde5df5aeebbada0a609bb2353158b9a8a2c0e737d77b9177a49870014ac pid=1638 runtime=io.containerd.runc.v2 May 14 00:44:24.754904 systemd[1]: Started cri-containerd-ee88335d3b5d1c3a71ed05965d7053ec30618bbd564e6d340a5b44e056481343.scope. May 14 00:44:24.758429 systemd[1]: Started cri-containerd-701acde5df5aeebbada0a609bb2353158b9a8a2c0e737d77b9177a49870014ac.scope. May 14 00:44:24.761445 systemd[1]: Started cri-containerd-7e341fa238bc2de1b1e15fd6a463c90a734f6da3f31cae46e9f986159624c734.scope. May 14 00:44:24.830017 env[1215]: time="2025-05-14T00:44:24.829970001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee88335d3b5d1c3a71ed05965d7053ec30618bbd564e6d340a5b44e056481343\"" May 14 00:44:24.831068 env[1215]: time="2025-05-14T00:44:24.831029147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3946f030beebab0ab0c61776b66525df,Namespace:kube-system,Attempt:0,} returns sandbox id \"701acde5df5aeebbada0a609bb2353158b9a8a2c0e737d77b9177a49870014ac\"" May 14 00:44:24.831767 kubelet[1573]: E0514 00:44:24.831704 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:24.832326 kubelet[1573]: E0514 00:44:24.832193 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:24.833576 env[1215]: time="2025-05-14T00:44:24.833223301Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e341fa238bc2de1b1e15fd6a463c90a734f6da3f31cae46e9f986159624c734\"" May 14 00:44:24.833944 env[1215]: time="2025-05-14T00:44:24.833899253Z" level=info msg="CreateContainer within sandbox \"ee88335d3b5d1c3a71ed05965d7053ec30618bbd564e6d340a5b44e056481343\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:44:24.834508 env[1215]: time="2025-05-14T00:44:24.834477406Z" level=info msg="CreateContainer within sandbox \"701acde5df5aeebbada0a609bb2353158b9a8a2c0e737d77b9177a49870014ac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:44:24.835603 kubelet[1573]: E0514 00:44:24.835579 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:24.837215 env[1215]: time="2025-05-14T00:44:24.837171568Z" level=info msg="CreateContainer within sandbox \"7e341fa238bc2de1b1e15fd6a463c90a734f6da3f31cae46e9f986159624c734\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:44:24.854618 env[1215]: time="2025-05-14T00:44:24.854568868Z" level=info msg="CreateContainer within sandbox \"ee88335d3b5d1c3a71ed05965d7053ec30618bbd564e6d340a5b44e056481343\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"abd7282e94576fbf90bcb7db9f5c7ef38e0d9fd140140eccae9d6a53cdf7df93\"" May 14 00:44:24.855667 env[1215]: time="2025-05-14T00:44:24.855636500Z" level=info msg="StartContainer for \"abd7282e94576fbf90bcb7db9f5c7ef38e0d9fd140140eccae9d6a53cdf7df93\"" May 14 00:44:24.855918 env[1215]: time="2025-05-14T00:44:24.855885984Z" level=info msg="CreateContainer within sandbox \"701acde5df5aeebbada0a609bb2353158b9a8a2c0e737d77b9177a49870014ac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container 
id \"a0106984e472d44f80eff67ae806a46988f1f0cfd65dbfb7e0c9b8f3b8a08042\"" May 14 00:44:24.857568 env[1215]: time="2025-05-14T00:44:24.857124557Z" level=info msg="CreateContainer within sandbox \"7e341fa238bc2de1b1e15fd6a463c90a734f6da3f31cae46e9f986159624c734\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6a09df887eef494e00aa1d6ca16c90e214fbe8af5bd24edb46d3693fe94a246f\"" May 14 00:44:24.858035 env[1215]: time="2025-05-14T00:44:24.858004996Z" level=info msg="StartContainer for \"6a09df887eef494e00aa1d6ca16c90e214fbe8af5bd24edb46d3693fe94a246f\"" May 14 00:44:24.858168 env[1215]: time="2025-05-14T00:44:24.858143710Z" level=info msg="StartContainer for \"a0106984e472d44f80eff67ae806a46988f1f0cfd65dbfb7e0c9b8f3b8a08042\"" May 14 00:44:24.872202 systemd[1]: Started cri-containerd-abd7282e94576fbf90bcb7db9f5c7ef38e0d9fd140140eccae9d6a53cdf7df93.scope. May 14 00:44:24.875165 systemd[1]: Started cri-containerd-a0106984e472d44f80eff67ae806a46988f1f0cfd65dbfb7e0c9b8f3b8a08042.scope. May 14 00:44:24.911820 kubelet[1573]: E0514 00:44:24.910718 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="1.6s" May 14 00:44:24.912578 systemd[1]: Started cri-containerd-6a09df887eef494e00aa1d6ca16c90e214fbe8af5bd24edb46d3693fe94a246f.scope. 
May 14 00:44:24.974199 env[1215]: time="2025-05-14T00:44:24.973702884Z" level=info msg="StartContainer for \"abd7282e94576fbf90bcb7db9f5c7ef38e0d9fd140140eccae9d6a53cdf7df93\" returns successfully" May 14 00:44:24.993646 env[1215]: time="2025-05-14T00:44:24.993482611Z" level=info msg="StartContainer for \"6a09df887eef494e00aa1d6ca16c90e214fbe8af5bd24edb46d3693fe94a246f\" returns successfully" May 14 00:44:25.010415 env[1215]: time="2025-05-14T00:44:25.005399839Z" level=info msg="StartContainer for \"a0106984e472d44f80eff67ae806a46988f1f0cfd65dbfb7e0c9b8f3b8a08042\" returns successfully" May 14 00:44:25.021496 kubelet[1573]: W0514 00:44:25.021396 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused May 14 00:44:25.021496 kubelet[1573]: E0514 00:44:25.021462 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" May 14 00:44:25.167259 kubelet[1573]: I0514 00:44:25.167223 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:44:25.525634 kubelet[1573]: E0514 00:44:25.525599 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:25.527596 kubelet[1573]: E0514 00:44:25.527574 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:25.529442 kubelet[1573]: E0514 00:44:25.529421 1573 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:26.516320 kubelet[1573]: I0514 00:44:26.516243 1573 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 00:44:26.516320 kubelet[1573]: E0514 00:44:26.516286 1573 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 00:44:26.530841 kubelet[1573]: E0514 00:44:26.530802 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:44:26.531184 kubelet[1573]: E0514 00:44:26.531160 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:26.631598 kubelet[1573]: E0514 00:44:26.631541 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:44:26.732158 kubelet[1573]: E0514 00:44:26.732125 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:44:26.832985 kubelet[1573]: E0514 00:44:26.832887 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:44:27.483312 kubelet[1573]: I0514 00:44:27.483261 1573 apiserver.go:52] "Watching apiserver" May 14 00:44:27.507586 kubelet[1573]: I0514 00:44:27.507540 1573 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 00:44:27.543963 kubelet[1573]: E0514 00:44:27.543922 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:28.113901 kubelet[1573]: E0514 
00:44:28.113856 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:28.533847 kubelet[1573]: E0514 00:44:28.533821 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:28.533993 kubelet[1573]: E0514 00:44:28.533879 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:28.563978 systemd[1]: Reloading. May 14 00:44:28.602279 /usr/lib/systemd/system-generators/torcx-generator[1870]: time="2025-05-14T00:44:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:44:28.602689 /usr/lib/systemd/system-generators/torcx-generator[1870]: time="2025-05-14T00:44:28Z" level=info msg="torcx already run" May 14 00:44:28.663592 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:44:28.663622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:44:28.688361 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 14 00:44:28.784333 kubelet[1573]: I0514 00:44:28.784246 1573 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:44:28.785879 systemd[1]: Stopping kubelet.service... May 14 00:44:28.803138 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:44:28.803319 systemd[1]: Stopped kubelet.service. May 14 00:44:28.803365 systemd[1]: kubelet.service: Consumed 1.722s CPU time. May 14 00:44:28.804971 systemd[1]: Starting kubelet.service... May 14 00:44:28.892656 systemd[1]: Started kubelet.service. May 14 00:44:28.929359 kubelet[1912]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:44:28.929359 kubelet[1912]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:44:28.929359 kubelet[1912]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:44:28.929762 kubelet[1912]: I0514 00:44:28.929431 1912 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:44:28.935165 kubelet[1912]: I0514 00:44:28.935119 1912 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 00:44:28.935165 kubelet[1912]: I0514 00:44:28.935151 1912 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:44:28.935443 kubelet[1912]: I0514 00:44:28.935347 1912 server.go:929] "Client rotation is on, will bootstrap in background" May 14 00:44:28.936637 kubelet[1912]: I0514 00:44:28.936611 1912 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:44:28.938860 kubelet[1912]: I0514 00:44:28.938828 1912 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:44:28.941812 kubelet[1912]: E0514 00:44:28.941784 1912 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 00:44:28.941915 kubelet[1912]: I0514 00:44:28.941900 1912 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 00:44:28.944530 kubelet[1912]: I0514 00:44:28.944504 1912 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:44:28.944721 kubelet[1912]: I0514 00:44:28.944707 1912 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 00:44:28.945329 kubelet[1912]: I0514 00:44:28.944917 1912 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:44:28.945594 kubelet[1912]: I0514 00:44:28.945427 1912 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 14 00:44:28.945690 kubelet[1912]: I0514 00:44:28.945604 1912 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:44:28.945690 kubelet[1912]: I0514 00:44:28.945613 1912 container_manager_linux.go:300] "Creating device plugin manager" May 14 00:44:28.945690 kubelet[1912]: I0514 00:44:28.945649 1912 state_mem.go:36] "Initialized new in-memory state store" May 14 00:44:28.945821 kubelet[1912]: I0514 00:44:28.945765 1912 kubelet.go:408] "Attempting to sync node with API server" May 14 00:44:28.945821 kubelet[1912]: I0514 00:44:28.945776 1912 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:44:28.945821 kubelet[1912]: I0514 00:44:28.945795 1912 kubelet.go:314] "Adding apiserver pod source" May 14 00:44:28.945821 kubelet[1912]: I0514 00:44:28.945804 1912 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:44:28.948911 kubelet[1912]: I0514 00:44:28.948879 1912 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:44:28.954203 kubelet[1912]: I0514 00:44:28.949915 1912 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:44:28.954203 kubelet[1912]: I0514 00:44:28.950390 1912 server.go:1269] "Started kubelet" May 14 00:44:28.954203 kubelet[1912]: I0514 00:44:28.950946 1912 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:44:28.954203 kubelet[1912]: I0514 00:44:28.951032 1912 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:44:28.954203 kubelet[1912]: I0514 00:44:28.951345 1912 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:44:28.954203 kubelet[1912]: I0514 00:44:28.951763 1912 server.go:460] "Adding debug handlers to kubelet server" May 14 00:44:28.960141 
kubelet[1912]: I0514 00:44:28.958640 1912 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:44:28.965813 kubelet[1912]: I0514 00:44:28.963991 1912 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:44:28.966640 kubelet[1912]: E0514 00:44:28.966619 1912 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:44:28.969162 kubelet[1912]: I0514 00:44:28.967446 1912 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 00:44:28.969162 kubelet[1912]: I0514 00:44:28.967622 1912 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 00:44:28.969162 kubelet[1912]: I0514 00:44:28.967786 1912 reconciler.go:26] "Reconciler: start to sync state" May 14 00:44:28.970441 kubelet[1912]: I0514 00:44:28.970373 1912 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:44:28.973451 kubelet[1912]: I0514 00:44:28.973246 1912 factory.go:221] Registration of the containerd container factory successfully May 14 00:44:28.973451 kubelet[1912]: I0514 00:44:28.973268 1912 factory.go:221] Registration of the systemd container factory successfully May 14 00:44:28.997046 kubelet[1912]: I0514 00:44:28.996987 1912 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:44:29.003720 kubelet[1912]: I0514 00:44:29.003535 1912 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:44:29.003720 kubelet[1912]: I0514 00:44:29.003565 1912 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:44:29.003720 kubelet[1912]: I0514 00:44:29.003582 1912 kubelet.go:2321] "Starting kubelet main sync loop" May 14 00:44:29.003720 kubelet[1912]: E0514 00:44:29.003648 1912 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:44:29.023598 kubelet[1912]: I0514 00:44:29.023575 1912 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:44:29.023792 kubelet[1912]: I0514 00:44:29.023775 1912 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:44:29.023865 kubelet[1912]: I0514 00:44:29.023855 1912 state_mem.go:36] "Initialized new in-memory state store" May 14 00:44:29.024091 kubelet[1912]: I0514 00:44:29.024071 1912 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:44:29.024177 kubelet[1912]: I0514 00:44:29.024152 1912 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:44:29.024227 kubelet[1912]: I0514 00:44:29.024218 1912 policy_none.go:49] "None policy: Start" May 14 00:44:29.024985 kubelet[1912]: I0514 00:44:29.024966 1912 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:44:29.025056 kubelet[1912]: I0514 00:44:29.024993 1912 state_mem.go:35] "Initializing new in-memory state store" May 14 00:44:29.025196 kubelet[1912]: I0514 00:44:29.025178 1912 state_mem.go:75] "Updated machine memory state" May 14 00:44:29.028866 kubelet[1912]: I0514 00:44:29.028841 1912 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:44:29.029133 kubelet[1912]: I0514 00:44:29.029113 1912 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:44:29.029621 kubelet[1912]: I0514 00:44:29.029477 1912 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:44:29.031756 kubelet[1912]: I0514 00:44:29.031290 1912 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:44:29.111380 kubelet[1912]: E0514 00:44:29.111254 1912 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 00:44:29.112940 kubelet[1912]: E0514 00:44:29.112918 1912 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 00:44:29.137414 kubelet[1912]: I0514 00:44:29.137385 1912 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:44:29.145980 kubelet[1912]: I0514 00:44:29.145958 1912 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 00:44:29.146158 kubelet[1912]: I0514 00:44:29.146139 1912 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 00:44:29.168767 kubelet[1912]: I0514 00:44:29.168664 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 00:44:29.168767 kubelet[1912]: I0514 00:44:29.168742 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3946f030beebab0ab0c61776b66525df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3946f030beebab0ab0c61776b66525df\") " pod="kube-system/kube-apiserver-localhost" May 14 00:44:29.169165 kubelet[1912]: I0514 00:44:29.168967 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:44:29.169165 kubelet[1912]: I0514 00:44:29.168996 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:44:29.169165 kubelet[1912]: I0514 00:44:29.169015 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:44:29.169165 kubelet[1912]: I0514 00:44:29.169041 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3946f030beebab0ab0c61776b66525df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3946f030beebab0ab0c61776b66525df\") " pod="kube-system/kube-apiserver-localhost" May 14 00:44:29.169165 kubelet[1912]: I0514 00:44:29.169063 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3946f030beebab0ab0c61776b66525df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3946f030beebab0ab0c61776b66525df\") " pod="kube-system/kube-apiserver-localhost" May 14 00:44:29.169317 kubelet[1912]: I0514 00:44:29.169077 1912 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:44:29.169317 kubelet[1912]: I0514 00:44:29.169106 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:44:29.412067 kubelet[1912]: E0514 00:44:29.411965 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:29.412258 kubelet[1912]: E0514 00:44:29.412237 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:29.413284 kubelet[1912]: E0514 00:44:29.413256 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:29.946590 kubelet[1912]: I0514 00:44:29.946556 1912 apiserver.go:52] "Watching apiserver" May 14 00:44:29.968709 kubelet[1912]: I0514 00:44:29.968665 1912 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 00:44:30.018022 kubelet[1912]: E0514 00:44:30.017992 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:30.018129 kubelet[1912]: E0514 00:44:30.018058 1912 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:30.018335 kubelet[1912]: E0514 00:44:30.018318 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:30.081586 kubelet[1912]: I0514 00:44:30.081245 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.081229479 podStartE2EDuration="3.081229479s" podCreationTimestamp="2025-05-14 00:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:44:30.047657918 +0000 UTC m=+1.151586512" watchObservedRunningTime="2025-05-14 00:44:30.081229479 +0000 UTC m=+1.185158073" May 14 00:44:30.081586 kubelet[1912]: I0514 00:44:30.081385 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.081379494 podStartE2EDuration="1.081379494s" podCreationTimestamp="2025-05-14 00:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:44:30.081087027 +0000 UTC m=+1.185015621" watchObservedRunningTime="2025-05-14 00:44:30.081379494 +0000 UTC m=+1.185308088" May 14 00:44:30.091122 kubelet[1912]: I0514 00:44:30.091063 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.091049163 podStartE2EDuration="2.091049163s" podCreationTimestamp="2025-05-14 00:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:44:30.090482475 +0000 UTC 
m=+1.194411069" watchObservedRunningTime="2025-05-14 00:44:30.091049163 +0000 UTC m=+1.194977757" May 14 00:44:30.657163 sudo[1317]: pam_unix(sudo:session): session closed for user root May 14 00:44:30.658914 sshd[1314]: pam_unix(sshd:session): session closed for user core May 14 00:44:30.661765 systemd[1]: sshd@4-10.0.0.78:22-10.0.0.1:45924.service: Deactivated successfully. May 14 00:44:30.663142 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:44:30.663301 systemd[1]: session-5.scope: Consumed 5.555s CPU time. May 14 00:44:30.663998 systemd-logind[1205]: Session 5 logged out. Waiting for processes to exit. May 14 00:44:30.664804 systemd-logind[1205]: Removed session 5. May 14 00:44:31.019216 kubelet[1912]: E0514 00:44:31.019188 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:32.394046 kubelet[1912]: E0514 00:44:32.392697 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:33.156791 kubelet[1912]: E0514 00:44:33.156763 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:33.258470 kubelet[1912]: I0514 00:44:33.258418 1912 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:44:33.258881 env[1215]: time="2025-05-14T00:44:33.258841637Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 00:44:33.259346 kubelet[1912]: I0514 00:44:33.259325 1912 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:44:34.292485 systemd[1]: Created slice kubepods-besteffort-pod886a0400_0f34_42f0_b37b_b6fc68791b9c.slice. May 14 00:44:34.302932 systemd[1]: Created slice kubepods-burstable-pod82d355ed_0ff1_463f_8912_1b5b7c458f7e.slice. May 14 00:44:34.306866 kubelet[1912]: I0514 00:44:34.306835 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82d355ed-0ff1-463f-8912-1b5b7c458f7e-xtables-lock\") pod \"kube-flannel-ds-2c2dd\" (UID: \"82d355ed-0ff1-463f-8912-1b5b7c458f7e\") " pod="kube-flannel/kube-flannel-ds-2c2dd" May 14 00:44:34.307218 kubelet[1912]: I0514 00:44:34.307198 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/886a0400-0f34-42f0-b37b-b6fc68791b9c-lib-modules\") pod \"kube-proxy-rstxz\" (UID: \"886a0400-0f34-42f0-b37b-b6fc68791b9c\") " pod="kube-system/kube-proxy-rstxz" May 14 00:44:34.307318 kubelet[1912]: I0514 00:44:34.307301 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdmxc\" (UniqueName: \"kubernetes.io/projected/886a0400-0f34-42f0-b37b-b6fc68791b9c-kube-api-access-hdmxc\") pod \"kube-proxy-rstxz\" (UID: \"886a0400-0f34-42f0-b37b-b6fc68791b9c\") " pod="kube-system/kube-proxy-rstxz" May 14 00:44:34.307404 kubelet[1912]: I0514 00:44:34.307389 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/82d355ed-0ff1-463f-8912-1b5b7c458f7e-cni-plugin\") pod \"kube-flannel-ds-2c2dd\" (UID: \"82d355ed-0ff1-463f-8912-1b5b7c458f7e\") " pod="kube-flannel/kube-flannel-ds-2c2dd" May 14 00:44:34.307480 kubelet[1912]: I0514 00:44:34.307465 
1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/886a0400-0f34-42f0-b37b-b6fc68791b9c-xtables-lock\") pod \"kube-proxy-rstxz\" (UID: \"886a0400-0f34-42f0-b37b-b6fc68791b9c\") " pod="kube-system/kube-proxy-rstxz" May 14 00:44:34.307576 kubelet[1912]: I0514 00:44:34.307561 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/82d355ed-0ff1-463f-8912-1b5b7c458f7e-cni\") pod \"kube-flannel-ds-2c2dd\" (UID: \"82d355ed-0ff1-463f-8912-1b5b7c458f7e\") " pod="kube-flannel/kube-flannel-ds-2c2dd" May 14 00:44:34.307652 kubelet[1912]: I0514 00:44:34.307638 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjshx\" (UniqueName: \"kubernetes.io/projected/82d355ed-0ff1-463f-8912-1b5b7c458f7e-kube-api-access-xjshx\") pod \"kube-flannel-ds-2c2dd\" (UID: \"82d355ed-0ff1-463f-8912-1b5b7c458f7e\") " pod="kube-flannel/kube-flannel-ds-2c2dd" May 14 00:44:34.307766 kubelet[1912]: I0514 00:44:34.307719 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/886a0400-0f34-42f0-b37b-b6fc68791b9c-kube-proxy\") pod \"kube-proxy-rstxz\" (UID: \"886a0400-0f34-42f0-b37b-b6fc68791b9c\") " pod="kube-system/kube-proxy-rstxz" May 14 00:44:34.307887 kubelet[1912]: I0514 00:44:34.307853 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/82d355ed-0ff1-463f-8912-1b5b7c458f7e-flannel-cfg\") pod \"kube-flannel-ds-2c2dd\" (UID: \"82d355ed-0ff1-463f-8912-1b5b7c458f7e\") " pod="kube-flannel/kube-flannel-ds-2c2dd" May 14 00:44:34.307947 kubelet[1912]: I0514 00:44:34.307929 1912 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/82d355ed-0ff1-463f-8912-1b5b7c458f7e-run\") pod \"kube-flannel-ds-2c2dd\" (UID: \"82d355ed-0ff1-463f-8912-1b5b7c458f7e\") " pod="kube-flannel/kube-flannel-ds-2c2dd" May 14 00:44:34.420078 kubelet[1912]: I0514 00:44:34.420044 1912 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 14 00:44:34.601290 kubelet[1912]: E0514 00:44:34.601159 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:34.602457 env[1215]: time="2025-05-14T00:44:34.602360469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rstxz,Uid:886a0400-0f34-42f0-b37b-b6fc68791b9c,Namespace:kube-system,Attempt:0,}" May 14 00:44:34.605895 kubelet[1912]: E0514 00:44:34.605874 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:34.606660 env[1215]: time="2025-05-14T00:44:34.606608960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2c2dd,Uid:82d355ed-0ff1-463f-8912-1b5b7c458f7e,Namespace:kube-flannel,Attempt:0,}" May 14 00:44:34.631434 env[1215]: time="2025-05-14T00:44:34.631363418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:44:34.631434 env[1215]: time="2025-05-14T00:44:34.631410869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:44:34.631434 env[1215]: time="2025-05-14T00:44:34.631421351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:44:34.631607 env[1215]: time="2025-05-14T00:44:34.631557142Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1d7c7ca06c0701bda07deb17d45a4d8c9f14b2b09d39a6a17ecaf560417c916 pid=1988 runtime=io.containerd.runc.v2 May 14 00:44:34.632373 env[1215]: time="2025-05-14T00:44:34.632324397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:44:34.632468 env[1215]: time="2025-05-14T00:44:34.632358525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:44:34.632539 env[1215]: time="2025-05-14T00:44:34.632457028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:44:34.635116 env[1215]: time="2025-05-14T00:44:34.634447683Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37760d70ad4ef320ce003ef2eb9f9274931008e63cb14798c512ac183d1fb2a9 pid=1995 runtime=io.containerd.runc.v2 May 14 00:44:34.645890 systemd[1]: Started cri-containerd-37760d70ad4ef320ce003ef2eb9f9274931008e63cb14798c512ac183d1fb2a9.scope. May 14 00:44:34.646775 systemd[1]: Started cri-containerd-e1d7c7ca06c0701bda07deb17d45a4d8c9f14b2b09d39a6a17ecaf560417c916.scope. 
May 14 00:44:34.687953 env[1215]: time="2025-05-14T00:44:34.687903540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rstxz,Uid:886a0400-0f34-42f0-b37b-b6fc68791b9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"37760d70ad4ef320ce003ef2eb9f9274931008e63cb14798c512ac183d1fb2a9\"" May 14 00:44:34.689553 kubelet[1912]: E0514 00:44:34.688968 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:34.694225 env[1215]: time="2025-05-14T00:44:34.692489909Z" level=info msg="CreateContainer within sandbox \"37760d70ad4ef320ce003ef2eb9f9274931008e63cb14798c512ac183d1fb2a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:44:34.707491 env[1215]: time="2025-05-14T00:44:34.707436765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2c2dd,Uid:82d355ed-0ff1-463f-8912-1b5b7c458f7e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"e1d7c7ca06c0701bda07deb17d45a4d8c9f14b2b09d39a6a17ecaf560417c916\"" May 14 00:44:34.708629 kubelet[1912]: E0514 00:44:34.708214 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:34.710990 env[1215]: time="2025-05-14T00:44:34.710932324Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 14 00:44:34.712054 env[1215]: time="2025-05-14T00:44:34.711940474Z" level=info msg="CreateContainer within sandbox \"37760d70ad4ef320ce003ef2eb9f9274931008e63cb14798c512ac183d1fb2a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c39086516d9a9e1bf4b739883dbd2be487d4850064dab7424483ad7e5ac9acf7\"" May 14 00:44:34.712784 env[1215]: time="2025-05-14T00:44:34.712672442Z" level=info msg="StartContainer for 
\"c39086516d9a9e1bf4b739883dbd2be487d4850064dab7424483ad7e5ac9acf7\"" May 14 00:44:34.735886 systemd[1]: Started cri-containerd-c39086516d9a9e1bf4b739883dbd2be487d4850064dab7424483ad7e5ac9acf7.scope. May 14 00:44:34.772655 env[1215]: time="2025-05-14T00:44:34.772600779Z" level=info msg="StartContainer for \"c39086516d9a9e1bf4b739883dbd2be487d4850064dab7424483ad7e5ac9acf7\" returns successfully" May 14 00:44:35.028067 kubelet[1912]: E0514 00:44:35.028034 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:35.946339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193901296.mount: Deactivated successfully. May 14 00:44:35.988186 env[1215]: time="2025-05-14T00:44:35.988138252Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:35.989639 env[1215]: time="2025-05-14T00:44:35.989610450Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:35.990788 env[1215]: time="2025-05-14T00:44:35.990764779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:35.992138 env[1215]: time="2025-05-14T00:44:35.992105989Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:35.992570 env[1215]: time="2025-05-14T00:44:35.992543964Z" level=info msg="PullImage 
\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 14 00:44:35.994850 env[1215]: time="2025-05-14T00:44:35.994820896Z" level=info msg="CreateContainer within sandbox \"e1d7c7ca06c0701bda07deb17d45a4d8c9f14b2b09d39a6a17ecaf560417c916\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 14 00:44:36.006015 env[1215]: time="2025-05-14T00:44:36.005974364Z" level=info msg="CreateContainer within sandbox \"e1d7c7ca06c0701bda07deb17d45a4d8c9f14b2b09d39a6a17ecaf560417c916\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ca237d74e26cfb72fc27d7a857f43e8df68593017a995319657e71b31f48e6e8\"" May 14 00:44:36.006454 env[1215]: time="2025-05-14T00:44:36.006402611Z" level=info msg="StartContainer for \"ca237d74e26cfb72fc27d7a857f43e8df68593017a995319657e71b31f48e6e8\"" May 14 00:44:36.020054 systemd[1]: Started cri-containerd-ca237d74e26cfb72fc27d7a857f43e8df68593017a995319657e71b31f48e6e8.scope. May 14 00:44:36.059266 systemd[1]: cri-containerd-ca237d74e26cfb72fc27d7a857f43e8df68593017a995319657e71b31f48e6e8.scope: Deactivated successfully. 
May 14 00:44:36.061948 env[1215]: time="2025-05-14T00:44:36.061903192Z" level=info msg="StartContainer for \"ca237d74e26cfb72fc27d7a857f43e8df68593017a995319657e71b31f48e6e8\" returns successfully" May 14 00:44:36.097593 env[1215]: time="2025-05-14T00:44:36.097547235Z" level=info msg="shim disconnected" id=ca237d74e26cfb72fc27d7a857f43e8df68593017a995319657e71b31f48e6e8 May 14 00:44:36.097593 env[1215]: time="2025-05-14T00:44:36.097594165Z" level=warning msg="cleaning up after shim disconnected" id=ca237d74e26cfb72fc27d7a857f43e8df68593017a995319657e71b31f48e6e8 namespace=k8s.io May 14 00:44:36.097875 env[1215]: time="2025-05-14T00:44:36.097603327Z" level=info msg="cleaning up dead shim" May 14 00:44:36.103861 env[1215]: time="2025-05-14T00:44:36.103827518Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:44:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2267 runtime=io.containerd.runc.v2\n" May 14 00:44:37.036890 kubelet[1912]: E0514 00:44:37.036827 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:37.037807 env[1215]: time="2025-05-14T00:44:37.037750126Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 14 00:44:37.047267 kubelet[1912]: I0514 00:44:37.047204 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rstxz" podStartSLOduration=3.04718763 podStartE2EDuration="3.04718763s" podCreationTimestamp="2025-05-14 00:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:44:35.036959426 +0000 UTC m=+6.140888060" watchObservedRunningTime="2025-05-14 00:44:37.04718763 +0000 UTC m=+8.151116224" May 14 00:44:38.212276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298316916.mount: Deactivated successfully. 
May 14 00:44:38.387691 kubelet[1912]: E0514 00:44:38.387640 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:38.896616 env[1215]: time="2025-05-14T00:44:38.896571105Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:38.898106 env[1215]: time="2025-05-14T00:44:38.898068339Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:38.900004 env[1215]: time="2025-05-14T00:44:38.899976128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:38.901663 env[1215]: time="2025-05-14T00:44:38.901637352Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:44:38.903403 env[1215]: time="2025-05-14T00:44:38.903368469Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 14 00:44:38.906494 env[1215]: time="2025-05-14T00:44:38.906460235Z" level=info msg="CreateContainer within sandbox \"e1d7c7ca06c0701bda07deb17d45a4d8c9f14b2b09d39a6a17ecaf560417c916\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 00:44:38.917008 env[1215]: time="2025-05-14T00:44:38.916962117Z" level=info msg="CreateContainer within sandbox \"e1d7c7ca06c0701bda07deb17d45a4d8c9f14b2b09d39a6a17ecaf560417c916\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cc40c4c21383f83a2e9cc9db64f1eafeafa64340e29540877c4e9442c6ae8276\"" May 14 00:44:38.917945 env[1215]: time="2025-05-14T00:44:38.917914452Z" level=info msg="StartContainer for \"cc40c4c21383f83a2e9cc9db64f1eafeafa64340e29540877c4e9442c6ae8276\"" May 14 00:44:38.934996 systemd[1]: Started cri-containerd-cc40c4c21383f83a2e9cc9db64f1eafeafa64340e29540877c4e9442c6ae8276.scope. May 14 00:44:38.975578 env[1215]: time="2025-05-14T00:44:38.975474747Z" level=info msg="StartContainer for \"cc40c4c21383f83a2e9cc9db64f1eafeafa64340e29540877c4e9442c6ae8276\" returns successfully" May 14 00:44:38.976787 systemd[1]: cri-containerd-cc40c4c21383f83a2e9cc9db64f1eafeafa64340e29540877c4e9442c6ae8276.scope: Deactivated successfully. May 14 00:44:38.978926 kubelet[1912]: I0514 00:44:38.978899 1912 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 00:44:39.003038 systemd[1]: Created slice kubepods-burstable-pode83ed600_db56_4fba_a2b3_36805d10ae54.slice. May 14 00:44:39.008112 systemd[1]: Created slice kubepods-burstable-podb8032d8a_187c_49df_9e8b_0481a8f9e1ca.slice. 
May 14 00:44:39.038654 kubelet[1912]: I0514 00:44:39.038613 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz6pp\" (UniqueName: \"kubernetes.io/projected/b8032d8a-187c-49df-9e8b-0481a8f9e1ca-kube-api-access-jz6pp\") pod \"coredns-6f6b679f8f-rhql9\" (UID: \"b8032d8a-187c-49df-9e8b-0481a8f9e1ca\") " pod="kube-system/coredns-6f6b679f8f-rhql9" May 14 00:44:39.038880 kubelet[1912]: I0514 00:44:39.038674 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e83ed600-db56-4fba-a2b3-36805d10ae54-config-volume\") pod \"coredns-6f6b679f8f-5z87l\" (UID: \"e83ed600-db56-4fba-a2b3-36805d10ae54\") " pod="kube-system/coredns-6f6b679f8f-5z87l" May 14 00:44:39.038880 kubelet[1912]: I0514 00:44:39.038774 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8032d8a-187c-49df-9e8b-0481a8f9e1ca-config-volume\") pod \"coredns-6f6b679f8f-rhql9\" (UID: \"b8032d8a-187c-49df-9e8b-0481a8f9e1ca\") " pod="kube-system/coredns-6f6b679f8f-rhql9" May 14 00:44:39.038880 kubelet[1912]: I0514 00:44:39.038807 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx8ps\" (UniqueName: \"kubernetes.io/projected/e83ed600-db56-4fba-a2b3-36805d10ae54-kube-api-access-lx8ps\") pod \"coredns-6f6b679f8f-5z87l\" (UID: \"e83ed600-db56-4fba-a2b3-36805d10ae54\") " pod="kube-system/coredns-6f6b679f8f-5z87l" May 14 00:44:39.041263 kubelet[1912]: E0514 00:44:39.040506 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:39.041263 kubelet[1912]: E0514 00:44:39.040633 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:39.119801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc40c4c21383f83a2e9cc9db64f1eafeafa64340e29540877c4e9442c6ae8276-rootfs.mount: Deactivated successfully. May 14 00:44:39.123368 env[1215]: time="2025-05-14T00:44:39.123327875Z" level=info msg="shim disconnected" id=cc40c4c21383f83a2e9cc9db64f1eafeafa64340e29540877c4e9442c6ae8276 May 14 00:44:39.123571 env[1215]: time="2025-05-14T00:44:39.123551834Z" level=warning msg="cleaning up after shim disconnected" id=cc40c4c21383f83a2e9cc9db64f1eafeafa64340e29540877c4e9442c6ae8276 namespace=k8s.io May 14 00:44:39.123640 env[1215]: time="2025-05-14T00:44:39.123626767Z" level=info msg="cleaning up dead shim" May 14 00:44:39.130337 env[1215]: time="2025-05-14T00:44:39.130302765Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:44:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2325 runtime=io.containerd.runc.v2\n" May 14 00:44:39.307357 kubelet[1912]: E0514 00:44:39.307075 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:39.307642 env[1215]: time="2025-05-14T00:44:39.307566698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5z87l,Uid:e83ed600-db56-4fba-a2b3-36805d10ae54,Namespace:kube-system,Attempt:0,}" May 14 00:44:39.311215 kubelet[1912]: E0514 00:44:39.310991 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:44:39.311581 env[1215]: time="2025-05-14T00:44:39.311551869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhql9,Uid:b8032d8a-187c-49df-9e8b-0481a8f9e1ca,Namespace:kube-system,Attempt:0,}" May 14 00:44:39.377488 
env[1215]: time="2025-05-14T00:44:39.377417249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5z87l,Uid:e83ed600-db56-4fba-a2b3-36805d10ae54,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c342f37c64de1a1c3df9364b9aa25bd125377aef064f77ba82968a9f127be76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 14 00:44:39.378092 kubelet[1912]: E0514 00:44:39.377750 1912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c342f37c64de1a1c3df9364b9aa25bd125377aef064f77ba82968a9f127be76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 14 00:44:39.378092 kubelet[1912]: E0514 00:44:39.377808 1912 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c342f37c64de1a1c3df9364b9aa25bd125377aef064f77ba82968a9f127be76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-5z87l"
May 14 00:44:39.378092 kubelet[1912]: E0514 00:44:39.377835 1912 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c342f37c64de1a1c3df9364b9aa25bd125377aef064f77ba82968a9f127be76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-5z87l"
May 14 00:44:39.378092 kubelet[1912]: E0514 00:44:39.377884 1912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5z87l_kube-system(e83ed600-db56-4fba-a2b3-36805d10ae54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5z87l_kube-system(e83ed600-db56-4fba-a2b3-36805d10ae54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c342f37c64de1a1c3df9364b9aa25bd125377aef064f77ba82968a9f127be76\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-5z87l" podUID="e83ed600-db56-4fba-a2b3-36805d10ae54"
May 14 00:44:39.378310 env[1215]: time="2025-05-14T00:44:39.378254714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhql9,Uid:b8032d8a-187c-49df-9e8b-0481a8f9e1ca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd669c51bb634ff63c6efd433d527c1a5080c4edc087915a5c5e50a6b5abdfbc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 14 00:44:39.379590 kubelet[1912]: E0514 00:44:39.379388 1912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd669c51bb634ff63c6efd433d527c1a5080c4edc087915a5c5e50a6b5abdfbc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 14 00:44:39.379590 kubelet[1912]: E0514 00:44:39.379442 1912 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd669c51bb634ff63c6efd433d527c1a5080c4edc087915a5c5e50a6b5abdfbc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-rhql9"
May 14 00:44:39.379590 kubelet[1912]: E0514 00:44:39.379469 1912 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd669c51bb634ff63c6efd433d527c1a5080c4edc087915a5c5e50a6b5abdfbc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-rhql9"
May 14 00:44:39.379590 kubelet[1912]: E0514 00:44:39.379527 1912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rhql9_kube-system(b8032d8a-187c-49df-9e8b-0481a8f9e1ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rhql9_kube-system(b8032d8a-187c-49df-9e8b-0481a8f9e1ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd669c51bb634ff63c6efd433d527c1a5080c4edc087915a5c5e50a6b5abdfbc\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-rhql9" podUID="b8032d8a-187c-49df-9e8b-0481a8f9e1ca"
May 14 00:44:40.043496 kubelet[1912]: E0514 00:44:40.043457 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:40.046025 env[1215]: time="2025-05-14T00:44:40.045974228Z" level=info msg="CreateContainer within sandbox \"e1d7c7ca06c0701bda07deb17d45a4d8c9f14b2b09d39a6a17ecaf560417c916\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
May 14 00:44:40.060212 env[1215]: time="2025-05-14T00:44:40.060145757Z" level=info msg="CreateContainer within sandbox \"e1d7c7ca06c0701bda07deb17d45a4d8c9f14b2b09d39a6a17ecaf560417c916\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"03fc3307da1d6627155fce0cd2a20506126d42fdcffa2a5a069ce583ee30c214\""
May 14 00:44:40.061597 env[1215]: time="2025-05-14T00:44:40.061479896Z" level=info msg="StartContainer for \"03fc3307da1d6627155fce0cd2a20506126d42fdcffa2a5a069ce583ee30c214\""
May 14 00:44:40.076660 systemd[1]: Started cri-containerd-03fc3307da1d6627155fce0cd2a20506126d42fdcffa2a5a069ce583ee30c214.scope.
May 14 00:44:40.112061 env[1215]: time="2025-05-14T00:44:40.112014480Z" level=info msg="StartContainer for \"03fc3307da1d6627155fce0cd2a20506126d42fdcffa2a5a069ce583ee30c214\" returns successfully"
May 14 00:44:40.120656 systemd[1]: run-netns-cni\x2de8303381\x2d46e3\x2d01c5\x2dd262\x2df6353bac86b4.mount: Deactivated successfully.
May 14 00:44:40.120763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c342f37c64de1a1c3df9364b9aa25bd125377aef064f77ba82968a9f127be76-shm.mount: Deactivated successfully.
May 14 00:44:41.047172 kubelet[1912]: E0514 00:44:41.047134 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:41.060803 kubelet[1912]: I0514 00:44:41.060721 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-2c2dd" podStartSLOduration=2.8664515699999997 podStartE2EDuration="7.060706385s" podCreationTimestamp="2025-05-14 00:44:34 +0000 UTC" firstStartedPulling="2025-05-14 00:44:34.709683798 +0000 UTC m=+5.813612392" lastFinishedPulling="2025-05-14 00:44:38.903938613 +0000 UTC m=+10.007867207" observedRunningTime="2025-05-14 00:44:41.059750676 +0000 UTC m=+12.163679271" watchObservedRunningTime="2025-05-14 00:44:41.060706385 +0000 UTC m=+12.164634979"
May 14 00:44:41.215136 systemd-networkd[1035]: flannel.1: Link UP
May 14 00:44:41.215142 systemd-networkd[1035]: flannel.1: Gained carrier
May 14 00:44:42.048541 kubelet[1912]: E0514 00:44:42.048501 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:42.401799 kubelet[1912]: E0514 00:44:42.400903 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:42.796164 systemd-networkd[1035]: flannel.1: Gained IPv6LL
May 14 00:44:43.050529 kubelet[1912]: E0514 00:44:43.049824 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:43.161613 kubelet[1912]: E0514 00:44:43.161576 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:43.992865 update_engine[1209]: I0514 00:44:43.992782 1209 update_attempter.cc:509] Updating boot flags...
May 14 00:44:51.004973 kubelet[1912]: E0514 00:44:51.004929 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:51.005640 env[1215]: time="2025-05-14T00:44:51.005596341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5z87l,Uid:e83ed600-db56-4fba-a2b3-36805d10ae54,Namespace:kube-system,Attempt:0,}"
May 14 00:44:51.023971 systemd-networkd[1035]: cni0: Link UP
May 14 00:44:51.023984 systemd-networkd[1035]: cni0: Gained carrier
May 14 00:44:51.025835 systemd-networkd[1035]: cni0: Lost carrier
May 14 00:44:51.031608 systemd-networkd[1035]: veth3a66a829: Link UP
May 14 00:44:51.034437 kernel: cni0: port 1(veth3a66a829) entered blocking state
May 14 00:44:51.034570 kernel: cni0: port 1(veth3a66a829) entered disabled state
May 14 00:44:51.035432 kernel: device veth3a66a829 entered promiscuous mode
May 14 00:44:51.037256 kernel: cni0: port 1(veth3a66a829) entered blocking state
May 14 00:44:51.037327 kernel: cni0: port 1(veth3a66a829) entered forwarding state
May 14 00:44:51.038241 kernel: cni0: port 1(veth3a66a829) entered disabled state
May 14 00:44:51.050452 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3a66a829: link becomes ready
May 14 00:44:51.050582 kernel: cni0: port 1(veth3a66a829) entered blocking state
May 14 00:44:51.050604 kernel: cni0: port 1(veth3a66a829) entered forwarding state
May 14 00:44:51.050434 systemd-networkd[1035]: veth3a66a829: Gained carrier
May 14 00:44:51.050687 systemd-networkd[1035]: cni0: Gained carrier
May 14 00:44:51.052277 env[1215]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"}
May 14 00:44:51.052277 env[1215]: delegateAdd: netconf sent to delegate plugin:
May 14 00:44:51.062200 env[1215]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T00:44:51.062132167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 00:44:51.062314 env[1215]: time="2025-05-14T00:44:51.062211815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 00:44:51.062314 env[1215]: time="2025-05-14T00:44:51.062238737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:44:51.062541 env[1215]: time="2025-05-14T00:44:51.062484681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6baad321a57d4eb3608775b79a8fe270aa8379938adaf815f36f05dd394dd0e1 pid=2599 runtime=io.containerd.runc.v2
May 14 00:44:51.077089 systemd[1]: run-containerd-runc-k8s.io-6baad321a57d4eb3608775b79a8fe270aa8379938adaf815f36f05dd394dd0e1-runc.qikBhY.mount: Deactivated successfully.
May 14 00:44:51.079630 systemd[1]: Started cri-containerd-6baad321a57d4eb3608775b79a8fe270aa8379938adaf815f36f05dd394dd0e1.scope.
May 14 00:44:51.099680 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 00:44:51.118248 env[1215]: time="2025-05-14T00:44:51.118209950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5z87l,Uid:e83ed600-db56-4fba-a2b3-36805d10ae54,Namespace:kube-system,Attempt:0,} returns sandbox id \"6baad321a57d4eb3608775b79a8fe270aa8379938adaf815f36f05dd394dd0e1\""
May 14 00:44:51.119780 kubelet[1912]: E0514 00:44:51.118926 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:51.120451 env[1215]: time="2025-05-14T00:44:51.120421040Z" level=info msg="CreateContainer within sandbox \"6baad321a57d4eb3608775b79a8fe270aa8379938adaf815f36f05dd394dd0e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 00:44:51.176695 env[1215]: time="2025-05-14T00:44:51.176633115Z" level=info msg="CreateContainer within sandbox \"6baad321a57d4eb3608775b79a8fe270aa8379938adaf815f36f05dd394dd0e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c34f50d719295f516e181276cebdabb9a40d424b4b41b6b316643aed9ae762a\""
May 14 00:44:51.177077 env[1215]: time="2025-05-14T00:44:51.177053235Z" level=info msg="StartContainer for \"6c34f50d719295f516e181276cebdabb9a40d424b4b41b6b316643aed9ae762a\""
May 14 00:44:51.191846 systemd[1]: Started cri-containerd-6c34f50d719295f516e181276cebdabb9a40d424b4b41b6b316643aed9ae762a.scope.
May 14 00:44:51.225677 env[1215]: time="2025-05-14T00:44:51.224495435Z" level=info msg="StartContainer for \"6c34f50d719295f516e181276cebdabb9a40d424b4b41b6b316643aed9ae762a\" returns successfully"
May 14 00:44:52.065946 kubelet[1912]: E0514 00:44:52.065885 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:52.076872 kubelet[1912]: I0514 00:44:52.076823 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5z87l" podStartSLOduration=18.07680276 podStartE2EDuration="18.07680276s" podCreationTimestamp="2025-05-14 00:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:44:52.07614342 +0000 UTC m=+23.180072014" watchObservedRunningTime="2025-05-14 00:44:52.07680276 +0000 UTC m=+23.180731354"
May 14 00:44:52.588294 systemd-networkd[1035]: cni0: Gained IPv6LL
May 14 00:44:52.715839 systemd-networkd[1035]: veth3a66a829: Gained IPv6LL
May 14 00:44:53.068769 kubelet[1912]: E0514 00:44:53.066567 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:53.908208 systemd[1]: Started sshd@5-10.0.0.78:22-10.0.0.1:42630.service.
May 14 00:44:53.948258 sshd[2698]: Accepted publickey for core from 10.0.0.1 port 42630 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:44:53.949894 sshd[2698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:44:53.953137 systemd-logind[1205]: New session 6 of user core.
May 14 00:44:53.954041 systemd[1]: Started session-6.scope.
May 14 00:44:54.004634 kubelet[1912]: E0514 00:44:54.004590 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:54.005145 env[1215]: time="2025-05-14T00:44:54.005020156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhql9,Uid:b8032d8a-187c-49df-9e8b-0481a8f9e1ca,Namespace:kube-system,Attempt:0,}"
May 14 00:44:54.029665 systemd-networkd[1035]: veth61fb8f73: Link UP
May 14 00:44:54.031755 kernel: cni0: port 2(veth61fb8f73) entered blocking state
May 14 00:44:54.031825 kernel: cni0: port 2(veth61fb8f73) entered disabled state
May 14 00:44:54.032854 kernel: device veth61fb8f73 entered promiscuous mode
May 14 00:44:54.038061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 14 00:44:54.038131 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth61fb8f73: link becomes ready
May 14 00:44:54.038151 kernel: cni0: port 2(veth61fb8f73) entered blocking state
May 14 00:44:54.039447 kernel: cni0: port 2(veth61fb8f73) entered forwarding state
May 14 00:44:54.040834 systemd-networkd[1035]: veth61fb8f73: Gained carrier
May 14 00:44:54.043042 env[1215]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"}
May 14 00:44:54.043042 env[1215]: delegateAdd: netconf sent to delegate plugin:
May 14 00:44:54.054769 env[1215]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T00:44:54.054582095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 00:44:54.054769 env[1215]: time="2025-05-14T00:44:54.054626339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 00:44:54.054769 env[1215]: time="2025-05-14T00:44:54.054636859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:44:54.054975 env[1215]: time="2025-05-14T00:44:54.054793673Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78fbe09d80f907db714f203a6647ab0bd43bf90c05fc4533c33009eb279c5c40 pid=2748 runtime=io.containerd.runc.v2
May 14 00:44:54.074948 kubelet[1912]: E0514 00:44:54.071385 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:54.074847 systemd[1]: run-containerd-runc-k8s.io-78fbe09d80f907db714f203a6647ab0bd43bf90c05fc4533c33009eb279c5c40-runc.28ilbI.mount: Deactivated successfully.
May 14 00:44:54.076471 systemd[1]: Started cri-containerd-78fbe09d80f907db714f203a6647ab0bd43bf90c05fc4533c33009eb279c5c40.scope.
May 14 00:44:54.106471 sshd[2698]: pam_unix(sshd:session): session closed for user core
May 14 00:44:54.106886 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 00:44:54.110573 systemd[1]: sshd@5-10.0.0.78:22-10.0.0.1:42630.service: Deactivated successfully.
May 14 00:44:54.111323 systemd[1]: session-6.scope: Deactivated successfully.
May 14 00:44:54.111952 systemd-logind[1205]: Session 6 logged out. Waiting for processes to exit.
May 14 00:44:54.112834 systemd-logind[1205]: Removed session 6.
May 14 00:44:54.124666 env[1215]: time="2025-05-14T00:44:54.124601022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhql9,Uid:b8032d8a-187c-49df-9e8b-0481a8f9e1ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"78fbe09d80f907db714f203a6647ab0bd43bf90c05fc4533c33009eb279c5c40\""
May 14 00:44:54.125470 kubelet[1912]: E0514 00:44:54.125442 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:54.127691 env[1215]: time="2025-05-14T00:44:54.127645916Z" level=info msg="CreateContainer within sandbox \"78fbe09d80f907db714f203a6647ab0bd43bf90c05fc4533c33009eb279c5c40\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 00:44:54.140707 env[1215]: time="2025-05-14T00:44:54.140612599Z" level=info msg="CreateContainer within sandbox \"78fbe09d80f907db714f203a6647ab0bd43bf90c05fc4533c33009eb279c5c40\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"003d09b6a88962e26c3bcb6c47baa464e8c1bc0c9b6399ed91162a736147262a\""
May 14 00:44:54.142325 env[1215]: time="2025-05-14T00:44:54.141356581Z" level=info msg="StartContainer for \"003d09b6a88962e26c3bcb6c47baa464e8c1bc0c9b6399ed91162a736147262a\""
May 14 00:44:54.156122 systemd[1]: Started cri-containerd-003d09b6a88962e26c3bcb6c47baa464e8c1bc0c9b6399ed91162a736147262a.scope.
May 14 00:44:54.222169 env[1215]: time="2025-05-14T00:44:54.222118046Z" level=info msg="StartContainer for \"003d09b6a88962e26c3bcb6c47baa464e8c1bc0c9b6399ed91162a736147262a\" returns successfully"
May 14 00:44:55.074158 kubelet[1912]: E0514 00:44:55.073833 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:55.083206 kubelet[1912]: I0514 00:44:55.083148 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rhql9" podStartSLOduration=21.083134713 podStartE2EDuration="21.083134713s" podCreationTimestamp="2025-05-14 00:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:44:55.082801526 +0000 UTC m=+26.186730120" watchObservedRunningTime="2025-05-14 00:44:55.083134713 +0000 UTC m=+26.187063347"
May 14 00:44:55.211898 systemd-networkd[1035]: veth61fb8f73: Gained IPv6LL
May 14 00:44:56.075682 kubelet[1912]: E0514 00:44:56.075650 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:57.076776 kubelet[1912]: E0514 00:44:57.076651 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:44:59.111315 systemd[1]: Started sshd@6-10.0.0.78:22-10.0.0.1:42646.service.
May 14 00:44:59.150843 sshd[2850]: Accepted publickey for core from 10.0.0.1 port 42646 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:44:59.152801 sshd[2850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:44:59.156664 systemd-logind[1205]: New session 7 of user core.
May 14 00:44:59.157030 systemd[1]: Started session-7.scope.
May 14 00:44:59.262898 sshd[2850]: pam_unix(sshd:session): session closed for user core
May 14 00:44:59.265463 systemd[1]: sshd@6-10.0.0.78:22-10.0.0.1:42646.service: Deactivated successfully.
May 14 00:44:59.266196 systemd[1]: session-7.scope: Deactivated successfully.
May 14 00:44:59.266830 systemd-logind[1205]: Session 7 logged out. Waiting for processes to exit.
May 14 00:44:59.267575 systemd-logind[1205]: Removed session 7.
May 14 00:45:04.267518 systemd[1]: Started sshd@7-10.0.0.78:22-10.0.0.1:37792.service.
May 14 00:45:04.307245 sshd[2885]: Accepted publickey for core from 10.0.0.1 port 37792 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:04.308755 sshd[2885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:04.311924 systemd-logind[1205]: New session 8 of user core.
May 14 00:45:04.312701 systemd[1]: Started session-8.scope.
May 14 00:45:04.420013 sshd[2885]: pam_unix(sshd:session): session closed for user core
May 14 00:45:04.423064 systemd[1]: sshd@7-10.0.0.78:22-10.0.0.1:37792.service: Deactivated successfully.
May 14 00:45:04.423631 systemd[1]: session-8.scope: Deactivated successfully.
May 14 00:45:04.424169 systemd-logind[1205]: Session 8 logged out. Waiting for processes to exit.
May 14 00:45:04.425223 systemd[1]: Started sshd@8-10.0.0.78:22-10.0.0.1:37806.service.
May 14 00:45:04.425875 systemd-logind[1205]: Removed session 8.
May 14 00:45:04.465595 sshd[2900]: Accepted publickey for core from 10.0.0.1 port 37806 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:04.466810 sshd[2900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:04.470658 systemd-logind[1205]: New session 9 of user core.
May 14 00:45:04.471057 systemd[1]: Started session-9.scope.
May 14 00:45:04.612772 sshd[2900]: pam_unix(sshd:session): session closed for user core
May 14 00:45:04.617687 systemd[1]: Started sshd@9-10.0.0.78:22-10.0.0.1:37810.service.
May 14 00:45:04.619375 systemd[1]: session-9.scope: Deactivated successfully.
May 14 00:45:04.619495 systemd-logind[1205]: Session 9 logged out. Waiting for processes to exit.
May 14 00:45:04.620797 systemd[1]: sshd@8-10.0.0.78:22-10.0.0.1:37806.service: Deactivated successfully.
May 14 00:45:04.621206 systemd-logind[1205]: Removed session 9.
May 14 00:45:04.664634 sshd[2910]: Accepted publickey for core from 10.0.0.1 port 37810 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:04.666318 sshd[2910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:04.670382 systemd-logind[1205]: New session 10 of user core.
May 14 00:45:04.670837 systemd[1]: Started session-10.scope.
May 14 00:45:04.777947 sshd[2910]: pam_unix(sshd:session): session closed for user core
May 14 00:45:04.780865 systemd[1]: session-10.scope: Deactivated successfully.
May 14 00:45:04.781531 systemd[1]: sshd@9-10.0.0.78:22-10.0.0.1:37810.service: Deactivated successfully.
May 14 00:45:04.782256 systemd-logind[1205]: Session 10 logged out. Waiting for processes to exit.
May 14 00:45:04.782859 systemd-logind[1205]: Removed session 10.
May 14 00:45:09.782632 systemd[1]: Started sshd@10-10.0.0.78:22-10.0.0.1:37814.service.
May 14 00:45:09.822674 sshd[2948]: Accepted publickey for core from 10.0.0.1 port 37814 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:09.823975 sshd[2948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:09.827610 systemd-logind[1205]: New session 11 of user core.
May 14 00:45:09.828486 systemd[1]: Started session-11.scope.
May 14 00:45:09.935510 sshd[2948]: pam_unix(sshd:session): session closed for user core
May 14 00:45:09.939297 systemd[1]: Started sshd@11-10.0.0.78:22-10.0.0.1:37816.service.
May 14 00:45:09.939836 systemd[1]: sshd@10-10.0.0.78:22-10.0.0.1:37814.service: Deactivated successfully.
May 14 00:45:09.940567 systemd[1]: session-11.scope: Deactivated successfully.
May 14 00:45:09.941074 systemd-logind[1205]: Session 11 logged out. Waiting for processes to exit.
May 14 00:45:09.941666 systemd-logind[1205]: Removed session 11.
May 14 00:45:09.980641 sshd[2960]: Accepted publickey for core from 10.0.0.1 port 37816 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:09.982111 sshd[2960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:09.985786 systemd-logind[1205]: New session 12 of user core.
May 14 00:45:09.986564 systemd[1]: Started session-12.scope.
May 14 00:45:10.142775 sshd[2960]: pam_unix(sshd:session): session closed for user core
May 14 00:45:10.146369 systemd[1]: Started sshd@12-10.0.0.78:22-10.0.0.1:37824.service.
May 14 00:45:10.146961 systemd[1]: sshd@11-10.0.0.78:22-10.0.0.1:37816.service: Deactivated successfully.
May 14 00:45:10.147630 systemd[1]: session-12.scope: Deactivated successfully.
May 14 00:45:10.148235 systemd-logind[1205]: Session 12 logged out. Waiting for processes to exit.
May 14 00:45:10.149128 systemd-logind[1205]: Removed session 12.
May 14 00:45:10.186269 sshd[2972]: Accepted publickey for core from 10.0.0.1 port 37824 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:10.187749 sshd[2972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:10.190901 systemd-logind[1205]: New session 13 of user core.
May 14 00:45:10.191691 systemd[1]: Started session-13.scope.
May 14 00:45:11.277622 sshd[2972]: pam_unix(sshd:session): session closed for user core
May 14 00:45:11.281145 systemd[1]: Started sshd@13-10.0.0.78:22-10.0.0.1:37838.service.
May 14 00:45:11.283916 systemd[1]: sshd@12-10.0.0.78:22-10.0.0.1:37824.service: Deactivated successfully.
May 14 00:45:11.284543 systemd[1]: session-13.scope: Deactivated successfully.
May 14 00:45:11.285166 systemd-logind[1205]: Session 13 logged out. Waiting for processes to exit.
May 14 00:45:11.285993 systemd-logind[1205]: Removed session 13.
May 14 00:45:11.326490 sshd[2989]: Accepted publickey for core from 10.0.0.1 port 37838 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:11.328057 sshd[2989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:11.333165 systemd-logind[1205]: New session 14 of user core.
May 14 00:45:11.334012 systemd[1]: Started session-14.scope.
May 14 00:45:11.548599 sshd[2989]: pam_unix(sshd:session): session closed for user core
May 14 00:45:11.551586 systemd[1]: Started sshd@14-10.0.0.78:22-10.0.0.1:37846.service.
May 14 00:45:11.555068 systemd[1]: session-14.scope: Deactivated successfully.
May 14 00:45:11.556278 systemd-logind[1205]: Session 14 logged out. Waiting for processes to exit.
May 14 00:45:11.556419 systemd[1]: sshd@13-10.0.0.78:22-10.0.0.1:37838.service: Deactivated successfully.
May 14 00:45:11.558160 systemd-logind[1205]: Removed session 14.
May 14 00:45:11.592628 sshd[3024]: Accepted publickey for core from 10.0.0.1 port 37846 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:11.593858 sshd[3024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:11.597172 systemd-logind[1205]: New session 15 of user core.
May 14 00:45:11.598038 systemd[1]: Started session-15.scope.
May 14 00:45:11.709416 sshd[3024]: pam_unix(sshd:session): session closed for user core
May 14 00:45:11.711923 systemd[1]: sshd@14-10.0.0.78:22-10.0.0.1:37846.service: Deactivated successfully.
May 14 00:45:11.712618 systemd[1]: session-15.scope: Deactivated successfully.
May 14 00:45:11.714266 systemd-logind[1205]: Session 15 logged out. Waiting for processes to exit.
May 14 00:45:11.715260 systemd-logind[1205]: Removed session 15.
May 14 00:45:16.714062 systemd[1]: Started sshd@15-10.0.0.78:22-10.0.0.1:35456.service.
May 14 00:45:16.753996 sshd[3063]: Accepted publickey for core from 10.0.0.1 port 35456 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:16.755660 sshd[3063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:16.760595 systemd-logind[1205]: New session 16 of user core.
May 14 00:45:16.760790 systemd[1]: Started session-16.scope.
May 14 00:45:16.868281 sshd[3063]: pam_unix(sshd:session): session closed for user core
May 14 00:45:16.870749 systemd[1]: sshd@15-10.0.0.78:22-10.0.0.1:35456.service: Deactivated successfully.
May 14 00:45:16.871441 systemd[1]: session-16.scope: Deactivated successfully.
May 14 00:45:16.872027 systemd-logind[1205]: Session 16 logged out. Waiting for processes to exit.
May 14 00:45:16.872800 systemd-logind[1205]: Removed session 16.
May 14 00:45:21.873273 systemd[1]: Started sshd@16-10.0.0.78:22-10.0.0.1:35472.service.
May 14 00:45:21.913557 sshd[3098]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:21.914742 sshd[3098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:21.917890 systemd-logind[1205]: New session 17 of user core.
May 14 00:45:21.918683 systemd[1]: Started session-17.scope.
May 14 00:45:22.020535 sshd[3098]: pam_unix(sshd:session): session closed for user core
May 14 00:45:22.022955 systemd[1]: sshd@16-10.0.0.78:22-10.0.0.1:35472.service: Deactivated successfully.
May 14 00:45:22.023626 systemd[1]: session-17.scope: Deactivated successfully.
May 14 00:45:22.024122 systemd-logind[1205]: Session 17 logged out. Waiting for processes to exit.
May 14 00:45:22.024800 systemd-logind[1205]: Removed session 17.
May 14 00:45:27.025466 systemd[1]: Started sshd@17-10.0.0.78:22-10.0.0.1:54696.service.
May 14 00:45:27.066137 sshd[3133]: Accepted publickey for core from 10.0.0.1 port 54696 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:27.067341 sshd[3133]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:27.070652 systemd-logind[1205]: New session 18 of user core.
May 14 00:45:27.071526 systemd[1]: Started session-18.scope.
May 14 00:45:27.178355 sshd[3133]: pam_unix(sshd:session): session closed for user core
May 14 00:45:27.181052 systemd[1]: sshd@17-10.0.0.78:22-10.0.0.1:54696.service: Deactivated successfully.
May 14 00:45:27.181751 systemd[1]: session-18.scope: Deactivated successfully.
May 14 00:45:27.182262 systemd-logind[1205]: Session 18 logged out. Waiting for processes to exit.
May 14 00:45:27.183038 systemd-logind[1205]: Removed session 18.
May 14 00:45:32.182972 systemd[1]: Started sshd@18-10.0.0.78:22-10.0.0.1:54706.service.
May 14 00:45:32.222715 sshd[3170]: Accepted publickey for core from 10.0.0.1 port 54706 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:45:32.224071 sshd[3170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:45:32.227363 systemd-logind[1205]: New session 19 of user core.
May 14 00:45:32.228252 systemd[1]: Started session-19.scope.
May 14 00:45:32.362420 sshd[3170]: pam_unix(sshd:session): session closed for user core
May 14 00:45:32.364921 systemd[1]: sshd@18-10.0.0.78:22-10.0.0.1:54706.service: Deactivated successfully.
May 14 00:45:32.365624 systemd[1]: session-19.scope: Deactivated successfully.
May 14 00:45:32.366100 systemd-logind[1205]: Session 19 logged out. Waiting for processes to exit.
May 14 00:45:32.366854 systemd-logind[1205]: Removed session 19.