May 13 00:41:02.869409 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025 May 13 00:41:02.869427 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:41:02.869435 kernel: BIOS-provided physical RAM map: May 13 00:41:02.869441 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 00:41:02.869446 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 00:41:02.869451 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 00:41:02.869458 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 13 00:41:02.869463 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 13 00:41:02.869470 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 00:41:02.869476 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 13 00:41:02.869481 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 00:41:02.869487 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 00:41:02.869492 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 13 00:41:02.869497 kernel: NX (Execute Disable) protection: active May 13 00:41:02.869506 kernel: SMBIOS 2.8 present. May 13 00:41:02.869512 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 13 00:41:02.869517 kernel: Hypervisor detected: KVM May 13 00:41:02.869523 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 00:41:02.869529 kernel: kvm-clock: cpu 0, msr 55196001, primary cpu clock May 13 00:41:02.869535 kernel: kvm-clock: using sched offset of 2423264610 cycles May 13 00:41:02.869541 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 00:41:02.869547 kernel: tsc: Detected 2794.746 MHz processor May 13 00:41:02.869553 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 00:41:02.869560 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 00:41:02.869566 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 13 00:41:02.869572 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 00:41:02.869578 kernel: Using GB pages for direct mapping May 13 00:41:02.869584 kernel: ACPI: Early table checksum verification disabled May 13 00:41:02.869590 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 13 00:41:02.869596 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:02.869611 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:02.869629 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:02.869640 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 13 00:41:02.869649 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:02.869655 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:02.869661 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:02.869667 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:02.869673 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 13 00:41:02.869689 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 13 00:41:02.869695 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 13 00:41:02.869705 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 13 00:41:02.869711 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 13 00:41:02.869730 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 13 00:41:02.869736 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 13 00:41:02.869742 kernel: No NUMA configuration found May 13 00:41:02.869749 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 13 00:41:02.869756 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 13 00:41:02.869763 kernel: Zone ranges: May 13 00:41:02.869769 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 00:41:02.869776 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 13 00:41:02.869782 kernel: Normal empty May 13 00:41:02.869788 kernel: Movable zone start for each node May 13 00:41:02.869794 kernel: Early memory node ranges May 13 00:41:02.869801 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 00:41:02.869807 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 13 00:41:02.869815 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 13 00:41:02.869821 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:41:02.869827 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 00:41:02.869834 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 13 00:41:02.869840 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 00:41:02.869846 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 00:41:02.869853 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 00:41:02.869859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 00:41:02.869865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 00:41:02.869872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 00:41:02.869879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 00:41:02.869886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 00:41:02.869892 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 00:41:02.869903 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 13 00:41:02.869909 kernel: TSC deadline timer available May 13 00:41:02.869915 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 13 00:41:02.869921 kernel: kvm-guest: KVM setup pv remote TLB flush May 13 00:41:02.869928 kernel: kvm-guest: setup PV sched yield May 13 00:41:02.869934 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 13 00:41:02.869942 kernel: Booting paravirtualized kernel on KVM May 13 00:41:02.869948 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 00:41:02.869955 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 13 00:41:02.869961 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 May 13 00:41:02.869967 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 13 00:41:02.869973 kernel: pcpu-alloc: [0] 0 1 2 3 May 13 00:41:02.869980 kernel: kvm-guest: setup async PF for cpu 0 May 13 00:41:02.869986 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 May 13 00:41:02.869992 kernel: kvm-guest: PV spinlocks enabled May 13 00:41:02.870000 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 13 00:41:02.870006 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 May 13 00:41:02.870012 kernel: Policy zone: DMA32 May 13 00:41:02.870028 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:41:02.870035 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:41:02.870042 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 00:41:02.870048 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:41:02.870055 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:41:02.870063 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 134796K reserved, 0K cma-reserved) May 13 00:41:02.870070 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 00:41:02.870076 kernel: ftrace: allocating 34584 entries in 136 pages May 13 00:41:02.870082 kernel: ftrace: allocated 136 pages with 2 groups May 13 00:41:02.870088 kernel: rcu: Hierarchical RCU implementation. May 13 00:41:02.870095 kernel: rcu: RCU event tracing is enabled. May 13 00:41:02.870102 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 00:41:02.870109 kernel: Rude variant of Tasks RCU enabled. May 13 00:41:02.870115 kernel: Tracing variant of Tasks RCU enabled. May 13 00:41:02.870123 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 00:41:02.870129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 00:41:02.870136 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 13 00:41:02.870142 kernel: random: crng init done May 13 00:41:02.870148 kernel: Console: colour VGA+ 80x25 May 13 00:41:02.870154 kernel: printk: console [ttyS0] enabled May 13 00:41:02.870161 kernel: ACPI: Core revision 20210730 May 13 00:41:02.870167 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 13 00:41:02.870173 kernel: APIC: Switch to symmetric I/O mode setup May 13 00:41:02.870181 kernel: x2apic enabled May 13 00:41:02.870187 kernel: Switched APIC routing to physical x2apic. May 13 00:41:02.870194 kernel: kvm-guest: setup PV IPIs May 13 00:41:02.870200 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 00:41:02.870206 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 00:41:02.870213 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) May 13 00:41:02.870219 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 13 00:41:02.870226 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 13 00:41:02.870232 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 13 00:41:02.870245 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 00:41:02.870252 kernel: Spectre V2 : Mitigation: Retpolines May 13 00:41:02.870259 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 00:41:02.870266 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 13 00:41:02.870273 kernel: RETBleed: Mitigation: untrained return thunk May 13 00:41:02.870280 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 00:41:02.870287 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 13 00:41:02.870293 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 00:41:02.870300 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 00:41:02.870308 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 00:41:02.870315 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 00:41:02.870322 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 13 00:41:02.870328 kernel: Freeing SMP alternatives memory: 32K May 13 00:41:02.870335 kernel: pid_max: default: 32768 minimum: 301 May 13 00:41:02.870342 kernel: LSM: Security Framework initializing May 13 00:41:02.870348 kernel: SELinux: Initializing. May 13 00:41:02.870355 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:41:02.870363 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:41:02.870370 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 13 00:41:02.870377 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 13 00:41:02.870384 kernel: ... version: 0 May 13 00:41:02.870390 kernel: ... bit width: 48 May 13 00:41:02.870397 kernel: ... generic registers: 6 May 13 00:41:02.870403 kernel: ... value mask: 0000ffffffffffff May 13 00:41:02.870410 kernel: ... max period: 00007fffffffffff May 13 00:41:02.870417 kernel: ... fixed-purpose events: 0 May 13 00:41:02.870425 kernel: ... event mask: 000000000000003f May 13 00:41:02.870432 kernel: signal: max sigframe size: 1776 May 13 00:41:02.870438 kernel: rcu: Hierarchical SRCU implementation. May 13 00:41:02.870445 kernel: smp: Bringing up secondary CPUs ... May 13 00:41:02.870452 kernel: x86: Booting SMP configuration: May 13 00:41:02.870458 kernel: .... 
node #0, CPUs: #1 May 13 00:41:02.870465 kernel: kvm-clock: cpu 1, msr 55196041, secondary cpu clock May 13 00:41:02.870471 kernel: kvm-guest: setup async PF for cpu 1 May 13 00:41:02.870478 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 May 13 00:41:02.870486 kernel: #2 May 13 00:41:02.870493 kernel: kvm-clock: cpu 2, msr 55196081, secondary cpu clock May 13 00:41:02.870499 kernel: kvm-guest: setup async PF for cpu 2 May 13 00:41:02.870506 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 May 13 00:41:02.870512 kernel: #3 May 13 00:41:02.870519 kernel: kvm-clock: cpu 3, msr 551960c1, secondary cpu clock May 13 00:41:02.870526 kernel: kvm-guest: setup async PF for cpu 3 May 13 00:41:02.870532 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 May 13 00:41:02.870539 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:41:02.870547 kernel: smpboot: Max logical packages: 1 May 13 00:41:02.870554 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) May 13 00:41:02.870560 kernel: devtmpfs: initialized May 13 00:41:02.870567 kernel: x86/mm: Memory block size: 128MB May 13 00:41:02.870574 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:41:02.870581 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:41:02.870587 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:41:02.870594 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:41:02.870600 kernel: audit: initializing netlink subsys (disabled) May 13 00:41:02.870609 kernel: audit: type=2000 audit(1747096863.133:1): state=initialized audit_enabled=0 res=1 May 13 00:41:02.870616 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:41:02.870624 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:41:02.870631 kernel: cpuidle: using governor menu May 13 00:41:02.870639 kernel: ACPI: bus type PCI registered May 13 00:41:02.870647 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:41:02.870654 kernel: dca service started, version 1.12.1 May 13 00:41:02.870661 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 13 00:41:02.870668 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 13 00:41:02.870675 kernel: PCI: Using configuration type 1 for base access May 13 00:41:02.870692 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 00:41:02.870699 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:41:02.870706 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:41:02.870712 kernel: ACPI: Added _OSI(Module Device) May 13 00:41:02.870719 kernel: ACPI: Added _OSI(Processor Device) May 13 00:41:02.870725 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:41:02.870732 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:41:02.870739 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 00:41:02.870745 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 00:41:02.870754 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 00:41:02.870761 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:41:02.870767 kernel: ACPI: Interpreter enabled May 13 00:41:02.870774 kernel: ACPI: PM: (supports S0 S3 S5) May 13 00:41:02.870780 kernel: ACPI: Using IOAPIC for interrupt routing May 13 00:41:02.870787 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 00:41:02.870794 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 13 00:41:02.870801 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 00:41:02.870914 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:41:02.870989 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 13 00:41:02.871065 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 13 00:41:02.871075 kernel: PCI host bridge to bus 0000:00 May 13 00:41:02.871149 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 00:41:02.871212 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 00:41:02.871274 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 00:41:02.871337 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 13 00:41:02.871401 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 00:41:02.871463 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 13 00:41:02.871525 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 00:41:02.871608 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 13 00:41:02.871737 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 13 00:41:02.871821 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 13 00:41:02.871889 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 13 00:41:02.871957 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 13 00:41:02.872033 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 00:41:02.872110 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 13 00:41:02.872181 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 13 00:41:02.872254 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 13 00:41:02.872325 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 13 00:41:02.872400 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 13 00:41:02.872469 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 13 00:41:02.872540 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 13 00:41:02.872606 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 13 00:41:02.872701 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 13 00:41:02.872792 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 13 00:41:02.872865 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 13 00:41:02.872931 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 13 00:41:02.873089 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 13 00:41:02.873196 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 13 00:41:02.873289 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 13 00:41:02.873386 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 13 00:41:02.873475 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 13 00:41:02.873569 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 13 00:41:02.873669 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 13 00:41:02.873822 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 13 00:41:02.873836 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 00:41:02.873845 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 00:41:02.873854 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 00:41:02.873863 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 00:41:02.873875 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 13 00:41:02.873884 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 13 00:41:02.873892 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 13 00:41:02.873901 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 13 00:41:02.873910 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 13 00:41:02.873919 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 13 00:41:02.873927 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 13 00:41:02.873936 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 13 00:41:02.873945 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 13 00:41:02.873955 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 13 00:41:02.873963 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 13 00:41:02.873972 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 13 00:41:02.873980 kernel: iommu: Default domain type: Translated May 13 00:41:02.873989 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:41:02.874099 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 13 00:41:02.874196 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:41:02.874292 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 13 00:41:02.874305 kernel: vgaarb: loaded May 13 00:41:02.874317 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 00:41:02.874326 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 00:41:02.874334 kernel: PTP clock support registered May 13 00:41:02.874343 kernel: PCI: Using ACPI for IRQ routing May 13 00:41:02.874352 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:41:02.874361 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 00:41:02.874369 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 13 00:41:02.874378 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 13 00:41:02.874386 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 13 00:41:02.874397 kernel: clocksource: Switched to clocksource kvm-clock May 13 00:41:02.874405 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:41:02.874414 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:41:02.874423 kernel: pnp: PnP ACPI init May 13 00:41:02.874530 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 13 00:41:02.874544 kernel: pnp: PnP ACPI: found 6 devices May 13 00:41:02.874553 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 00:41:02.874561 kernel: NET: Registered PF_INET protocol family May 13 00:41:02.874573 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 00:41:02.874582 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 00:41:02.874591 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:41:02.874600 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:41:02.874609 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 13 00:41:02.874618 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 00:41:02.874627 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:41:02.874638 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:41:02.874649 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:41:02.874660 kernel: NET: Registered PF_XDP protocol family May 13 00:41:02.874769 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 00:41:02.874857 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 00:41:02.874941 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 00:41:02.875036 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 13 00:41:02.875160 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 13 00:41:02.875234 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 13 00:41:02.875244 kernel: PCI: CLS 0 bytes, default 64 May 13 00:41:02.875255 kernel: Initialise system trusted keyrings May 13 00:41:02.875262 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 00:41:02.875269 kernel: Key type asymmetric registered May 13 00:41:02.875276 kernel: Asymmetric key parser 'x509' registered May 13 00:41:02.875283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 00:41:02.875290 kernel: io scheduler mq-deadline registered May 13 00:41:02.875296 kernel: io scheduler kyber registered May 13 00:41:02.875303 kernel: io scheduler bfq registered May 13 00:41:02.875310 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 00:41:02.875318 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 13 00:41:02.875325 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 13 
00:41:02.875332 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 13 00:41:02.875339 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:41:02.875346 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:41:02.875353 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 00:41:02.875359 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:41:02.875366 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:41:02.875373 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:41:02.875451 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 00:41:02.875519 kernel: rtc_cmos 00:04: registered as rtc0 May 13 00:41:02.875585 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:41:02 UTC (1747096862) May 13 00:41:02.875651 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 13 00:41:02.875660 kernel: NET: Registered PF_INET6 protocol family May 13 00:41:02.875667 kernel: Segment Routing with IPv6 May 13 00:41:02.875674 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:41:02.875694 kernel: NET: Registered PF_PACKET protocol family May 13 00:41:02.875703 kernel: Key type dns_resolver registered May 13 00:41:02.875710 kernel: IPI shorthand broadcast: enabled May 13 00:41:02.875717 kernel: sched_clock: Marking stable (436339502, 103415617)->(557281078, -17525959) May 13 00:41:02.875724 kernel: registered taskstats version 1 May 13 00:41:02.875731 kernel: Loading compiled-in X.509 certificates May 13 00:41:02.875738 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095' May 13 00:41:02.875745 kernel: Key type .fscrypt registered May 13 00:41:02.875751 kernel: Key type fscrypt-provisioning registered May 13 00:41:02.875758 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 00:41:02.875766 kernel: ima: Allocated hash algorithm: sha1 May 13 00:41:02.875773 kernel: ima: No architecture policies found May 13 00:41:02.875780 kernel: clk: Disabling unused clocks May 13 00:41:02.875786 kernel: Freeing unused kernel image (initmem) memory: 47456K May 13 00:41:02.875793 kernel: Write protecting the kernel read-only data: 28672k May 13 00:41:02.875800 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 13 00:41:02.875807 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 13 00:41:02.875814 kernel: Run /init as init process May 13 00:41:02.875821 kernel: with arguments: May 13 00:41:02.875829 kernel: /init May 13 00:41:02.875836 kernel: with environment: May 13 00:41:02.875842 kernel: HOME=/ May 13 00:41:02.875849 kernel: TERM=linux May 13 00:41:02.875855 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:41:02.875865 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:41:02.875874 systemd[1]: Detected virtualization kvm. May 13 00:41:02.875882 systemd[1]: Detected architecture x86-64. May 13 00:41:02.875890 systemd[1]: Running in initrd. May 13 00:41:02.875897 systemd[1]: No hostname configured, using default hostname. May 13 00:41:02.875904 systemd[1]: Hostname set to <localhost>. 
May 13 00:41:02.875912 systemd[1]: Initializing machine ID from VM UUID. May 13 00:41:02.875919 systemd[1]: Queued start job for default target initrd.target. May 13 00:41:02.875926 systemd[1]: Started systemd-ask-password-console.path. May 13 00:41:02.875933 systemd[1]: Reached target cryptsetup.target. May 13 00:41:02.875940 systemd[1]: Reached target paths.target. May 13 00:41:02.875949 systemd[1]: Reached target slices.target. May 13 00:41:02.875962 systemd[1]: Reached target swap.target. May 13 00:41:02.875971 systemd[1]: Reached target timers.target. May 13 00:41:02.875979 systemd[1]: Listening on iscsid.socket. May 13 00:41:02.875987 systemd[1]: Listening on iscsiuio.socket. May 13 00:41:02.875995 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:41:02.876003 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:41:02.876011 systemd[1]: Listening on systemd-journald.socket. May 13 00:41:02.876018 systemd[1]: Listening on systemd-networkd.socket. May 13 00:41:02.876036 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:41:02.876043 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:41:02.876051 systemd[1]: Reached target sockets.target. May 13 00:41:02.876058 systemd[1]: Starting kmod-static-nodes.service... May 13 00:41:02.876066 systemd[1]: Finished network-cleanup.service. May 13 00:41:02.876075 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:41:02.876083 systemd[1]: Starting systemd-journald.service... May 13 00:41:02.876091 systemd[1]: Starting systemd-modules-load.service... May 13 00:41:02.876098 systemd[1]: Starting systemd-resolved.service... May 13 00:41:02.876105 systemd[1]: Starting systemd-vconsole-setup.service... May 13 00:41:02.876113 systemd[1]: Finished kmod-static-nodes.service. May 13 00:41:02.876120 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:41:02.876127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:41:02.876138 systemd-journald[199]: Journal started May 13 00:41:02.876176 systemd-journald[199]: Runtime Journal (/run/log/journal/8e034cd90178427a81cb69b7177b4aeb) is 6.0M, max 48.5M, 42.5M free. May 13 00:41:02.867154 systemd-modules-load[200]: Inserted module 'overlay' May 13 00:41:02.888191 systemd-resolved[201]: Positive Trust Anchors: May 13 00:41:02.888213 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:41:02.903210 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:41:02.888240 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:41:02.909437 systemd[1]: Started systemd-journald.service. May 13 00:41:02.890590 systemd-resolved[201]: Defaulting to hostname 'linux'. May 13 00:41:02.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.911204 systemd[1]: Started systemd-resolved.service. 
May 13 00:41:02.916209 kernel: audit: type=1130 audit(1747096862.910:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.916230 kernel: Bridge firewalling registered May 13 00:41:02.916242 kernel: audit: type=1130 audit(1747096862.915:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.914612 systemd-modules-load[200]: Inserted module 'br_netfilter' May 13 00:41:02.916388 systemd[1]: Finished systemd-vconsole-setup.service. May 13 00:41:02.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.922128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:41:02.927403 kernel: audit: type=1130 audit(1747096862.921:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.927581 systemd[1]: Reached target nss-lookup.target. May 13 00:41:02.932648 kernel: audit: type=1130 audit(1747096862.926:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.931630 systemd[1]: Starting dracut-cmdline-ask.service... May 13 00:41:02.935702 kernel: SCSI subsystem initialized May 13 00:41:02.946293 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:41:02.946316 kernel: device-mapper: uevent: version 1.0.3 May 13 00:41:02.946455 systemd[1]: Finished dracut-cmdline-ask.service. May 13 00:41:02.949404 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 00:41:02.949421 kernel: audit: type=1130 audit(1747096862.948:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.950118 systemd[1]: Starting dracut-cmdline.service... May 13 00:41:02.953966 systemd-modules-load[200]: Inserted module 'dm_multipath' May 13 00:41:02.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.954572 systemd[1]: Finished systemd-modules-load.service. 
May 13 00:41:02.961720 kernel: audit: type=1130 audit(1747096862.955:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.961739 dracut-cmdline[217]: dracut-dracut-053 May 13 00:41:02.957134 systemd[1]: Starting systemd-sysctl.service... May 13 00:41:02.963788 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:41:02.966419 systemd[1]: Finished systemd-sysctl.service. May 13 00:41:02.973847 kernel: audit: type=1130 audit(1747096862.968:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:02.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:03.031715 kernel: Loading iSCSI transport class v2.0-870. May 13 00:41:03.049720 kernel: iscsi: registered transport (tcp) May 13 00:41:03.071719 kernel: iscsi: registered transport (qla4xxx) May 13 00:41:03.071769 kernel: QLogic iSCSI HBA Driver May 13 00:41:03.095524 systemd[1]: Finished dracut-cmdline.service. May 13 00:41:03.100278 kernel: audit: type=1130 audit(1747096863.095:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:03.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:03.100292 systemd[1]: Starting dracut-pre-udev.service... May 13 00:41:03.146715 kernel: raid6: avx2x4 gen() 28378 MB/s May 13 00:41:03.163702 kernel: raid6: avx2x4 xor() 7330 MB/s May 13 00:41:03.180703 kernel: raid6: avx2x2 gen() 31236 MB/s May 13 00:41:03.197700 kernel: raid6: avx2x2 xor() 18737 MB/s May 13 00:41:03.214701 kernel: raid6: avx2x1 gen() 25867 MB/s May 13 00:41:03.231700 kernel: raid6: avx2x1 xor() 14933 MB/s May 13 00:41:03.248702 kernel: raid6: sse2x4 gen() 14399 MB/s May 13 00:41:03.265702 kernel: raid6: sse2x4 xor() 7478 MB/s May 13 00:41:03.282708 kernel: raid6: sse2x2 gen() 15941 MB/s May 13 00:41:03.299705 kernel: raid6: sse2x2 xor() 9646 MB/s May 13 00:41:03.316712 kernel: raid6: sse2x1 gen() 10787 MB/s May 13 00:41:03.334153 kernel: raid6: sse2x1 xor() 7447 MB/s May 13 00:41:03.334173 kernel: raid6: using algorithm avx2x2 gen() 31236 MB/s May 13 00:41:03.334182 kernel: raid6: .... xor() 18737 MB/s, rmw enabled May 13 00:41:03.334918 kernel: raid6: using avx2x2 recovery algorithm May 13 00:41:03.347711 kernel: xor: automatically using best checksumming function avx May 13 00:41:03.437732 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 13 00:41:03.446019 systemd[1]: Finished dracut-pre-udev.service. 
May 13 00:41:03.450610 kernel: audit: type=1130 audit(1747096863.445:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:03.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:03.450000 audit: BPF prog-id=7 op=LOAD May 13 00:41:03.450000 audit: BPF prog-id=8 op=LOAD May 13 00:41:03.450956 systemd[1]: Starting systemd-udevd.service... May 13 00:41:03.463911 systemd-udevd[400]: Using default interface naming scheme 'v252'. May 13 00:41:03.468855 systemd[1]: Started systemd-udevd.service. May 13 00:41:03.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:03.470556 systemd[1]: Starting dracut-pre-trigger.service... May 13 00:41:03.480861 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation May 13 00:41:03.509102 systemd[1]: Finished dracut-pre-trigger.service. May 13 00:41:03.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:03.510295 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:41:03.541578 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:41:03.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:03.570806 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 00:41:03.576517 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 00:41:03.576530 kernel: GPT:9289727 != 19775487 May 13 00:41:03.576539 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 00:41:03.576547 kernel: GPT:9289727 != 19775487 May 13 00:41:03.576556 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:41:03.576564 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:41:03.578701 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:41:03.586706 kernel: libata version 3.00 loaded. May 13 00:41:03.592298 kernel: AVX2 version of gcm_enc/dec engaged. 
May 13 00:41:03.592319 kernel: AES CTR mode by8 optimization enabled May 13 00:41:03.595898 kernel: ahci 0000:00:1f.2: version 3.0 May 13 00:41:03.599874 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 13 00:41:03.599886 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 13 00:41:03.599968 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 13 00:41:03.600074 kernel: scsi host0: ahci May 13 00:41:03.600184 kernel: scsi host1: ahci May 13 00:41:03.600272 kernel: scsi host2: ahci May 13 00:41:03.600367 kernel: scsi host3: ahci May 13 00:41:03.600453 kernel: scsi host4: ahci May 13 00:41:03.600533 kernel: scsi host5: ahci May 13 00:41:03.600611 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 13 00:41:03.600620 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 13 00:41:03.600632 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 13 00:41:03.600640 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 13 00:41:03.600648 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 13 00:41:03.600657 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 13 00:41:03.608700 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (440) May 13 00:41:03.609551 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 00:41:03.646720 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 00:41:03.652525 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 00:41:03.658270 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:41:03.661507 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 00:41:03.663541 systemd[1]: Starting disk-uuid.service... May 13 00:41:03.673858 disk-uuid[517]: Primary Header is updated. May 13 00:41:03.673858 disk-uuid[517]: Secondary Entries is updated. May 13 00:41:03.673858 disk-uuid[517]: Secondary Header is updated. May 13 00:41:03.678713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:41:03.682720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:41:03.913950 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 13 00:41:03.914032 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 13 00:41:03.914043 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 13 00:41:03.914051 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 13 00:41:03.914060 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 13 00:41:03.915718 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 13 00:41:03.916748 kernel: ata3.00: applying bridge limits May 13 00:41:03.917708 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 13 00:41:03.918714 kernel: ata3.00: configured for UDMA/100 May 13 00:41:03.919711 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 13 00:41:03.956107 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 13 00:41:03.973608 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 00:41:03.973620 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 00:41:04.689383 disk-uuid[518]: The operation has completed successfully. May 13 00:41:04.690615 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:41:04.710150 systemd[1]: disk-uuid.service: Deactivated successfully. 
May 13 00:41:04.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:04.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:04.710238 systemd[1]: Finished disk-uuid.service. May 13 00:41:04.716699 systemd[1]: Starting verity-setup.service... May 13 00:41:04.747709 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 13 00:41:04.766279 systemd[1]: Found device dev-mapper-usr.device. May 13 00:41:04.768457 systemd[1]: Mounting sysusr-usr.mount... May 13 00:41:04.771490 systemd[1]: Finished verity-setup.service. May 13 00:41:04.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:04.839639 systemd[1]: Mounted sysusr-usr.mount. May 13 00:41:04.841146 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 00:41:04.840509 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 00:41:04.841170 systemd[1]: Starting ignition-setup.service... May 13 00:41:04.843812 systemd[1]: Starting parse-ip-for-networkd.service... May 13 00:41:04.849799 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:41:04.849824 kernel: BTRFS info (device vda6): using free space tree May 13 00:41:04.849833 kernel: BTRFS info (device vda6): has skinny extents May 13 00:41:04.857106 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:41:04.897858 systemd[1]: Finished parse-ip-for-networkd.service. May 13 00:41:04.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:04.914000 audit: BPF prog-id=9 op=LOAD May 13 00:41:04.915457 systemd[1]: Starting systemd-networkd.service... May 13 00:41:04.935393 systemd-networkd[709]: lo: Link UP May 13 00:41:04.935403 systemd-networkd[709]: lo: Gained carrier May 13 00:41:04.950830 systemd-networkd[709]: Enumeration completed May 13 00:41:04.950914 systemd[1]: Started systemd-networkd.service. May 13 00:41:04.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:04.951558 systemd[1]: Reached target network.target. May 13 00:41:04.954332 systemd[1]: Starting iscsiuio.service... May 13 00:41:04.956322 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:41:04.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:04.958087 systemd[1]: Started iscsiuio.service. May 13 00:41:04.959734 systemd-networkd[709]: eth0: Link UP May 13 00:41:04.959739 systemd-networkd[709]: eth0: Gained carrier May 13 00:41:04.960444 systemd[1]: Starting iscsid.service... 
May 13 00:41:04.964105 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 00:41:04.964105 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 13 00:41:04.964105 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 00:41:04.964105 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 00:41:04.964105 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 00:41:04.964105 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 00:41:04.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:04.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:04.964415 systemd[1]: Started iscsid.service. May 13 00:41:04.972294 systemd[1]: Finished ignition-setup.service. May 13 00:41:04.974622 systemd[1]: Starting dracut-initqueue.service... May 13 00:41:04.974771 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:41:04.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:04.976484 systemd[1]: Starting ignition-fetch-offline.service... May 13 00:41:04.984800 systemd[1]: Finished dracut-initqueue.service. May 13 00:41:04.986605 systemd[1]: Reached target remote-fs-pre.target. May 13 00:41:04.987220 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:41:04.987421 systemd[1]: Reached target remote-fs.target. May 13 00:41:04.988519 systemd[1]: Starting dracut-pre-mount.service... May 13 00:41:04.996212 systemd[1]: Finished dracut-pre-mount.service. May 13 00:41:04.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:05.020526 ignition[717]: Ignition 2.14.0 May 13 00:41:05.020540 ignition[717]: Stage: fetch-offline May 13 00:41:05.020596 ignition[717]: no configs at "/usr/lib/ignition/base.d" May 13 00:41:05.020607 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:05.020749 ignition[717]: parsed url from cmdline: "" May 13 00:41:05.020754 ignition[717]: no config URL provided May 13 00:41:05.020759 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:41:05.020768 ignition[717]: no config at "/usr/lib/ignition/user.ign" May 13 00:41:05.020791 ignition[717]: op(1): [started] loading QEMU firmware config module May 13 00:41:05.020797 ignition[717]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 00:41:05.024927 ignition[717]: op(1): [finished] loading QEMU firmware config module May 13 00:41:05.045491 ignition[717]: parsing config with SHA512: 91301948df14762cde5ed543ec6020d0c84d396c5e88fbf6b83fd3f53ced5047fbb20f176985fa2f0421875b908bfb88bdb38c6bc53c09ae1cf26e272bac3da3 May 13 00:41:05.051512 unknown[717]: fetched base config from "system" May 13 00:41:05.051528 unknown[717]: fetched user config from "qemu" May 13 00:41:05.053671 ignition[717]: fetch-offline: fetch-offline passed May 13 00:41:05.054551 ignition[717]: Ignition finished successfully May 13 00:41:05.056025 systemd[1]: Finished ignition-fetch-offline.service. May 13 00:41:05.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:05.056568 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:41:05.057400 systemd[1]: Starting ignition-kargs.service... May 13 00:41:05.067625 ignition[737]: Ignition 2.14.0 May 13 00:41:05.067633 ignition[737]: Stage: kargs May 13 00:41:05.067722 ignition[737]: no configs at "/usr/lib/ignition/base.d" May 13 00:41:05.067730 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:05.070286 systemd[1]: Finished ignition-kargs.service. May 13 00:41:05.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:05.068573 ignition[737]: kargs: kargs passed May 13 00:41:05.072703 systemd[1]: Starting ignition-disks.service... May 13 00:41:05.068600 ignition[737]: Ignition finished successfully May 13 00:41:05.078529 ignition[743]: Ignition 2.14.0 May 13 00:41:05.078537 ignition[743]: Stage: disks May 13 00:41:05.078604 ignition[743]: no configs at "/usr/lib/ignition/base.d" May 13 00:41:05.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:05.080067 systemd[1]: Finished ignition-disks.service. May 13 00:41:05.078612 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:05.081044 systemd[1]: Reached target initrd-root-device.target. May 13 00:41:05.079401 ignition[743]: disks: disks passed May 13 00:41:05.082869 systemd[1]: Reached target local-fs-pre.target. May 13 00:41:05.079426 ignition[743]: Ignition finished successfully May 13 00:41:05.083795 systemd[1]: Reached target local-fs.target. 
May 13 00:41:05.084658 systemd[1]: Reached target sysinit.target. May 13 00:41:05.086289 systemd[1]: Reached target basic.target. May 13 00:41:05.087932 systemd[1]: Starting systemd-fsck-root.service... May 13 00:41:05.099158 systemd-fsck[751]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 13 00:41:05.104239 systemd[1]: Finished systemd-fsck-root.service. May 13 00:41:05.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:05.105854 systemd[1]: Mounting sysroot.mount... May 13 00:41:05.111449 systemd[1]: Mounted sysroot.mount. May 13 00:41:05.112665 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 13 00:41:05.112046 systemd[1]: Reached target initrd-root-fs.target. May 13 00:41:05.114534 systemd[1]: Mounting sysroot-usr.mount... May 13 00:41:05.116131 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 00:41:05.116169 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:41:05.116192 systemd[1]: Reached target ignition-diskful.target. May 13 00:41:05.122673 systemd[1]: Mounted sysroot-usr.mount. May 13 00:41:05.124140 systemd[1]: Starting initrd-setup-root.service... May 13 00:41:05.129710 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:41:05.133794 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory May 13 00:41:05.136364 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:41:05.139256 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:41:05.163483 systemd[1]: Finished initrd-setup-root.service. May 13 00:41:05.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:05.165004 systemd[1]: Starting ignition-mount.service... May 13 00:41:05.166247 systemd[1]: Starting sysroot-boot.service... May 13 00:41:05.170341 bash[802]: umount: /sysroot/usr/share/oem: not mounted. May 13 00:41:05.177308 ignition[804]: INFO : Ignition 2.14.0 May 13 00:41:05.177308 ignition[804]: INFO : Stage: mount May 13 00:41:05.179111 ignition[804]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:41:05.179111 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:05.179111 ignition[804]: INFO : mount: mount passed May 13 00:41:05.179111 ignition[804]: INFO : Ignition finished successfully May 13 00:41:05.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:05.179214 systemd[1]: Finished ignition-mount.service. May 13 00:41:05.184668 systemd[1]: Finished sysroot-boot.service. May 13 00:41:05.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:05.777137 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
May 13 00:41:05.782703 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) May 13 00:41:05.784976 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:41:05.784995 kernel: BTRFS info (device vda6): using free space tree May 13 00:41:05.785005 kernel: BTRFS info (device vda6): has skinny extents May 13 00:41:05.788612 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 00:41:05.790295 systemd[1]: Starting ignition-files.service... May 13 00:41:05.803548 ignition[832]: INFO : Ignition 2.14.0 May 13 00:41:05.803548 ignition[832]: INFO : Stage: files May 13 00:41:05.805518 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:41:05.805518 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:05.805518 ignition[832]: DEBUG : files: compiled without relabeling support, skipping May 13 00:41:05.809778 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:41:05.809778 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:41:05.809778 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:41:05.809778 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:41:05.809778 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:41:05.809778 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:41:05.809778 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 00:41:05.808474 unknown[832]: wrote ssh authorized keys file for user: core May 13 00:41:05.896864 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 00:41:06.016945 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/etc/flatcar/update.conf" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 00:41:06.019013 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 13 00:41:06.103803 systemd-networkd[709]: eth0: Gained IPv6LL May 13 00:41:06.618082 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 00:41:07.707147 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 00:41:07.707147 ignition[832]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 00:41:07.711009 ignition[832]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:41:07.711009 ignition[832]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:41:07.711009 ignition[832]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 00:41:07.711009 ignition[832]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 13 00:41:07.711009 ignition[832]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:41:07.711009 ignition[832]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:41:07.711009 ignition[832]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 13 00:41:07.711009 ignition[832]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 13 00:41:07.726129 ignition[832]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:41:07.726129 ignition[832]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:41:07.726129 ignition[832]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:41:07.744840 ignition[832]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:41:07.746969 ignition[832]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:41:07.748595 ignition[832]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:41:07.750579 ignition[832]: INFO : files: createResultFile: createFiles: op(12): [finished] writing 
file "/sysroot/etc/.ignition-result.json" May 13 00:41:07.752595 ignition[832]: INFO : files: files passed May 13 00:41:07.753488 ignition[832]: INFO : Ignition finished successfully May 13 00:41:07.755730 systemd[1]: Finished ignition-files.service. May 13 00:41:07.762231 kernel: kauditd_printk_skb: 23 callbacks suppressed May 13 00:41:07.762256 kernel: audit: type=1130 audit(1747096867.755:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.762264 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 00:41:07.762993 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 00:41:07.763598 systemd[1]: Starting ignition-quench.service... May 13 00:41:07.767572 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:41:07.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.767658 systemd[1]: Finished ignition-quench.service. May 13 00:41:07.777171 kernel: audit: type=1130 audit(1747096867.768:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.777193 kernel: audit: type=1131 audit(1747096867.768:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.774152 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 00:41:07.778926 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 13 00:41:07.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.778860 systemd[1]: Reached target ignition-complete.target. May 13 00:41:07.787788 kernel: audit: type=1130 audit(1747096867.778:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.787809 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:41:07.781658 systemd[1]: Starting initrd-parse-etc.service... May 13 00:41:07.795627 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:41:07.795740 systemd[1]: Finished initrd-parse-etc.service. May 13 00:41:07.806112 kernel: audit: type=1130 audit(1747096867.795:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 00:41:07.806141 kernel: audit: type=1131 audit(1747096867.795:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.796485 systemd[1]: Reached target initrd-fs.target. May 13 00:41:07.806386 systemd[1]: Reached target initrd.target. May 13 00:41:07.808410 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 00:41:07.809614 systemd[1]: Starting dracut-pre-pivot.service... May 13 00:41:07.824414 systemd[1]: Finished dracut-pre-pivot.service. May 13 00:41:07.830169 kernel: audit: type=1130 audit(1747096867.823:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.825420 systemd[1]: Starting initrd-cleanup.service... May 13 00:41:07.835442 systemd[1]: Stopped target nss-lookup.target. May 13 00:41:07.835975 systemd[1]: Stopped target remote-cryptsetup.target. May 13 00:41:07.836306 systemd[1]: Stopped target timers.target. May 13 00:41:07.839101 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:41:07.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.839208 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:41:07.840557 systemd[1]: Stopped target initrd.target. May 13 00:41:07.847721 kernel: audit: type=1131 audit(1747096867.839:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.845927 systemd[1]: Stopped target basic.target. May 13 00:41:07.847368 systemd[1]: Stopped target ignition-complete.target. May 13 00:41:07.847958 systemd[1]: Stopped target ignition-diskful.target. May 13 00:41:07.850031 systemd[1]: Stopped target initrd-root-device.target. May 13 00:41:07.851497 systemd[1]: Stopped target remote-fs.target. May 13 00:41:07.853116 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:41:07.854653 systemd[1]: Stopped target sysinit.target. May 13 00:41:07.856137 systemd[1]: Stopped target local-fs.target. May 13 00:41:07.857933 systemd[1]: Stopped target local-fs-pre.target. May 13 00:41:07.859473 systemd[1]: Stopped target swap.target. May 13 00:41:07.860849 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:41:07.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:07.860968 systemd[1]: Stopped dracut-pre-mount.service. May 13 00:41:07.869027 kernel: audit: type=1131 audit(1747096867.861:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.862477 systemd[1]: Stopped target cryptsetup.target. May 13 00:41:07.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.863902 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:41:07.875282 kernel: audit: type=1131 audit(1747096867.867:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.864029 systemd[1]: Stopped dracut-initqueue.service. May 13 00:41:07.868542 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:41:07.868642 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:41:07.870108 systemd[1]: Stopped target paths.target. May 13 00:41:07.874357 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:41:07.879659 systemd[1]: Stopped systemd-ask-password-console.path. May 13 00:41:07.881450 systemd[1]: Stopped target slices.target. May 13 00:41:07.881784 systemd[1]: Stopped target sockets.target. May 13 00:41:07.883344 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:41:07.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.883451 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:41:07.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.884827 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:41:07.884936 systemd[1]: Stopped ignition-files.service. May 13 00:41:07.887750 systemd[1]: Stopping ignition-mount.service... May 13 00:41:07.888597 systemd[1]: Stopping iscsid.service... May 13 00:41:07.897263 ignition[872]: INFO : Ignition 2.14.0 May 13 00:41:07.897263 ignition[872]: INFO : Stage: umount May 13 00:41:07.897263 ignition[872]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:41:07.897263 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:07.897263 ignition[872]: INFO : umount: umount passed May 13 00:41:07.897263 ignition[872]: INFO : Ignition finished successfully May 13 00:41:07.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 00:41:07.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.909375 iscsid[714]: iscsid shutting down. May 13 00:41:07.889983 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:41:07.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.890114 systemd[1]: Stopped kmod-static-nodes.service. May 13 00:41:07.892115 systemd[1]: Stopping sysroot-boot.service... May 13 00:41:07.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.895531 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:41:07.895692 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:41:07.897378 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:41:07.897466 systemd[1]: Stopped dracut-pre-trigger.service. May 13 00:41:07.900469 systemd[1]: iscsid.service: Deactivated successfully. May 13 00:41:07.900545 systemd[1]: Stopped iscsid.service. May 13 00:41:07.902439 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:41:07.902518 systemd[1]: Stopped ignition-mount.service. May 13 00:41:07.904526 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:41:07.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.904589 systemd[1]: Closed iscsid.socket. May 13 00:41:07.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.905663 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:41:07.905708 systemd[1]: Stopped ignition-disks.service. 
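The umount stage above interleaves systemd status lines with kernel audit SERVICE_START/SERVICE_STOP records for the same units, so the same shutdown sequence can be read from either stream. A Python sketch that scans captured console text for those audit records and lists the event sequence per unit; the record shape follows what is visible in this log, and the sample string is only an abbreviated excerpt.

    import re
    from collections import defaultdict

    AUDIT_RE = re.compile(r"audit\[1\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@.-]+)")

    def service_events(log_text):
        """Map unit name -> ordered SERVICE_START/SERVICE_STOP events seen for it."""
        events = defaultdict(list)
        for kind, unit in AUDIT_RE.findall(log_text):
            events[unit].append(kind)
        return events

    sample = ("audit[1]: SERVICE_START pid=1 ... msg='unit=ignition-mount ...' "
              "audit[1]: SERVICE_STOP pid=1 ... msg='unit=ignition-mount ...'")
    for unit, seq in service_events(sample).items():
        print(unit, "->", " ".join(seq))   # ignition-mount -> SERVICE_START SERVICE_STOP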
May 13 00:41:07.907525 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:41:07.907558 systemd[1]: Stopped ignition-kargs.service. May 13 00:41:07.908431 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:41:07.934000 audit: BPF prog-id=6 op=UNLOAD May 13 00:41:07.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.908461 systemd[1]: Stopped ignition-setup.service. May 13 00:41:07.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.909414 systemd[1]: Stopping iscsiuio.service... May 13 00:41:07.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.910933 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:41:07.911295 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:41:07.911357 systemd[1]: Finished initrd-cleanup.service. May 13 00:41:07.912757 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 00:41:07.912823 systemd[1]: Stopped iscsiuio.service. May 13 00:41:07.915128 systemd[1]: Stopped target network.target. May 13 00:41:07.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.915941 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:41:07.915968 systemd[1]: Closed iscsiuio.socket. May 13 00:41:07.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.916456 systemd[1]: Stopping systemd-networkd.service... May 13 00:41:07.917066 systemd[1]: Stopping systemd-resolved.service... May 13 00:41:07.923720 systemd-networkd[709]: eth0: DHCPv6 lease lost May 13 00:41:07.952000 audit: BPF prog-id=9 op=UNLOAD May 13 00:41:07.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.925098 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:41:07.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.925259 systemd[1]: Stopped systemd-networkd.service. May 13 00:41:07.927811 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:41:07.927877 systemd[1]: Stopped systemd-resolved.service. 
May 13 00:41:07.931154 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:41:07.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.931177 systemd[1]: Closed systemd-networkd.socket. May 13 00:41:07.933450 systemd[1]: Stopping network-cleanup.service... May 13 00:41:07.934420 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:41:07.934456 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 00:41:07.936414 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:41:07.936446 systemd[1]: Stopped systemd-sysctl.service. May 13 00:41:07.938283 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:41:07.938315 systemd[1]: Stopped systemd-modules-load.service. May 13 00:41:07.939260 systemd[1]: Stopping systemd-udevd.service... May 13 00:41:07.943163 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:41:07.945220 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:41:07.945287 systemd[1]: Stopped network-cleanup.service. May 13 00:41:07.947474 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:41:07.947569 systemd[1]: Stopped systemd-udevd.service. May 13 00:41:07.950056 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:41:07.950085 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:41:07.951653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:41:07.951675 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:41:07.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.953239 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:41:07.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:07.953272 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:41:07.954190 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:41:07.954220 systemd[1]: Stopped dracut-cmdline.service. May 13 00:41:07.955785 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:41:07.955813 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 00:41:07.957186 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 00:41:07.958472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:41:07.958507 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 00:41:07.961626 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:41:07.961701 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 00:41:07.980109 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:41:07.980190 systemd[1]: Stopped sysroot-boot.service. May 13 00:41:07.981342 systemd[1]: Reached target initrd-switch-root.target. 
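The teardown above is also mirrored in BPF lifecycle audit records ("audit: BPF prog-id=N op=LOAD/UNLOAD"), presumably the cgroup and socket filter programs systemd had attached being released as their units stop; the log itself only shows the load and unload events. A Python sketch that tallies those records from captured console text and reports program ids seen loaded but never unloaded; the sample string is an abbreviated excerpt in the same format.

    import re

    BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def unbalanced_progs(log_text):
        """Return BPF prog-ids seen loaded in the text but never unloaded."""
        loaded = set()
        for prog_id, op in BPF_RE.findall(log_text):
            if op == "LOAD":
                loaded.add(prog_id)
            else:
                loaded.discard(prog_id)
        return loaded

    sample = ("audit: BPF prog-id=12 op=LOAD ... audit: BPF prog-id=12 op=UNLOAD ... "
              "audit: BPF prog-id=15 op=LOAD")
    print(unbalanced_progs(sample))   # {'15'}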
May 13 00:41:07.982954 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:41:07.982994 systemd[1]: Stopped initrd-setup-root.service. May 13 00:41:07.984435 systemd[1]: Starting initrd-switch-root.service... May 13 00:41:08.000273 systemd[1]: Switching root. May 13 00:41:08.019415 systemd-journald[199]: Journal stopped May 13 00:41:10.865378 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). May 13 00:41:10.865433 kernel: SELinux: Class mctp_socket not defined in policy. May 13 00:41:10.865449 kernel: SELinux: Class anon_inode not defined in policy. May 13 00:41:10.865459 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 00:41:10.865468 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:41:10.865480 kernel: SELinux: policy capability open_perms=1 May 13 00:41:10.865489 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:41:10.865498 kernel: SELinux: policy capability always_check_network=0 May 13 00:41:10.865508 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:41:10.865520 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:41:10.865533 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:41:10.865542 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:41:10.865552 systemd[1]: Successfully loaded SELinux policy in 48.810ms. May 13 00:41:10.865567 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.193ms. May 13 00:41:10.865579 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:41:10.865589 systemd[1]: Detected virtualization kvm. May 13 00:41:10.865604 systemd[1]: Detected architecture x86-64. May 13 00:41:10.865613 systemd[1]: Detected first boot. May 13 00:41:10.865624 systemd[1]: Initializing machine ID from VM UUID. May 13 00:41:10.865633 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 00:41:10.865643 systemd[1]: Populated /etc with preset unit settings. May 13 00:41:10.865653 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:10.865665 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:10.865676 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:10.865714 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:41:10.865727 systemd[1]: Stopped initrd-switch-root.service. May 13 00:41:10.865738 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:41:10.865748 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 00:41:10.865758 systemd[1]: Created slice system-addon\x2drun.slice. May 13 00:41:10.865771 systemd[1]: Created slice system-getty.slice. May 13 00:41:10.865781 systemd[1]: Created slice system-modprobe.slice. 
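After the switch to the real root, systemd 252 prints its compile-time feature string above, with "+" marking features built in and "-" marking features built out (the trailing default-hierarchy=unified is a setting, not a flag). A Python sketch splitting that string, copied from the log, into enabled and disabled sets:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
                "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")

    enabled = {tok[1:] for tok in features.split() if tok.startswith("+")}
    disabled = {tok[1:] for tok in features.split() if tok.startswith("-")}
    print(f"{len(enabled)} features enabled, {len(disabled)} disabled")
    print("SELINUX" in enabled)   # True, consistent with the policy load above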
May 13 00:41:10.865791 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 00:41:10.865803 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:41:10.865814 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:41:10.865825 systemd[1]: Created slice user.slice. May 13 00:41:10.865839 systemd[1]: Started systemd-ask-password-console.path. May 13 00:41:10.865858 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:41:10.865870 systemd[1]: Set up automount boot.automount. May 13 00:41:10.865880 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:41:10.865890 systemd[1]: Stopped target initrd-switch-root.target. May 13 00:41:10.865900 systemd[1]: Stopped target initrd-fs.target. May 13 00:41:10.865910 systemd[1]: Stopped target initrd-root-fs.target. May 13 00:41:10.865920 systemd[1]: Reached target integritysetup.target. May 13 00:41:10.865932 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:41:10.865942 systemd[1]: Reached target remote-fs.target. May 13 00:41:10.865952 systemd[1]: Reached target slices.target. May 13 00:41:10.865962 systemd[1]: Reached target swap.target. May 13 00:41:10.865972 systemd[1]: Reached target torcx.target. May 13 00:41:10.865982 systemd[1]: Reached target veritysetup.target. May 13 00:41:10.865992 systemd[1]: Listening on systemd-coredump.socket. May 13 00:41:10.866003 systemd[1]: Listening on systemd-initctl.socket. May 13 00:41:10.866013 systemd[1]: Listening on systemd-networkd.socket. May 13 00:41:10.866024 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:41:10.866034 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:41:10.866045 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:41:10.866055 systemd[1]: Mounting dev-hugepages.mount... May 13 00:41:10.866065 systemd[1]: Mounting dev-mqueue.mount... May 13 00:41:10.866075 systemd[1]: Mounting media.mount... May 13 00:41:10.866087 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:10.866098 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:41:10.866108 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:41:10.866118 systemd[1]: Mounting tmp.mount... May 13 00:41:10.866130 systemd[1]: Starting flatcar-tmpfiles.service... May 13 00:41:10.866141 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:10.866152 systemd[1]: Starting kmod-static-nodes.service... May 13 00:41:10.866162 systemd[1]: Starting modprobe@configfs.service... May 13 00:41:10.866172 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:10.866183 systemd[1]: Starting modprobe@drm.service... May 13 00:41:10.866193 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:10.866204 systemd[1]: Starting modprobe@fuse.service... May 13 00:41:10.866214 systemd[1]: Starting modprobe@loop.service... May 13 00:41:10.866226 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:41:10.866236 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:41:10.866246 systemd[1]: Stopped systemd-fsck-root.service. May 13 00:41:10.866256 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:41:10.866267 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:41:10.866278 systemd[1]: Stopped systemd-journald.service. 
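Several slice names created above appear with systemd's escaping, where "\x2d" stands in for a literal "-" inside the unit's own name (for example system-serial\x2dgetty.slice). A minimal Python sketch of undoing just that \xNN escaping, roughly what systemd-escape --unescape would do for these simple cases; full systemd escaping also maps "/" to "-", which this sketch deliberately does not try to reverse.

    import re

    def unescape_unit(name):
        """Decode systemd \\xNN escapes in a unit name, e.g. \\x2d -> '-'."""
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)),
                      name)

    for unit in (r"system-serial\x2dgetty.slice",
                 r"system-addon\x2dconfig.slice",
                 r"system-systemd\x2dfsck.slice"):
        print(unit, "->", unescape_unit(unit))
    # system-serial\x2dgetty.slice -> system-serial-getty.slice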
May 13 00:41:10.866288 kernel: loop: module loaded May 13 00:41:10.866298 kernel: fuse: init (API version 7.34) May 13 00:41:10.866309 systemd[1]: Starting systemd-journald.service... May 13 00:41:10.866321 systemd[1]: Starting systemd-modules-load.service... May 13 00:41:10.866332 systemd[1]: Starting systemd-network-generator.service... May 13 00:41:10.866343 systemd[1]: Starting systemd-remount-fs.service... May 13 00:41:10.866353 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:41:10.866363 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:41:10.866373 systemd[1]: Stopped verity-setup.service. May 13 00:41:10.866384 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:10.866396 systemd-journald[986]: Journal started May 13 00:41:10.866437 systemd-journald[986]: Runtime Journal (/run/log/journal/8e034cd90178427a81cb69b7177b4aeb) is 6.0M, max 48.5M, 42.5M free. May 13 00:41:08.090000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:41:08.628000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:41:08.628000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:41:08.628000 audit: BPF prog-id=10 op=LOAD May 13 00:41:08.628000 audit: BPF prog-id=10 op=UNLOAD May 13 00:41:08.628000 audit: BPF prog-id=11 op=LOAD May 13 00:41:08.628000 audit: BPF prog-id=11 op=UNLOAD May 13 00:41:08.658000 audit[905]: AVC avc: denied { associate } for pid=905 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 13 00:41:08.658000 audit[905]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001d58a2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:08.658000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:41:08.660000 audit[905]: AVC avc: denied { associate } for pid=905 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 13 00:41:08.660000 audit[905]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d5979 a2=1ed a3=0 items=2 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:08.660000 audit: CWD cwd="/" May 13 00:41:08.660000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 00:41:08.660000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:08.660000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:41:10.742000 audit: BPF prog-id=12 op=LOAD May 13 00:41:10.742000 audit: BPF prog-id=3 op=UNLOAD May 13 00:41:10.742000 audit: BPF prog-id=13 op=LOAD May 13 00:41:10.742000 audit: BPF prog-id=14 op=LOAD May 13 00:41:10.742000 audit: BPF prog-id=4 op=UNLOAD May 13 00:41:10.742000 audit: BPF prog-id=5 op=UNLOAD May 13 00:41:10.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.754000 audit: BPF prog-id=12 op=UNLOAD May 13 00:41:10.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.846000 audit: BPF prog-id=15 op=LOAD May 13 00:41:10.846000 audit: BPF prog-id=16 op=LOAD May 13 00:41:10.846000 audit: BPF prog-id=17 op=LOAD May 13 00:41:10.846000 audit: BPF prog-id=13 op=UNLOAD May 13 00:41:10.846000 audit: BPF prog-id=14 op=UNLOAD May 13 00:41:10.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:10.863000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:41:10.863000 audit[986]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff88532bf0 a2=4000 a3=7fff88532c8c items=0 ppid=1 pid=986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:10.863000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:41:10.741215 systemd[1]: Queued start job for default target multi-user.target. May 13 00:41:08.658003 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:10.741226 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 00:41:08.658201 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:41:10.743757 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:41:08.658216 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:41:08.658243 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 13 00:41:08.658252 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="skipped missing lower profile" missing profile=oem May 13 00:41:08.658278 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 13 00:41:08.658289 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 13 00:41:08.658459 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 13 00:41:08.658490 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:41:08.658502 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:41:08.659021 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 13 00:41:08.659061 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 13 
00:41:08.659082 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 13 00:41:08.659100 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 13 00:41:10.868700 systemd[1]: Started systemd-journald.service. May 13 00:41:10.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:08.659119 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 13 00:41:08.659137 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:08Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 13 00:41:10.486356 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:10Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:10.486603 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:10Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:10.486717 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:10Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:10.486871 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:10Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:10.486915 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:10Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 13 00:41:10.486972 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-05-13T00:41:10Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 13 00:41:10.869142 systemd[1]: Mounted dev-hugepages.mount. May 13 00:41:10.869988 systemd[1]: Mounted dev-mqueue.mount. May 13 00:41:10.870762 systemd[1]: Mounted media.mount. May 13 00:41:10.871493 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:41:10.872335 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:41:10.873194 systemd[1]: Mounted tmp.mount. 
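The torcx-generator audit records further up carry the generator's command line as a PROCTITLE hex blob with NUL-separated arguments. A Python sketch that decodes such a value back into argv; the hex below is copied from the record above, which the console output clips, so the final argument comes out clipped here as well.

    def decode_proctitle(hex_value):
        """Decode an audit PROCTITLE hex payload into its NUL-separated argv."""
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

    # PROCTITLE value from the torcx-generator record above (truncated in the log)
    proctitle = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F"
                 "746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261"
                 "746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72"
                 "756E2F73797374656D642F67656E657261746F722E6C61")

    for arg in decode_proctitle(proctitle):
        print(arg)
    # /usr/lib/systemd/system-generators/torcx-generator
    # /run/systemd/generator
    # /run/systemd/generator.early
    # /run/systemd/generator.la   (clipped, as in the log)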
May 13 00:41:10.874107 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:41:10.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.875224 systemd[1]: Finished kmod-static-nodes.service. May 13 00:41:10.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.876246 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:41:10.876408 systemd[1]: Finished modprobe@configfs.service. May 13 00:41:10.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.877415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:10.877550 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:10.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.878554 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:41:10.878719 systemd[1]: Finished modprobe@drm.service. May 13 00:41:10.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.879874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:10.880020 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:10.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.881055 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:41:10.881188 systemd[1]: Finished modprobe@fuse.service. May 13 00:41:10.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:10.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.882157 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:10.882311 systemd[1]: Finished modprobe@loop.service. May 13 00:41:10.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.883355 systemd[1]: Finished systemd-modules-load.service. May 13 00:41:10.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.884472 systemd[1]: Finished systemd-network-generator.service. May 13 00:41:10.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.885621 systemd[1]: Finished systemd-remount-fs.service. May 13 00:41:10.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.886840 systemd[1]: Reached target network-pre.target. May 13 00:41:10.888592 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:41:10.890376 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:41:10.891109 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:41:10.892227 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:41:10.894057 systemd[1]: Starting systemd-journal-flush.service... May 13 00:41:10.895083 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:10.899005 systemd[1]: Starting systemd-random-seed.service... May 13 00:41:10.899916 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:10.900773 systemd[1]: Starting systemd-sysctl.service... May 13 00:41:10.902489 systemd[1]: Starting systemd-sysusers.service... May 13 00:41:10.904797 systemd-journald[986]: Time spent on flushing to /var/log/journal/8e034cd90178427a81cb69b7177b4aeb is 13.785ms for 1090 entries. May 13 00:41:10.904797 systemd-journald[986]: System Journal (/var/log/journal/8e034cd90178427a81cb69b7177b4aeb) is 8.0M, max 195.6M, 187.6M free. May 13 00:41:10.932611 systemd-journald[986]: Received client request to flush runtime journal. May 13 00:41:10.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:10.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.905651 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:41:10.907083 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:41:10.908402 systemd[1]: Finished systemd-random-seed.service. May 13 00:41:10.909366 systemd[1]: Reached target first-boot-complete.target. May 13 00:41:10.918268 systemd[1]: Finished systemd-sysusers.service. May 13 00:41:10.919426 systemd[1]: Finished systemd-sysctl.service. May 13 00:41:10.933325 systemd[1]: Finished systemd-journal-flush.service. May 13 00:41:10.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.939598 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:41:10.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:10.941438 systemd[1]: Starting systemd-udev-settle.service... May 13 00:41:10.948239 udevadm[1010]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:41:11.369131 systemd[1]: Finished systemd-hwdb-update.service. May 13 00:41:11.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.370000 audit: BPF prog-id=18 op=LOAD May 13 00:41:11.370000 audit: BPF prog-id=19 op=LOAD May 13 00:41:11.370000 audit: BPF prog-id=7 op=UNLOAD May 13 00:41:11.370000 audit: BPF prog-id=8 op=UNLOAD May 13 00:41:11.371658 systemd[1]: Starting systemd-udevd.service... May 13 00:41:11.385901 systemd-udevd[1011]: Using default interface naming scheme 'v252'. May 13 00:41:11.397614 systemd[1]: Started systemd-udevd.service. May 13 00:41:11.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.398000 audit: BPF prog-id=20 op=LOAD May 13 00:41:11.400234 systemd[1]: Starting systemd-networkd.service... May 13 00:41:11.402000 audit: BPF prog-id=21 op=LOAD May 13 00:41:11.403000 audit: BPF prog-id=22 op=LOAD May 13 00:41:11.403000 audit: BPF prog-id=23 op=LOAD May 13 00:41:11.404310 systemd[1]: Starting systemd-userdbd.service... May 13 00:41:11.429948 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 13 00:41:11.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.430219 systemd[1]: Started systemd-userdbd.service. 
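systemd-journald above reports flushing the runtime journal to /var/log/journal in 13.785 ms for 1090 entries. A small Python sketch of reading that figure out of a captured line and expressing it as an average cost per entry; the line value is an illustrative copy of the record above and the regex only handles this one message format.

    import re

    line = ("systemd-journald[986]: Time spent on flushing to "
            "/var/log/journal/8e034cd90178427a81cb69b7177b4aeb "
            "is 13.785ms for 1090 entries.")

    m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
    if m:
        total_ms, entries = float(m.group(1)), int(m.group(2))
        print(f"{total_ms} ms for {entries} entries: "
              f"about {1000 * total_ms / entries:.1f} microseconds per entry")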
May 13 00:41:11.442382 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:41:11.461716 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:41:11.465705 kernel: ACPI: button: Power Button [PWRF] May 13 00:41:11.469162 systemd-networkd[1019]: lo: Link UP May 13 00:41:11.469402 systemd-networkd[1019]: lo: Gained carrier May 13 00:41:11.469885 systemd-networkd[1019]: Enumeration completed May 13 00:41:11.470034 systemd[1]: Started systemd-networkd.service. May 13 00:41:11.470041 systemd-networkd[1019]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:41:11.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.471387 systemd-networkd[1019]: eth0: Link UP May 13 00:41:11.471394 systemd-networkd[1019]: eth0: Gained carrier May 13 00:41:11.481818 systemd-networkd[1019]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:41:11.477000 audit[1017]: AVC avc: denied { confidentiality } for pid=1017 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 00:41:11.477000 audit[1017]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555f88a7a3a0 a1=338ac a2=7f231b6a0bc5 a3=5 items=110 ppid=1011 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:11.477000 audit: CWD cwd="/" May 13 00:41:11.477000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=1 name=(null) inode=912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=2 name=(null) inode=912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=3 name=(null) inode=913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=4 name=(null) inode=912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=5 name=(null) inode=914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=6 name=(null) inode=912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=7 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=8 name=(null) inode=915 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=9 name=(null) inode=916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=10 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=11 name=(null) inode=917 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=12 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=13 name=(null) inode=918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=14 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=15 name=(null) inode=919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=16 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=17 name=(null) inode=920 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=18 name=(null) inode=912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=19 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=20 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=21 name=(null) inode=922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=22 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=23 name=(null) inode=923 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=24 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=25 name=(null) inode=924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=26 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=27 name=(null) inode=925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=28 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=29 name=(null) inode=926 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=30 name=(null) inode=912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=31 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=32 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=33 name=(null) inode=928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=34 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=35 name=(null) inode=929 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=36 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=37 name=(null) inode=930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=38 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=39 name=(null) inode=931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=40 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=41 name=(null) inode=932 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=42 name=(null) inode=912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=43 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=44 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=45 name=(null) inode=934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=46 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=47 name=(null) inode=935 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=48 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=49 name=(null) inode=936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=50 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=51 name=(null) inode=937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=52 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=53 name=(null) inode=938 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=55 name=(null) inode=939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=56 name=(null) inode=939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=57 name=(null) inode=940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=58 name=(null) inode=939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=59 name=(null) inode=941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=60 name=(null) inode=939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=61 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=62 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=63 name=(null) inode=943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=64 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=65 name=(null) inode=944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=66 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=67 name=(null) inode=945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=68 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=69 name=(null) inode=946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=70 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=71 name=(null) inode=947 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=72 name=(null) inode=939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=73 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=74 name=(null) inode=948 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=75 name=(null) inode=949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=76 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=77 name=(null) inode=950 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=78 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=79 name=(null) inode=951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=80 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=81 name=(null) inode=952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=82 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=83 name=(null) inode=953 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=84 name=(null) inode=939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=85 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=86 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=87 name=(null) inode=955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=88 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=89 name=(null) inode=956 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=90 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=91 name=(null) inode=957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=92 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=93 name=(null) inode=958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=94 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=95 name=(null) inode=959 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=96 name=(null) inode=939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=97 name=(null) inode=960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=98 name=(null) inode=960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=99 name=(null) inode=961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=100 name=(null) inode=960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=101 name=(null) inode=962 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=102 name=(null) inode=960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=103 name=(null) inode=963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=104 name=(null) inode=960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=105 name=(null) inode=964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=106 name=(null) inode=960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=107 name=(null) 
inode=965 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PATH item=109 name=(null) inode=966 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:11.477000 audit: PROCTITLE proctitle="(udev-worker)" May 13 00:41:11.494702 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:41:11.500132 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 00:41:11.500336 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 00:41:11.500451 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 00:41:11.513707 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:41:11.555706 kernel: kvm: Nested Virtualization enabled May 13 00:41:11.555822 kernel: SVM: kvm: Nested Paging enabled May 13 00:41:11.555849 kernel: SVM: Virtual VMLOAD VMSAVE supported May 13 00:41:11.555868 kernel: SVM: Virtual GIF supported May 13 00:41:11.572703 kernel: EDAC MC: Ver: 3.0.0 May 13 00:41:11.601050 systemd[1]: Finished systemd-udev-settle.service. May 13 00:41:11.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.603000 systemd[1]: Starting lvm2-activation-early.service... May 13 00:41:11.609904 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:41:11.634242 systemd[1]: Finished lvm2-activation-early.service. May 13 00:41:11.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.635219 systemd[1]: Reached target cryptsetup.target. May 13 00:41:11.636877 systemd[1]: Starting lvm2-activation.service... May 13 00:41:11.639752 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:41:11.662938 systemd[1]: Finished lvm2-activation.service. May 13 00:41:11.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.663929 systemd[1]: Reached target local-fs-pre.target. May 13 00:41:11.664788 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:41:11.664805 systemd[1]: Reached target local-fs.target. May 13 00:41:11.665589 systemd[1]: Reached target machines.target. May 13 00:41:11.667368 systemd[1]: Starting ldconfig.service... May 13 00:41:11.668334 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:11.668369 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 13 00:41:11.669153 systemd[1]: Starting systemd-boot-update.service... May 13 00:41:11.670584 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:41:11.672753 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:41:11.674577 systemd[1]: Starting systemd-sysext.service... May 13 00:41:11.676538 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1044 (bootctl) May 13 00:41:11.677321 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:41:11.680237 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:41:11.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.687335 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:41:11.691238 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:41:11.691371 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:41:11.700704 kernel: loop0: detected capacity change from 0 to 205544 May 13 00:41:11.709265 systemd-fsck[1051]: fsck.fat 4.2 (2021-01-31) May 13 00:41:11.709265 systemd-fsck[1051]: /dev/vda1: 790 files, 120692/258078 clusters May 13 00:41:11.711123 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:41:11.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.715651 systemd[1]: Mounting boot.mount... May 13 00:41:11.958701 systemd[1]: Mounted boot.mount. May 13 00:41:11.966712 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:41:11.974277 systemd[1]: Finished systemd-boot-update.service. May 13 00:41:11.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.978365 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:41:11.978860 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:41:11.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:11.992705 kernel: loop1: detected capacity change from 0 to 205544 May 13 00:41:11.997112 (sd-sysext)[1057]: Using extensions 'kubernetes'. May 13 00:41:11.997407 (sd-sysext)[1057]: Merged extensions into '/usr'. May 13 00:41:12.013043 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:12.014225 systemd[1]: Mounting usr-share-oem.mount... May 13 00:41:12.015391 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:12.016999 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:12.019194 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:12.021389 systemd[1]: Starting modprobe@loop.service... May 13 00:41:12.022370 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 13 00:41:12.022512 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:12.022653 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:12.025863 systemd[1]: Mounted usr-share-oem.mount. May 13 00:41:12.027125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:12.027246 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:12.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.028486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:12.028590 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:12.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.029857 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:12.029964 systemd[1]: Finished modprobe@loop.service. May 13 00:41:12.030759 ldconfig[1043]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:41:12.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.031231 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:12.031328 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:12.032449 systemd[1]: Finished systemd-sysext.service. May 13 00:41:12.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.034522 systemd[1]: Starting ensure-sysext.service... May 13 00:41:12.036337 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:41:12.037629 systemd[1]: Finished ldconfig.service. May 13 00:41:12.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.042095 systemd[1]: Reloading. 
May 13 00:41:12.049003 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:41:12.050937 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:41:12.053497 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:41:12.099991 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-05-13T00:41:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:12.100787 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-05-13T00:41:12Z" level=info msg="torcx already run" May 13 00:41:12.156299 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:12.156315 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:12.172758 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:12.223000 audit: BPF prog-id=24 op=LOAD May 13 00:41:12.223000 audit: BPF prog-id=25 op=LOAD May 13 00:41:12.223000 audit: BPF prog-id=18 op=UNLOAD May 13 00:41:12.223000 audit: BPF prog-id=19 op=UNLOAD May 13 00:41:12.224000 audit: BPF prog-id=26 op=LOAD May 13 00:41:12.224000 audit: BPF prog-id=15 op=UNLOAD May 13 00:41:12.224000 audit: BPF prog-id=27 op=LOAD May 13 00:41:12.224000 audit: BPF prog-id=28 op=LOAD May 13 00:41:12.224000 audit: BPF prog-id=16 op=UNLOAD May 13 00:41:12.224000 audit: BPF prog-id=17 op=UNLOAD May 13 00:41:12.225000 audit: BPF prog-id=29 op=LOAD May 13 00:41:12.225000 audit: BPF prog-id=20 op=UNLOAD May 13 00:41:12.226000 audit: BPF prog-id=30 op=LOAD May 13 00:41:12.226000 audit: BPF prog-id=21 op=UNLOAD May 13 00:41:12.226000 audit: BPF prog-id=31 op=LOAD May 13 00:41:12.226000 audit: BPF prog-id=32 op=LOAD May 13 00:41:12.226000 audit: BPF prog-id=22 op=UNLOAD May 13 00:41:12.226000 audit: BPF prog-id=23 op=UNLOAD May 13 00:41:12.230171 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:41:12.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.235150 systemd[1]: Starting audit-rules.service... May 13 00:41:12.237081 systemd[1]: Starting clean-ca-certificates.service... May 13 00:41:12.239177 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:41:12.240000 audit: BPF prog-id=33 op=LOAD May 13 00:41:12.241780 systemd[1]: Starting systemd-resolved.service... May 13 00:41:12.242000 audit: BPF prog-id=34 op=LOAD May 13 00:41:12.244346 systemd[1]: Starting systemd-timesyncd.service... May 13 00:41:12.246080 systemd[1]: Starting systemd-update-utmp.service... May 13 00:41:12.247563 systemd[1]: Finished clean-ca-certificates.service. 
May 13 00:41:12.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.251208 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:12.251000 audit[1137]: SYSTEM_BOOT pid=1137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:41:12.255218 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:12.255531 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:12.257547 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:12.260868 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:12.262605 systemd[1]: Starting modprobe@loop.service... May 13 00:41:12.263392 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:12.263535 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:12.263668 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:12.263787 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:12.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.265043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:12.266379 augenrules[1151]: No rules May 13 00:41:12.265154 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:12.265000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:41:12.265000 audit[1151]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb64a0b30 a2=420 a3=0 items=0 ppid=1128 pid=1151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:12.265000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:41:12.266536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:12.266676 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:12.268061 systemd[1]: Finished audit-rules.service. May 13 00:41:12.269603 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:12.269856 systemd[1]: Finished modprobe@loop.service. May 13 00:41:12.272414 systemd[1]: Finished systemd-update-utmp.service. 
May 13 00:41:12.274134 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:41:12.278240 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:12.278503 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:12.280138 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:12.282160 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:12.284251 systemd[1]: Starting modprobe@loop.service... May 13 00:41:12.285145 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:12.285293 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:12.286806 systemd[1]: Starting systemd-update-done.service... May 13 00:41:12.287716 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:12.287881 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:12.289355 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:12.289530 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:12.290889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:12.291045 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:12.292365 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:12.292530 systemd[1]: Finished modprobe@loop.service. May 13 00:41:12.293926 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:12.294107 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:12.298076 systemd[1]: Finished systemd-update-done.service. May 13 00:41:12.299459 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:12.299777 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:12.301371 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:12.303441 systemd[1]: Starting modprobe@drm.service... May 13 00:41:12.305986 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:12.306066 systemd-resolved[1132]: Positive Trust Anchors: May 13 00:41:12.306076 systemd-resolved[1132]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:41:12.306103 systemd-resolved[1132]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:41:12.308664 systemd[1]: Starting modprobe@loop.service... May 13 00:41:12.309744 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 13 00:41:12.310110 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:12.311553 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:41:12.312626 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:12.312797 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:12.314108 systemd-resolved[1132]: Defaulting to hostname 'linux'. May 13 00:41:12.314300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:12.314452 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:12.315801 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:41:12.315948 systemd[1]: Finished modprobe@drm.service. May 13 00:41:12.317086 systemd[1]: Started systemd-resolved.service. May 13 00:41:12.318381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:12.318517 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:12.319848 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:12.319979 systemd[1]: Finished modprobe@loop.service. May 13 00:41:12.321537 systemd[1]: Reached target network.target. May 13 00:41:12.322452 systemd[1]: Reached target nss-lookup.target. May 13 00:41:12.323338 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:12.323381 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:12.323500 systemd[1]: Started systemd-timesyncd.service. May 13 00:41:13.416719 systemd-resolved[1132]: Clock change detected. Flushing caches. May 13 00:41:13.416754 systemd-timesyncd[1135]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:41:13.416792 systemd-timesyncd[1135]: Initial clock synchronization to Tue 2025-05-13 00:41:13.416674 UTC. May 13 00:41:13.417534 systemd[1]: Finished ensure-sysext.service. May 13 00:41:13.419157 systemd[1]: Reached target sysinit.target. May 13 00:41:13.420008 systemd[1]: Started motdgen.path. May 13 00:41:13.420747 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:41:13.421846 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:41:13.422717 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:41:13.422743 systemd[1]: Reached target paths.target. May 13 00:41:13.423486 systemd[1]: Reached target time-set.target. May 13 00:41:13.424368 systemd[1]: Started logrotate.timer. May 13 00:41:13.425175 systemd[1]: Started mdadm.timer. May 13 00:41:13.425838 systemd[1]: Reached target timers.target. May 13 00:41:13.426849 systemd[1]: Listening on dbus.socket. May 13 00:41:13.428424 systemd[1]: Starting docker.socket... May 13 00:41:13.431012 systemd[1]: Listening on sshd.socket. May 13 00:41:13.431854 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:13.432193 systemd[1]: Listening on docker.socket. May 13 00:41:13.433027 systemd[1]: Reached target sockets.target. 
May 13 00:41:13.433814 systemd[1]: Reached target basic.target. May 13 00:41:13.434595 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:41:13.434618 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:41:13.435440 systemd[1]: Starting containerd.service... May 13 00:41:13.437115 systemd[1]: Starting dbus.service... May 13 00:41:13.438639 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:41:13.440614 systemd[1]: Starting extend-filesystems.service... May 13 00:41:13.441614 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:41:13.442807 systemd[1]: Starting motdgen.service... May 13 00:41:13.443833 jq[1170]: false May 13 00:41:13.445060 systemd[1]: Starting prepare-helm.service... May 13 00:41:13.447087 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:41:13.449137 systemd[1]: Starting sshd-keygen.service... May 13 00:41:13.453420 systemd[1]: Starting systemd-logind.service... May 13 00:41:13.454334 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:13.454391 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:41:13.454914 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:41:13.455724 systemd[1]: Starting update-engine.service... May 13 00:41:13.457432 dbus-daemon[1169]: [system] SELinux support is enabled May 13 00:41:13.458003 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:41:13.459539 systemd[1]: Started dbus.service. May 13 00:41:13.462773 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:41:13.462997 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:41:13.463801 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:41:13.464963 extend-filesystems[1171]: Found loop1 May 13 00:41:13.464963 extend-filesystems[1171]: Found sr0 May 13 00:41:13.464963 extend-filesystems[1171]: Found vda May 13 00:41:13.464963 extend-filesystems[1171]: Found vda1 May 13 00:41:13.464963 extend-filesystems[1171]: Found vda2 May 13 00:41:13.464963 extend-filesystems[1171]: Found vda3 May 13 00:41:13.464963 extend-filesystems[1171]: Found usr May 13 00:41:13.464963 extend-filesystems[1171]: Found vda4 May 13 00:41:13.464963 extend-filesystems[1171]: Found vda6 May 13 00:41:13.464963 extend-filesystems[1171]: Found vda7 May 13 00:41:13.464963 extend-filesystems[1171]: Found vda9 May 13 00:41:13.464963 extend-filesystems[1171]: Checking size of /dev/vda9 May 13 00:41:13.463944 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:41:13.498927 extend-filesystems[1171]: Resized partition /dev/vda9 May 13 00:41:13.506708 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:41:13.506735 jq[1189]: true May 13 00:41:13.506839 tar[1191]: linux-amd64/helm May 13 00:41:13.465885 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 13 00:41:13.507128 extend-filesystems[1205]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:41:13.465913 systemd[1]: Reached target system-config.target. May 13 00:41:13.510804 jq[1196]: true May 13 00:41:13.466884 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:41:13.466896 systemd[1]: Reached target user-config.target. May 13 00:41:13.478055 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:41:13.478202 systemd[1]: Finished motdgen.service. May 13 00:41:13.514521 update_engine[1187]: I0513 00:41:13.514267 1187 main.cc:92] Flatcar Update Engine starting May 13 00:41:13.516206 systemd[1]: Started update-engine.service. May 13 00:41:13.517157 update_engine[1187]: I0513 00:41:13.516258 1187 update_check_scheduler.cc:74] Next update check in 2m9s May 13 00:41:13.519305 env[1193]: time="2025-05-13T00:41:13.519255002Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:41:13.519653 systemd[1]: Started locksmithd.service. May 13 00:41:13.540405 systemd-logind[1184]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:41:13.568751 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:41:13.568829 env[1193]: time="2025-05-13T00:41:13.543016569Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:41:13.540426 systemd-logind[1184]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:41:13.569263 extend-filesystems[1205]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:41:13.569263 extend-filesystems[1205]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:41:13.569263 extend-filesystems[1205]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:41:13.574883 env[1193]: time="2025-05-13T00:41:13.568844854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.541919 systemd-logind[1184]: New seat seat0. May 13 00:41:13.574954 extend-filesystems[1171]: Resized filesystem in /dev/vda9 May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.574869819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.574895487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.575097306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.575116262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.575130028Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.575139716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.575202804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.575384084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.575485204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:13.575968 env[1193]: time="2025-05-13T00:41:13.575497277Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:41:13.545651 systemd[1]: Started systemd-logind.service. May 13 00:41:13.576270 env[1193]: time="2025-05-13T00:41:13.575534136Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:41:13.576270 env[1193]: time="2025-05-13T00:41:13.575544395Z" level=info msg="metadata content store policy set" policy=shared May 13 00:41:13.570854 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:41:13.571003 systemd[1]: Finished extend-filesystems.service. May 13 00:41:13.579284 bash[1224]: Updated "/home/core/.ssh/authorized_keys" May 13 00:41:13.579868 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:41:13.583602 env[1193]: time="2025-05-13T00:41:13.583580124Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:41:13.583646 env[1193]: time="2025-05-13T00:41:13.583604018Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:41:13.583646 env[1193]: time="2025-05-13T00:41:13.583616883Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:41:13.583646 env[1193]: time="2025-05-13T00:41:13.583642420Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:41:13.583726 env[1193]: time="2025-05-13T00:41:13.583656497Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:41:13.583726 env[1193]: time="2025-05-13T00:41:13.583669201Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:41:13.583726 env[1193]: time="2025-05-13T00:41:13.583692765Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:41:13.583726 env[1193]: time="2025-05-13T00:41:13.583708004Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:41:13.583726 env[1193]: time="2025-05-13T00:41:13.583724084Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 May 13 00:41:13.583818 env[1193]: time="2025-05-13T00:41:13.583737028Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:41:13.583818 env[1193]: time="2025-05-13T00:41:13.583748590Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:41:13.583818 env[1193]: time="2025-05-13T00:41:13.583760302Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:41:13.583876 env[1193]: time="2025-05-13T00:41:13.583833269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:41:13.583924 env[1193]: time="2025-05-13T00:41:13.583907157Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:41:13.584146 env[1193]: time="2025-05-13T00:41:13.584122912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:41:13.584170 env[1193]: time="2025-05-13T00:41:13.584148781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584170 env[1193]: time="2025-05-13T00:41:13.584162306Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:41:13.584212 env[1193]: time="2025-05-13T00:41:13.584198875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584232 env[1193]: time="2025-05-13T00:41:13.584216898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584232 env[1193]: time="2025-05-13T00:41:13.584228811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584270 env[1193]: time="2025-05-13T00:41:13.584239130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584270 env[1193]: time="2025-05-13T00:41:13.584250992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584270 env[1193]: time="2025-05-13T00:41:13.584261282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584327 env[1193]: time="2025-05-13T00:41:13.584271110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584327 env[1193]: time="2025-05-13T00:41:13.584282502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584327 env[1193]: time="2025-05-13T00:41:13.584294334Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:41:13.584405 env[1193]: time="2025-05-13T00:41:13.584388090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584427 env[1193]: time="2025-05-13T00:41:13.584406604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584427 env[1193]: time="2025-05-13T00:41:13.584417375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 13 00:41:13.584465 env[1193]: time="2025-05-13T00:41:13.584427554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:41:13.584465 env[1193]: time="2025-05-13T00:41:13.584441380Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:41:13.584465 env[1193]: time="2025-05-13T00:41:13.584451328Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:41:13.584522 env[1193]: time="2025-05-13T00:41:13.584467048Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:41:13.584522 env[1193]: time="2025-05-13T00:41:13.584501563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:41:13.584769 env[1193]: time="2025-05-13T00:41:13.584726615Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:41:13.585277 env[1193]: time="2025-05-13T00:41:13.584775717Z" level=info msg="Connect containerd service" May 13 00:41:13.585277 env[1193]: time="2025-05-13T00:41:13.584807076Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:41:13.585277 env[1193]: time="2025-05-13T00:41:13.585227224Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up 
network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:41:13.585420 env[1193]: time="2025-05-13T00:41:13.585402643Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:41:13.585445 env[1193]: time="2025-05-13T00:41:13.585437809Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:41:13.585521 env[1193]: time="2025-05-13T00:41:13.585471663Z" level=info msg="Start subscribing containerd event" May 13 00:41:13.585566 env[1193]: time="2025-05-13T00:41:13.585543608Z" level=info msg="Start recovering state" May 13 00:41:13.585576 systemd[1]: Started containerd.service. May 13 00:41:13.585640 env[1193]: time="2025-05-13T00:41:13.585626984Z" level=info msg="Start event monitor" May 13 00:41:13.586386 env[1193]: time="2025-05-13T00:41:13.585644016Z" level=info msg="Start snapshots syncer" May 13 00:41:13.586386 env[1193]: time="2025-05-13T00:41:13.585654306Z" level=info msg="Start cni network conf syncer for default" May 13 00:41:13.586386 env[1193]: time="2025-05-13T00:41:13.585663543Z" level=info msg="Start streaming server" May 13 00:41:13.593274 env[1193]: time="2025-05-13T00:41:13.590973757Z" level=info msg="containerd successfully booted in 0.072649s" May 13 00:41:13.596720 locksmithd[1220]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:41:13.788742 systemd-networkd[1019]: eth0: Gained IPv6LL May 13 00:41:13.790608 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:41:13.791953 systemd[1]: Reached target network-online.target. May 13 00:41:13.794254 systemd[1]: Starting kubelet.service... May 13 00:41:13.861783 tar[1191]: linux-amd64/LICENSE May 13 00:41:13.862010 tar[1191]: linux-amd64/README.md May 13 00:41:13.866513 systemd[1]: Finished prepare-helm.service. May 13 00:41:14.120485 sshd_keygen[1188]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:41:14.139423 systemd[1]: Finished sshd-keygen.service. May 13 00:41:14.141710 systemd[1]: Starting issuegen.service... May 13 00:41:14.147061 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:41:14.147317 systemd[1]: Finished issuegen.service. May 13 00:41:14.150291 systemd[1]: Starting systemd-user-sessions.service... May 13 00:41:14.156126 systemd[1]: Finished systemd-user-sessions.service. May 13 00:41:14.158532 systemd[1]: Started getty@tty1.service. May 13 00:41:14.160288 systemd[1]: Started serial-getty@ttyS0.service. May 13 00:41:14.161302 systemd[1]: Reached target getty.target. May 13 00:41:14.347510 systemd[1]: Started kubelet.service. May 13 00:41:14.348794 systemd[1]: Reached target multi-user.target. May 13 00:41:14.350772 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:41:14.358175 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:41:14.358304 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:41:14.359448 systemd[1]: Startup finished in 661ms (kernel) + 5.308s (initrd) + 5.226s (userspace) = 11.195s. 
May 13 00:41:14.737487 kubelet[1252]: E0513 00:41:14.737415 1252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:14.739163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:14.739276 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:41:16.749752 systemd[1]: Created slice system-sshd.slice. May 13 00:41:16.750566 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:46324.service. May 13 00:41:16.782452 sshd[1261]: Accepted publickey for core from 10.0.0.1 port 46324 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:16.783923 sshd[1261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:16.793318 systemd-logind[1184]: New session 1 of user core. May 13 00:41:16.794219 systemd[1]: Created slice user-500.slice. May 13 00:41:16.795189 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:41:16.802897 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:41:16.804470 systemd[1]: Starting user@500.service... May 13 00:41:16.807149 (systemd)[1264]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:16.875001 systemd[1264]: Queued start job for default target default.target. May 13 00:41:16.875397 systemd[1264]: Reached target paths.target. May 13 00:41:16.875417 systemd[1264]: Reached target sockets.target. May 13 00:41:16.875429 systemd[1264]: Reached target timers.target. May 13 00:41:16.875440 systemd[1264]: Reached target basic.target. May 13 00:41:16.875473 systemd[1264]: Reached target default.target. May 13 00:41:16.875498 systemd[1264]: Startup finished in 62ms. May 13 00:41:16.875659 systemd[1]: Started user@500.service. May 13 00:41:16.876647 systemd[1]: Started session-1.scope. May 13 00:41:16.927051 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:46336.service. May 13 00:41:16.955115 sshd[1273]: Accepted publickey for core from 10.0.0.1 port 46336 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:16.956153 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:16.959177 systemd-logind[1184]: New session 2 of user core. May 13 00:41:16.960137 systemd[1]: Started session-2.scope. May 13 00:41:17.012109 sshd[1273]: pam_unix(sshd:session): session closed for user core May 13 00:41:17.014815 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:46336.service: Deactivated successfully. May 13 00:41:17.015336 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:41:17.015812 systemd-logind[1184]: Session 2 logged out. Waiting for processes to exit. May 13 00:41:17.016724 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:46350.service. May 13 00:41:17.017362 systemd-logind[1184]: Removed session 2. May 13 00:41:17.043769 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 46350 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:17.044665 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:17.047334 systemd-logind[1184]: New session 3 of user core. May 13 00:41:17.047978 systemd[1]: Started session-3.scope. 
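The kubelet exit above is the expected first-boot failure: /var/lib/kubelet/config.yaml does not exist yet because kubeadm has not run on the node. A minimal sketch of the kind of KubeletConfiguration that file eventually holds is shown below; the cgroup driver, static pod path, and containerd socket match values the kubelet logs later in this trace, while everything else about the real file is not shown in the log and is left out.

# Hedged sketch: a minimal KubeletConfiguration for the missing
# /var/lib/kubelet/config.yaml. Illustrative only -- kubeadm normally
# generates the real file during init/join.
from pathlib import Path

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # matches CgroupDriver reported by this kubelet
staticPodPath: /etc/kubernetes/manifests  # matches "Adding static pod path" below
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""

def install(path: str = "/var/lib/kubelet/config.yaml") -> None:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(KUBELET_CONFIG)

if __name__ == "__main__":
    print(KUBELET_CONFIG, end="")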
May 13 00:41:17.096459 sshd[1279]: pam_unix(sshd:session): session closed for user core May 13 00:41:17.098872 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:46350.service: Deactivated successfully. May 13 00:41:17.099327 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:41:17.099769 systemd-logind[1184]: Session 3 logged out. Waiting for processes to exit. May 13 00:41:17.100776 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:46360.service. May 13 00:41:17.101264 systemd-logind[1184]: Removed session 3. May 13 00:41:17.127944 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 46360 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:17.128879 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:17.131627 systemd-logind[1184]: New session 4 of user core. May 13 00:41:17.132265 systemd[1]: Started session-4.scope. May 13 00:41:17.183940 sshd[1285]: pam_unix(sshd:session): session closed for user core May 13 00:41:17.186492 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:46360.service: Deactivated successfully. May 13 00:41:17.186995 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:41:17.187439 systemd-logind[1184]: Session 4 logged out. Waiting for processes to exit. May 13 00:41:17.188400 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:46376.service. May 13 00:41:17.188976 systemd-logind[1184]: Removed session 4. May 13 00:41:17.215758 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 46376 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:17.216671 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:17.219376 systemd-logind[1184]: New session 5 of user core. May 13 00:41:17.220070 systemd[1]: Started session-5.scope. May 13 00:41:17.272630 sudo[1295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:41:17.272806 sudo[1295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:41:17.290434 systemd[1]: Starting docker.service... 
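The sudo entry above succeeds without any password prompt, which on Flatcar-style images is normally granted by a NOPASSWD rule for the core user. The drop-in path and rule in the sketch below are assumptions used only for illustration; the log shows nothing beyond the fact that the sudo invocation succeeded.

# Hedged sketch: install a NOPASSWD sudoers drop-in for "core", validating it
# with visudo first so a syntax error cannot break sudo. Path and rule are
# assumptions, not values from the log.
import os
import subprocess
import tempfile
from pathlib import Path

RULE = "core ALL=(ALL) NOPASSWD: ALL\n"

def install_rule(path: str = "/etc/sudoers.d/90-core") -> None:
    tmp = tempfile.NamedTemporaryFile("w", suffix=".sudoers", delete=False)
    try:
        tmp.write(RULE)
        tmp.close()
        subprocess.run(["visudo", "-cf", tmp.name], check=True)  # syntax check
        Path(path).write_text(RULE)
        Path(path).chmod(0o440)
    finally:
        os.unlink(tmp.name)

if __name__ == "__main__":
    print(RULE, end="")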
May 13 00:41:17.319303 env[1307]: time="2025-05-13T00:41:17.319254569Z" level=info msg="Starting up" May 13 00:41:17.320242 env[1307]: time="2025-05-13T00:41:17.320213308Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:41:17.320242 env[1307]: time="2025-05-13T00:41:17.320229769Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:41:17.320305 env[1307]: time="2025-05-13T00:41:17.320245959Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:41:17.320305 env[1307]: time="2025-05-13T00:41:17.320255637Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:41:17.321855 env[1307]: time="2025-05-13T00:41:17.321820443Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:41:17.321855 env[1307]: time="2025-05-13T00:41:17.321846272Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:41:17.321927 env[1307]: time="2025-05-13T00:41:17.321865879Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:41:17.321927 env[1307]: time="2025-05-13T00:41:17.321875146Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:41:17.773932 env[1307]: time="2025-05-13T00:41:17.773879500Z" level=info msg="Loading containers: start." May 13 00:41:17.882589 kernel: Initializing XFRM netlink socket May 13 00:41:17.908532 env[1307]: time="2025-05-13T00:41:17.908488414Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 00:41:17.953316 systemd-networkd[1019]: docker0: Link UP May 13 00:41:17.970441 env[1307]: time="2025-05-13T00:41:17.970406939Z" level=info msg="Loading containers: done." May 13 00:41:17.981382 env[1307]: time="2025-05-13T00:41:17.981344112Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:41:17.981512 env[1307]: time="2025-05-13T00:41:17.981477973Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:41:17.981604 env[1307]: time="2025-05-13T00:41:17.981574985Z" level=info msg="Daemon has completed initialization" May 13 00:41:17.997037 systemd[1]: Started docker.service. May 13 00:41:18.004005 env[1307]: time="2025-05-13T00:41:18.003958486Z" level=info msg="API listen on /run/docker.sock" May 13 00:41:18.661776 env[1193]: time="2025-05-13T00:41:18.661729517Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 00:41:19.302036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140071273.mount: Deactivated successfully. 
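The docker daemon above falls back to the default docker0 bridge (172.17.0.0/16) and itself notes that --bip can override it; it also warns that overlay2 cannot use native diff on this kernel. The usual place to persist such options is /etc/docker/daemon.json. The sketch below is illustrative: the subnet chosen is an arbitrary example, not a value from the log, and only the storage driver matches what the daemon reports.

# Hedged sketch: write a daemon.json overriding the docker0 bridge address.
# The bip value is an example; overlay2 matches the graphdriver logged above.
import json
from pathlib import Path

DAEMON_JSON = {
    "bip": "172.18.0.1/16",        # example override for the default bridge
    "storage-driver": "overlay2",  # matches graphdriver(s)=overlay2 above
}

def write_daemon_json(path: str = "/etc/docker/daemon.json") -> None:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps(DAEMON_JSON, indent=2) + "\n")

if __name__ == "__main__":
    print(json.dumps(DAEMON_JSON, indent=2))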
May 13 00:41:20.620084 env[1193]: time="2025-05-13T00:41:20.619989341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:20.623236 env[1193]: time="2025-05-13T00:41:20.623199114Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:20.625045 env[1193]: time="2025-05-13T00:41:20.625013288Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:20.626896 env[1193]: time="2025-05-13T00:41:20.626858981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:20.627437 env[1193]: time="2025-05-13T00:41:20.627405797Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 13 00:41:20.628842 env[1193]: time="2025-05-13T00:41:20.628810192Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 00:41:22.232689 env[1193]: time="2025-05-13T00:41:22.232624705Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:22.234585 env[1193]: time="2025-05-13T00:41:22.234545179Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:22.237642 env[1193]: time="2025-05-13T00:41:22.237601825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:22.239358 env[1193]: time="2025-05-13T00:41:22.239321511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:22.240909 env[1193]: time="2025-05-13T00:41:22.240850480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 13 00:41:22.241427 env[1193]: time="2025-05-13T00:41:22.241381908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 00:41:24.031043 env[1193]: time="2025-05-13T00:41:24.030978137Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:24.033105 env[1193]: time="2025-05-13T00:41:24.033077446Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:24.034826 env[1193]: 
time="2025-05-13T00:41:24.034777556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:24.036350 env[1193]: time="2025-05-13T00:41:24.036322785Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:24.036999 env[1193]: time="2025-05-13T00:41:24.036965281Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 13 00:41:24.037360 env[1193]: time="2025-05-13T00:41:24.037342188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 00:41:24.943435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:41:24.943571 systemd[1]: Stopped kubelet.service. May 13 00:41:24.944894 systemd[1]: Starting kubelet.service... May 13 00:41:25.042046 systemd[1]: Started kubelet.service. May 13 00:41:25.446852 kubelet[1442]: E0513 00:41:25.446815 1442 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:25.449403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:25.449524 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:41:25.516484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861839616.mount: Deactivated successfully. May 13 00:41:26.700287 env[1193]: time="2025-05-13T00:41:26.700230657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:26.702633 env[1193]: time="2025-05-13T00:41:26.702590455Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:26.704157 env[1193]: time="2025-05-13T00:41:26.704098475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:26.705361 env[1193]: time="2025-05-13T00:41:26.705333563Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:26.705801 env[1193]: time="2025-05-13T00:41:26.705770382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 13 00:41:26.706584 env[1193]: time="2025-05-13T00:41:26.706533184Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:41:27.166690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939581165.mount: Deactivated successfully. 
May 13 00:41:29.223650 env[1193]: time="2025-05-13T00:41:29.223582015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.226033 env[1193]: time="2025-05-13T00:41:29.225994352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.228151 env[1193]: time="2025-05-13T00:41:29.228120862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.230136 env[1193]: time="2025-05-13T00:41:29.230083585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.230823 env[1193]: time="2025-05-13T00:41:29.230788097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:41:29.231303 env[1193]: time="2025-05-13T00:41:29.231268438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 00:41:29.720107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976658382.mount: Deactivated successfully. May 13 00:41:29.725889 env[1193]: time="2025-05-13T00:41:29.725843693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.727759 env[1193]: time="2025-05-13T00:41:29.727730464Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.729746 env[1193]: time="2025-05-13T00:41:29.729705760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.732526 env[1193]: time="2025-05-13T00:41:29.732500114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.733079 env[1193]: time="2025-05-13T00:41:29.733038734Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 00:41:29.733583 env[1193]: time="2025-05-13T00:41:29.733538031Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 00:41:30.325166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363191756.mount: Deactivated successfully. 
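The pulls in this stretch of the log cover the standard Kubernetes v1.31 control-plane image set: kube-apiserver/controller-manager/scheduler/proxy at v1.31.8, coredns v1.11.1, pause 3.10, and etcd 3.5.15-0. Pre-pulling the same set through the CRI socket looks roughly like the sketch below; the image list is copied from the log, while the use of crictl is an assumption about available tooling.

# Hedged sketch: pre-pull the images logged above via crictl over the
# containerd socket this node is using.
import subprocess

IMAGES = [
    "registry.k8s.io/kube-apiserver:v1.31.8",
    "registry.k8s.io/kube-controller-manager:v1.31.8",
    "registry.k8s.io/kube-scheduler:v1.31.8",
    "registry.k8s.io/kube-proxy:v1.31.8",
    "registry.k8s.io/coredns/coredns:v1.11.1",
    "registry.k8s.io/pause:3.10",
    "registry.k8s.io/etcd:3.5.15-0",
]

def prepull(runtime_endpoint: str = "unix:///run/containerd/containerd.sock") -> None:
    for image in IMAGES:
        subprocess.run(
            ["crictl", "--runtime-endpoint", runtime_endpoint, "pull", image],
            check=True,
        )

if __name__ == "__main__":
    prepull()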
May 13 00:41:33.303595 env[1193]: time="2025-05-13T00:41:33.303522361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:33.305388 env[1193]: time="2025-05-13T00:41:33.305333259Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:33.307001 env[1193]: time="2025-05-13T00:41:33.306957777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:33.309658 env[1193]: time="2025-05-13T00:41:33.309626635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:33.310390 env[1193]: time="2025-05-13T00:41:33.310355883Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 00:41:35.420244 systemd[1]: Stopped kubelet.service. May 13 00:41:35.422017 systemd[1]: Starting kubelet.service... May 13 00:41:35.445730 systemd[1]: Reloading. May 13 00:41:35.511047 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2025-05-13T00:41:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:35.511071 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2025-05-13T00:41:35Z" level=info msg="torcx already run" May 13 00:41:35.774884 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:35.774907 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:35.794941 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:35.885967 systemd[1]: Started kubelet.service. May 13 00:41:35.887793 systemd[1]: Stopping kubelet.service... May 13 00:41:35.888088 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:41:35.888286 systemd[1]: Stopped kubelet.service. May 13 00:41:35.889916 systemd[1]: Starting kubelet.service... May 13 00:41:35.959805 systemd[1]: Started kubelet.service. May 13 00:41:35.991712 kubelet[1544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:41:35.992109 kubelet[1544]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
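During the reload above systemd warns that locksmithd.service still uses the legacy cgroup-v1 directives CPUShares= and MemoryLimit=. A drop-in that adds the cgroup-v2 equivalents would look roughly like the sketch below; the concrete numbers are placeholders, since the original unit's values are not shown in the log, and the legacy lines would still need to be removed from the shipped unit itself to silence the warning entirely.

# Hedged sketch: a drop-in adding cgroup-v2 resource directives for
# locksmithd.service. Values are placeholders, not taken from the log.
from pathlib import Path

DROP_IN = """\
[Service]
CPUWeight=100
MemoryMax=128M
"""

def install(path: str = "/etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf") -> None:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(DROP_IN)

if __name__ == "__main__":
    print(DROP_IN, end="")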
May 13 00:41:35.992109 kubelet[1544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:41:35.993129 kubelet[1544]: I0513 00:41:35.993088 1544 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:41:36.331871 kubelet[1544]: I0513 00:41:36.331817 1544 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 00:41:36.331871 kubelet[1544]: I0513 00:41:36.331849 1544 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:41:36.332111 kubelet[1544]: I0513 00:41:36.332090 1544 server.go:929] "Client rotation is on, will bootstrap in background" May 13 00:41:36.350069 kubelet[1544]: I0513 00:41:36.350041 1544 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:41:36.351185 kubelet[1544]: E0513 00:41:36.351139 1544 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:36.358363 kubelet[1544]: E0513 00:41:36.358314 1544 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:41:36.358363 kubelet[1544]: I0513 00:41:36.358356 1544 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:41:36.362994 kubelet[1544]: I0513 00:41:36.362961 1544 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:41:36.363848 kubelet[1544]: I0513 00:41:36.363827 1544 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 00:41:36.363993 kubelet[1544]: I0513 00:41:36.363959 1544 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:41:36.364170 kubelet[1544]: I0513 00:41:36.363986 1544 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:41:36.364290 kubelet[1544]: I0513 00:41:36.364181 1544 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:41:36.364290 kubelet[1544]: I0513 00:41:36.364194 1544 container_manager_linux.go:300] "Creating device plugin manager" May 13 00:41:36.364359 kubelet[1544]: I0513 00:41:36.364296 1544 state_mem.go:36] "Initialized new in-memory state store" May 13 00:41:36.368407 kubelet[1544]: I0513 00:41:36.368359 1544 kubelet.go:408] "Attempting to sync node with API server" May 13 00:41:36.368407 kubelet[1544]: I0513 00:41:36.368382 1544 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:41:36.368640 kubelet[1544]: I0513 00:41:36.368436 1544 kubelet.go:314] "Adding apiserver pod source" May 13 00:41:36.368640 kubelet[1544]: I0513 00:41:36.368453 1544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:41:36.369290 kubelet[1544]: W0513 00:41:36.369231 1544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 13 00:41:36.369337 kubelet[1544]: E0513 00:41:36.369309 1544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:36.370579 kubelet[1544]: W0513 00:41:36.370534 1544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 13 00:41:36.370633 kubelet[1544]: E0513 00:41:36.370584 1544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:36.378980 kubelet[1544]: I0513 00:41:36.378931 1544 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:41:36.382500 kubelet[1544]: I0513 00:41:36.382485 1544 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:41:36.383063 kubelet[1544]: W0513 00:41:36.383038 1544 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:41:36.383541 kubelet[1544]: I0513 00:41:36.383523 1544 server.go:1269] "Started kubelet" May 13 00:41:36.385784 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 13 00:41:36.385879 kubelet[1544]: I0513 00:41:36.385861 1544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:41:36.390444 kubelet[1544]: I0513 00:41:36.390028 1544 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:41:36.390687 kubelet[1544]: I0513 00:41:36.390661 1544 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:41:36.391735 kubelet[1544]: I0513 00:41:36.391708 1544 server.go:460] "Adding debug handlers to kubelet server" May 13 00:41:36.391910 kubelet[1544]: E0513 00:41:36.391878 1544 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:36.393347 kubelet[1544]: I0513 00:41:36.393309 1544 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:41:36.393472 kubelet[1544]: I0513 00:41:36.393457 1544 reconciler.go:26] "Reconciler: start to sync state" May 13 00:41:36.394233 kubelet[1544]: E0513 00:41:36.394194 1544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" May 13 00:41:36.394367 kubelet[1544]: I0513 00:41:36.394345 1544 factory.go:221] Registration of the systemd container factory successfully May 13 00:41:36.394767 kubelet[1544]: W0513 00:41:36.394451 1544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 13 00:41:36.394767 kubelet[1544]: I0513 00:41:36.394497 1544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory May 13 00:41:36.394767 kubelet[1544]: E0513 00:41:36.394515 1544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:36.395483 kubelet[1544]: I0513 00:41:36.395423 1544 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:41:36.395526 kubelet[1544]: I0513 00:41:36.395461 1544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:41:36.395716 kubelet[1544]: I0513 00:41:36.395700 1544 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:41:36.396079 kubelet[1544]: I0513 00:41:36.396058 1544 factory.go:221] Registration of the containerd container factory successfully May 13 00:41:36.400571 kubelet[1544]: E0513 00:41:36.398346 1544 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eef5ee92044b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:41:36.383501491 +0000 UTC m=+0.420429287,LastTimestamp:2025-05-13 00:41:36.383501491 +0000 UTC m=+0.420429287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:41:36.403867 kubelet[1544]: E0513 00:41:36.403836 1544 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:41:36.409691 kubelet[1544]: I0513 00:41:36.409657 1544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:41:36.411160 kubelet[1544]: I0513 00:41:36.411145 1544 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:41:36.411262 kubelet[1544]: I0513 00:41:36.411249 1544 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:41:36.411342 kubelet[1544]: I0513 00:41:36.411329 1544 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:41:36.411462 kubelet[1544]: E0513 00:41:36.411443 1544 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:41:36.411960 kubelet[1544]: W0513 00:41:36.411939 1544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 13 00:41:36.412059 kubelet[1544]: E0513 00:41:36.412038 1544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:36.412348 kubelet[1544]: I0513 00:41:36.412335 1544 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:41:36.412420 kubelet[1544]: I0513 00:41:36.412406 1544 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:41:36.412497 kubelet[1544]: I0513 00:41:36.412485 1544 state_mem.go:36] "Initialized new in-memory state store" May 13 00:41:36.492617 kubelet[1544]: E0513 00:41:36.492582 1544 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:36.512005 kubelet[1544]: E0513 00:41:36.511962 1544 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:41:36.593285 kubelet[1544]: E0513 00:41:36.593195 1544 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:36.594724 kubelet[1544]: E0513 00:41:36.594668 1544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" May 13 00:41:36.619789 kubelet[1544]: I0513 00:41:36.619736 1544 policy_none.go:49] "None policy: Start" May 13 00:41:36.620758 kubelet[1544]: I0513 00:41:36.620738 1544 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:41:36.620817 kubelet[1544]: I0513 00:41:36.620764 1544 state_mem.go:35] "Initializing new in-memory state store" May 13 00:41:36.627929 systemd[1]: Created slice kubepods.slice. May 13 00:41:36.631628 systemd[1]: Created slice kubepods-burstable.slice. May 13 00:41:36.633940 systemd[1]: Created slice kubepods-besteffort.slice. 
May 13 00:41:36.640369 kubelet[1544]: I0513 00:41:36.640327 1544 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:41:36.640503 kubelet[1544]: I0513 00:41:36.640486 1544 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:41:36.640544 kubelet[1544]: I0513 00:41:36.640503 1544 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:41:36.641164 kubelet[1544]: I0513 00:41:36.640741 1544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:41:36.642066 kubelet[1544]: E0513 00:41:36.642035 1544 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:41:36.719178 systemd[1]: Created slice kubepods-burstable-podbf78f8cf60ab739b5eaf7aace6292c05.slice. May 13 00:41:36.736163 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 13 00:41:36.738682 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 13 00:41:36.741480 kubelet[1544]: I0513 00:41:36.741456 1544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:41:36.741864 kubelet[1544]: E0513 00:41:36.741824 1544 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" May 13 00:41:36.795126 kubelet[1544]: I0513 00:41:36.795091 1544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:36.795205 kubelet[1544]: I0513 00:41:36.795125 1544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:36.795205 kubelet[1544]: I0513 00:41:36.795143 1544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:36.795205 kubelet[1544]: I0513 00:41:36.795160 1544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf78f8cf60ab739b5eaf7aace6292c05-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf78f8cf60ab739b5eaf7aace6292c05\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:36.795205 kubelet[1544]: I0513 00:41:36.795192 1544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf78f8cf60ab739b5eaf7aace6292c05-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bf78f8cf60ab739b5eaf7aace6292c05\") " 
pod="kube-system/kube-apiserver-localhost" May 13 00:41:36.795295 kubelet[1544]: I0513 00:41:36.795211 1544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:36.795295 kubelet[1544]: I0513 00:41:36.795242 1544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:36.795339 kubelet[1544]: I0513 00:41:36.795284 1544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 00:41:36.795380 kubelet[1544]: I0513 00:41:36.795356 1544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf78f8cf60ab739b5eaf7aace6292c05-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf78f8cf60ab739b5eaf7aace6292c05\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:36.942792 kubelet[1544]: I0513 00:41:36.942712 1544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:41:36.943044 kubelet[1544]: E0513 00:41:36.943013 1544 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" May 13 00:41:36.995506 kubelet[1544]: E0513 00:41:36.995466 1544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" May 13 00:41:37.034847 kubelet[1544]: E0513 00:41:37.034806 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:37.035427 env[1193]: time="2025-05-13T00:41:37.035379844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf78f8cf60ab739b5eaf7aace6292c05,Namespace:kube-system,Attempt:0,}" May 13 00:41:37.038539 kubelet[1544]: E0513 00:41:37.038518 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:37.038989 env[1193]: time="2025-05-13T00:41:37.038945534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 00:41:37.040034 kubelet[1544]: E0513 00:41:37.040017 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:37.040324 env[1193]: 
time="2025-05-13T00:41:37.040296830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 00:41:37.344956 kubelet[1544]: I0513 00:41:37.344922 1544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:41:37.345376 kubelet[1544]: E0513 00:41:37.345335 1544 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" May 13 00:41:37.371212 kubelet[1544]: W0513 00:41:37.371172 1544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 13 00:41:37.371359 kubelet[1544]: E0513 00:41:37.371228 1544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:37.431108 kubelet[1544]: W0513 00:41:37.431039 1544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 13 00:41:37.431108 kubelet[1544]: E0513 00:41:37.431108 1544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:37.645096 kubelet[1544]: W0513 00:41:37.644968 1544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 13 00:41:37.645096 kubelet[1544]: E0513 00:41:37.645021 1544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:37.663371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746172541.mount: Deactivated successfully. 
May 13 00:41:37.672905 kubelet[1544]: W0513 00:41:37.672852 1544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 13 00:41:37.672996 kubelet[1544]: E0513 00:41:37.672913 1544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:37.796245 kubelet[1544]: E0513 00:41:37.796183 1544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s" May 13 00:41:37.866105 env[1193]: time="2025-05-13T00:41:37.866033425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.868831 env[1193]: time="2025-05-13T00:41:37.868779949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.869963 env[1193]: time="2025-05-13T00:41:37.869934455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.871952 env[1193]: time="2025-05-13T00:41:37.871900214Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.873919 env[1193]: time="2025-05-13T00:41:37.873865141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.874951 env[1193]: time="2025-05-13T00:41:37.874911815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.876449 env[1193]: time="2025-05-13T00:41:37.876411529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.877839 env[1193]: time="2025-05-13T00:41:37.877786048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.879350 env[1193]: time="2025-05-13T00:41:37.879318123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.881262 env[1193]: time="2025-05-13T00:41:37.881236783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 13 00:41:37.882791 env[1193]: time="2025-05-13T00:41:37.882754882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.887271 env[1193]: time="2025-05-13T00:41:37.887213007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.897720 env[1193]: time="2025-05-13T00:41:37.897577375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:41:37.897887 env[1193]: time="2025-05-13T00:41:37.897623452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:41:37.897887 env[1193]: time="2025-05-13T00:41:37.897635424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:41:37.898172 env[1193]: time="2025-05-13T00:41:37.897776239Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb9f805705e2259d75e71205a2429dd070c457835f1759ba7440059513cf8218 pid=1586 runtime=io.containerd.runc.v2 May 13 00:41:37.914257 systemd[1]: Started cri-containerd-cb9f805705e2259d75e71205a2429dd070c457835f1759ba7440059513cf8218.scope. May 13 00:41:37.917614 env[1193]: time="2025-05-13T00:41:37.917495711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:41:37.917614 env[1193]: time="2025-05-13T00:41:37.917528833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:41:37.917614 env[1193]: time="2025-05-13T00:41:37.917539793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:41:37.917830 env[1193]: time="2025-05-13T00:41:37.917645021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/588377c8f037e5c2fd0bfb1c0594c7458eaa36a4d27712688790c7b94712fbbb pid=1618 runtime=io.containerd.runc.v2 May 13 00:41:37.920574 env[1193]: time="2025-05-13T00:41:37.919514238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:41:37.920574 env[1193]: time="2025-05-13T00:41:37.919584711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:41:37.920574 env[1193]: time="2025-05-13T00:41:37.919603265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:41:37.920574 env[1193]: time="2025-05-13T00:41:37.919824861Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/627f8445947c0b32dccc3f4f5c4ab24ae0e02e39fcbcd36e7d46ac42b9b77502 pid=1623 runtime=io.containerd.runc.v2 May 13 00:41:37.934734 systemd[1]: Started cri-containerd-588377c8f037e5c2fd0bfb1c0594c7458eaa36a4d27712688790c7b94712fbbb.scope. May 13 00:41:37.941800 systemd[1]: Started cri-containerd-627f8445947c0b32dccc3f4f5c4ab24ae0e02e39fcbcd36e7d46ac42b9b77502.scope. May 13 00:41:37.953925 env[1193]: time="2025-05-13T00:41:37.953113234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb9f805705e2259d75e71205a2429dd070c457835f1759ba7440059513cf8218\"" May 13 00:41:37.954068 kubelet[1544]: E0513 00:41:37.953980 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:37.955368 env[1193]: time="2025-05-13T00:41:37.955340143Z" level=info msg="CreateContainer within sandbox \"cb9f805705e2259d75e71205a2429dd070c457835f1759ba7440059513cf8218\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:41:37.974628 env[1193]: time="2025-05-13T00:41:37.973844585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"588377c8f037e5c2fd0bfb1c0594c7458eaa36a4d27712688790c7b94712fbbb\"" May 13 00:41:37.974776 kubelet[1544]: E0513 00:41:37.974442 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:37.976811 env[1193]: time="2025-05-13T00:41:37.976770876Z" level=info msg="CreateContainer within sandbox \"588377c8f037e5c2fd0bfb1c0594c7458eaa36a4d27712688790c7b94712fbbb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:41:37.980494 env[1193]: time="2025-05-13T00:41:37.980451633Z" level=info msg="CreateContainer within sandbox \"cb9f805705e2259d75e71205a2429dd070c457835f1759ba7440059513cf8218\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32ca086f1394b7ccf3fcf60ed6d83329d9c0f40354e7c503861677de1f28e127\"" May 13 00:41:37.980929 env[1193]: time="2025-05-13T00:41:37.980903801Z" level=info msg="StartContainer for \"32ca086f1394b7ccf3fcf60ed6d83329d9c0f40354e7c503861677de1f28e127\"" May 13 00:41:37.987106 env[1193]: time="2025-05-13T00:41:37.987048592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf78f8cf60ab739b5eaf7aace6292c05,Namespace:kube-system,Attempt:0,} returns sandbox id \"627f8445947c0b32dccc3f4f5c4ab24ae0e02e39fcbcd36e7d46ac42b9b77502\"" May 13 00:41:37.987977 kubelet[1544]: E0513 00:41:37.987828 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:37.989176 env[1193]: time="2025-05-13T00:41:37.989151908Z" level=info msg="CreateContainer within sandbox \"627f8445947c0b32dccc3f4f5c4ab24ae0e02e39fcbcd36e7d46ac42b9b77502\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:41:37.999017 systemd[1]: Started cri-containerd-32ca086f1394b7ccf3fcf60ed6d83329d9c0f40354e7c503861677de1f28e127.scope. May 13 00:41:38.147003 kubelet[1544]: I0513 00:41:38.146963 1544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:41:38.147345 kubelet[1544]: E0513 00:41:38.147251 1544 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" May 13 00:41:38.184054 env[1193]: time="2025-05-13T00:41:38.183911942Z" level=info msg="StartContainer for \"32ca086f1394b7ccf3fcf60ed6d83329d9c0f40354e7c503861677de1f28e127\" returns successfully" May 13 00:41:38.200620 env[1193]: time="2025-05-13T00:41:38.200564279Z" level=info msg="CreateContainer within sandbox \"588377c8f037e5c2fd0bfb1c0594c7458eaa36a4d27712688790c7b94712fbbb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d36c9d0b5ac1e22beb72eb5ff518cb2dcaae92b343985ff63b907bcd20880044\"" May 13 00:41:38.201173 env[1193]: time="2025-05-13T00:41:38.201127125Z" level=info msg="StartContainer for \"d36c9d0b5ac1e22beb72eb5ff518cb2dcaae92b343985ff63b907bcd20880044\"" May 13 00:41:38.204778 env[1193]: time="2025-05-13T00:41:38.204714486Z" level=info msg="CreateContainer within sandbox \"627f8445947c0b32dccc3f4f5c4ab24ae0e02e39fcbcd36e7d46ac42b9b77502\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4172b450dbd13fd384e2e7f19706fa470b20d5e48d09a503f320a03198036e6c\"" May 13 00:41:38.205174 env[1193]: time="2025-05-13T00:41:38.205144784Z" level=info msg="StartContainer for \"4172b450dbd13fd384e2e7f19706fa470b20d5e48d09a503f320a03198036e6c\"" May 13 00:41:38.214986 systemd[1]: Started cri-containerd-d36c9d0b5ac1e22beb72eb5ff518cb2dcaae92b343985ff63b907bcd20880044.scope. May 13 00:41:38.223395 systemd[1]: Started cri-containerd-4172b450dbd13fd384e2e7f19706fa470b20d5e48d09a503f320a03198036e6c.scope. 
May 13 00:41:38.265116 env[1193]: time="2025-05-13T00:41:38.265057295Z" level=info msg="StartContainer for \"d36c9d0b5ac1e22beb72eb5ff518cb2dcaae92b343985ff63b907bcd20880044\" returns successfully" May 13 00:41:38.278042 env[1193]: time="2025-05-13T00:41:38.277991615Z" level=info msg="StartContainer for \"4172b450dbd13fd384e2e7f19706fa470b20d5e48d09a503f320a03198036e6c\" returns successfully" May 13 00:41:38.416861 kubelet[1544]: E0513 00:41:38.416827 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:38.418344 kubelet[1544]: E0513 00:41:38.418320 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:38.419635 kubelet[1544]: E0513 00:41:38.419615 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:39.421844 kubelet[1544]: E0513 00:41:39.421807 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:39.439365 kubelet[1544]: E0513 00:41:39.439314 1544 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:41:39.654096 kubelet[1544]: E0513 00:41:39.654064 1544 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 13 00:41:39.749135 kubelet[1544]: I0513 00:41:39.749015 1544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:41:39.754856 kubelet[1544]: I0513 00:41:39.754821 1544 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 00:41:40.371791 kubelet[1544]: I0513 00:41:40.371745 1544 apiserver.go:52] "Watching apiserver" May 13 00:41:40.393993 kubelet[1544]: I0513 00:41:40.393957 1544 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:41:40.501742 kubelet[1544]: E0513 00:41:40.501703 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:41.397098 systemd[1]: Reloading. May 13 00:41:41.423525 kubelet[1544]: E0513 00:41:41.423503 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:41.459083 /usr/lib/systemd/system-generators/torcx-generator[1841]: time="2025-05-13T00:41:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:41.459109 /usr/lib/systemd/system-generators/torcx-generator[1841]: time="2025-05-13T00:41:41Z" level=info msg="torcx already run" May 13 00:41:41.519105 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 13 00:41:41.519120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:41.535885 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:41.624863 systemd[1]: Stopping kubelet.service... May 13 00:41:41.647924 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:41:41.648143 systemd[1]: Stopped kubelet.service. May 13 00:41:41.649439 systemd[1]: Starting kubelet.service... May 13 00:41:41.724675 systemd[1]: Started kubelet.service. May 13 00:41:41.761480 kubelet[1885]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:41:41.761480 kubelet[1885]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:41:41.761480 kubelet[1885]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:41:41.761866 kubelet[1885]: I0513 00:41:41.761531 1885 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:41:41.766478 kubelet[1885]: I0513 00:41:41.766450 1885 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 00:41:41.766478 kubelet[1885]: I0513 00:41:41.766469 1885 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:41:41.766663 kubelet[1885]: I0513 00:41:41.766639 1885 server.go:929] "Client rotation is on, will bootstrap in background" May 13 00:41:41.767684 kubelet[1885]: I0513 00:41:41.767664 1885 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:41:41.773883 kubelet[1885]: I0513 00:41:41.773840 1885 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:41:41.776350 kubelet[1885]: E0513 00:41:41.776316 1885 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:41:41.776350 kubelet[1885]: I0513 00:41:41.776347 1885 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:41:41.779463 kubelet[1885]: I0513 00:41:41.779433 1885 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:41:41.779682 kubelet[1885]: I0513 00:41:41.779638 1885 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 00:41:41.779888 kubelet[1885]: I0513 00:41:41.779856 1885 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:41:41.780099 kubelet[1885]: I0513 00:41:41.779889 1885 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:41:41.780190 kubelet[1885]: I0513 00:41:41.780098 1885 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:41:41.780190 kubelet[1885]: I0513 00:41:41.780145 1885 container_manager_linux.go:300] "Creating device plugin manager" May 13 00:41:41.780190 kubelet[1885]: I0513 00:41:41.780180 1885 state_mem.go:36] "Initialized new in-memory state store" May 13 00:41:41.780291 kubelet[1885]: I0513 00:41:41.780279 1885 kubelet.go:408] "Attempting to sync node with API server" May 13 00:41:41.780318 kubelet[1885]: I0513 00:41:41.780294 1885 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:41:41.780318 kubelet[1885]: I0513 00:41:41.780316 1885 kubelet.go:314] "Adding apiserver pod source" May 13 00:41:41.780371 kubelet[1885]: I0513 00:41:41.780329 1885 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:41:41.781941 kubelet[1885]: I0513 00:41:41.781448 1885 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:41:41.782127 kubelet[1885]: I0513 00:41:41.782109 1885 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:41:41.783342 kubelet[1885]: I0513 00:41:41.783325 1885 server.go:1269] "Started kubelet" May 13 00:41:41.783655 kubelet[1885]: I0513 00:41:41.783527 1885 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:41:41.784667 kubelet[1885]: I0513 
00:41:41.784651 1885 server.go:460] "Adding debug handlers to kubelet server" May 13 00:41:41.786818 kubelet[1885]: I0513 00:41:41.785380 1885 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:41:41.790048 kubelet[1885]: E0513 00:41:41.790035 1885 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:41:41.790147 kubelet[1885]: I0513 00:41:41.783619 1885 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:41:41.790331 kubelet[1885]: I0513 00:41:41.790319 1885 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:41:41.791479 kubelet[1885]: I0513 00:41:41.785486 1885 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:41:41.791640 kubelet[1885]: I0513 00:41:41.791629 1885 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:41:41.792017 kubelet[1885]: I0513 00:41:41.792003 1885 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:41:41.792198 kubelet[1885]: I0513 00:41:41.792186 1885 reconciler.go:26] "Reconciler: start to sync state" May 13 00:41:41.792294 kubelet[1885]: E0513 00:41:41.792274 1885 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:41.793485 kubelet[1885]: I0513 00:41:41.793470 1885 factory.go:221] Registration of the systemd container factory successfully May 13 00:41:41.793587 kubelet[1885]: I0513 00:41:41.793567 1885 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:41:41.794481 kubelet[1885]: I0513 00:41:41.794467 1885 factory.go:221] Registration of the containerd container factory successfully May 13 00:41:41.804400 kubelet[1885]: I0513 00:41:41.804377 1885 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:41:41.805336 kubelet[1885]: I0513 00:41:41.805323 1885 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:41:41.805415 kubelet[1885]: I0513 00:41:41.805402 1885 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:41:41.805494 kubelet[1885]: I0513 00:41:41.805480 1885 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:41:41.805629 kubelet[1885]: E0513 00:41:41.805597 1885 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:41:41.833825 kubelet[1885]: I0513 00:41:41.833750 1885 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:41:41.833825 kubelet[1885]: I0513 00:41:41.833768 1885 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:41:41.833825 kubelet[1885]: I0513 00:41:41.833786 1885 state_mem.go:36] "Initialized new in-memory state store" May 13 00:41:41.834263 kubelet[1885]: I0513 00:41:41.833921 1885 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:41:41.834263 kubelet[1885]: I0513 00:41:41.833932 1885 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:41:41.834263 kubelet[1885]: I0513 00:41:41.833951 1885 policy_none.go:49] "None policy: Start" May 13 00:41:41.834930 kubelet[1885]: I0513 00:41:41.834693 1885 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:41:41.834930 kubelet[1885]: I0513 00:41:41.834745 1885 state_mem.go:35] "Initializing new in-memory state store" May 13 00:41:41.835191 kubelet[1885]: I0513 00:41:41.834953 1885 state_mem.go:75] "Updated machine memory state" May 13 00:41:41.839537 kubelet[1885]: I0513 00:41:41.839492 1885 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:41:41.839710 kubelet[1885]: I0513 00:41:41.839674 1885 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:41:41.839781 kubelet[1885]: I0513 00:41:41.839697 1885 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:41:41.840231 kubelet[1885]: I0513 00:41:41.840180 1885 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:41:41.916891 kubelet[1885]: E0513 00:41:41.916782 1885 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 00:41:41.946209 kubelet[1885]: I0513 00:41:41.946185 1885 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:41:41.992529 kubelet[1885]: I0513 00:41:41.992484 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:41.992529 kubelet[1885]: I0513 00:41:41.992523 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:41.992715 kubelet[1885]: I0513 00:41:41.992542 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf78f8cf60ab739b5eaf7aace6292c05-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf78f8cf60ab739b5eaf7aace6292c05\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:41.992715 kubelet[1885]: I0513 00:41:41.992594 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf78f8cf60ab739b5eaf7aace6292c05-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bf78f8cf60ab739b5eaf7aace6292c05\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:41.992715 kubelet[1885]: I0513 00:41:41.992611 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:41.992715 kubelet[1885]: I0513 00:41:41.992626 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:41.992715 kubelet[1885]: I0513 00:41:41.992640 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 00:41:41.992856 kubelet[1885]: I0513 00:41:41.992653 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf78f8cf60ab739b5eaf7aace6292c05-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf78f8cf60ab739b5eaf7aace6292c05\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:41.992856 kubelet[1885]: I0513 00:41:41.992668 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:42.131914 kubelet[1885]: I0513 00:41:42.131872 1885 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 00:41:42.132121 kubelet[1885]: I0513 00:41:42.131966 1885 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 00:41:42.216827 kubelet[1885]: E0513 00:41:42.216702 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:42.216827 kubelet[1885]: E0513 00:41:42.216759 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:42.217980 kubelet[1885]: E0513 00:41:42.217934 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:42.781255 kubelet[1885]: I0513 00:41:42.781218 1885 apiserver.go:52] "Watching apiserver" May 13 00:41:42.792504 kubelet[1885]: I0513 00:41:42.792474 1885 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:41:42.814297 kubelet[1885]: E0513 00:41:42.814274 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:42.814448 kubelet[1885]: E0513 00:41:42.814307 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:42.814564 kubelet[1885]: E0513 00:41:42.814520 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:42.831658 kubelet[1885]: I0513 00:41:42.831595 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.831576481 podStartE2EDuration="1.831576481s" podCreationTimestamp="2025-05-13 00:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:42.83131466 +0000 UTC m=+1.103351912" watchObservedRunningTime="2025-05-13 00:41:42.831576481 +0000 UTC m=+1.103613743" May 13 00:41:42.839955 kubelet[1885]: I0513 00:41:42.839830 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.839813517 podStartE2EDuration="2.839813517s" podCreationTimestamp="2025-05-13 00:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:42.839627939 +0000 UTC m=+1.111665221" watchObservedRunningTime="2025-05-13 00:41:42.839813517 +0000 UTC m=+1.111850769" May 13 00:41:42.846011 kubelet[1885]: I0513 00:41:42.845952 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8459391809999999 podStartE2EDuration="1.845939181s" podCreationTimestamp="2025-05-13 00:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:42.84574648 +0000 UTC m=+1.117783732" watchObservedRunningTime="2025-05-13 00:41:42.845939181 +0000 UTC m=+1.117976433" May 13 00:41:43.538707 sudo[1295]: pam_unix(sudo:session): session closed for user root May 13 00:41:43.539982 sshd[1292]: pam_unix(sshd:session): session closed for user core May 13 00:41:43.542101 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:46376.service: Deactivated successfully. May 13 00:41:43.542718 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:41:43.542841 systemd[1]: session-5.scope: Consumed 3.130s CPU time. May 13 00:41:43.543444 systemd-logind[1184]: Session 5 logged out. Waiting for processes to exit. May 13 00:41:43.544202 systemd-logind[1184]: Removed session 5. 
May 13 00:41:43.816452 kubelet[1885]: E0513 00:41:43.816417 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:43.817078 kubelet[1885]: E0513 00:41:43.817054 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:45.787207 kubelet[1885]: E0513 00:41:45.787158 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:46.585308 kubelet[1885]: I0513 00:41:46.585251 1885 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:41:46.586114 env[1193]: time="2025-05-13T00:41:46.585966375Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:41:46.586677 kubelet[1885]: I0513 00:41:46.586625 1885 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:41:47.497639 systemd[1]: Created slice kubepods-besteffort-pod0c4aacd5_4f34_4b5b_9a1f_67aef35fea90.slice. May 13 00:41:47.521379 systemd[1]: Created slice kubepods-burstable-podff5b8057_b570_4d23_acf0_66c7fca7f180.slice. May 13 00:41:47.532778 kubelet[1885]: I0513 00:41:47.532724 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/ff5b8057-b570-4d23-acf0-66c7fca7f180-cni-plugin\") pod \"kube-flannel-ds-l5clr\" (UID: \"ff5b8057-b570-4d23-acf0-66c7fca7f180\") " pod="kube-flannel/kube-flannel-ds-l5clr" May 13 00:41:47.532778 kubelet[1885]: I0513 00:41:47.532769 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c4aacd5-4f34-4b5b-9a1f-67aef35fea90-xtables-lock\") pod \"kube-proxy-9s5xn\" (UID: \"0c4aacd5-4f34-4b5b-9a1f-67aef35fea90\") " pod="kube-system/kube-proxy-9s5xn" May 13 00:41:47.532778 kubelet[1885]: I0513 00:41:47.532791 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c4aacd5-4f34-4b5b-9a1f-67aef35fea90-lib-modules\") pod \"kube-proxy-9s5xn\" (UID: \"0c4aacd5-4f34-4b5b-9a1f-67aef35fea90\") " pod="kube-system/kube-proxy-9s5xn" May 13 00:41:47.533273 kubelet[1885]: I0513 00:41:47.532809 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52bnw\" (UniqueName: \"kubernetes.io/projected/0c4aacd5-4f34-4b5b-9a1f-67aef35fea90-kube-api-access-52bnw\") pod \"kube-proxy-9s5xn\" (UID: \"0c4aacd5-4f34-4b5b-9a1f-67aef35fea90\") " pod="kube-system/kube-proxy-9s5xn" May 13 00:41:47.533273 kubelet[1885]: I0513 00:41:47.532828 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ff5b8057-b570-4d23-acf0-66c7fca7f180-run\") pod \"kube-flannel-ds-l5clr\" (UID: \"ff5b8057-b570-4d23-acf0-66c7fca7f180\") " pod="kube-flannel/kube-flannel-ds-l5clr" May 13 00:41:47.533273 kubelet[1885]: I0513 00:41:47.532844 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: 
\"kubernetes.io/configmap/ff5b8057-b570-4d23-acf0-66c7fca7f180-flannel-cfg\") pod \"kube-flannel-ds-l5clr\" (UID: \"ff5b8057-b570-4d23-acf0-66c7fca7f180\") " pod="kube-flannel/kube-flannel-ds-l5clr" May 13 00:41:47.533273 kubelet[1885]: I0513 00:41:47.532862 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtsvs\" (UniqueName: \"kubernetes.io/projected/ff5b8057-b570-4d23-acf0-66c7fca7f180-kube-api-access-xtsvs\") pod \"kube-flannel-ds-l5clr\" (UID: \"ff5b8057-b570-4d23-acf0-66c7fca7f180\") " pod="kube-flannel/kube-flannel-ds-l5clr" May 13 00:41:47.533273 kubelet[1885]: I0513 00:41:47.532881 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c4aacd5-4f34-4b5b-9a1f-67aef35fea90-kube-proxy\") pod \"kube-proxy-9s5xn\" (UID: \"0c4aacd5-4f34-4b5b-9a1f-67aef35fea90\") " pod="kube-system/kube-proxy-9s5xn" May 13 00:41:47.533428 kubelet[1885]: I0513 00:41:47.532898 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff5b8057-b570-4d23-acf0-66c7fca7f180-xtables-lock\") pod \"kube-flannel-ds-l5clr\" (UID: \"ff5b8057-b570-4d23-acf0-66c7fca7f180\") " pod="kube-flannel/kube-flannel-ds-l5clr" May 13 00:41:47.533428 kubelet[1885]: I0513 00:41:47.532919 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/ff5b8057-b570-4d23-acf0-66c7fca7f180-cni\") pod \"kube-flannel-ds-l5clr\" (UID: \"ff5b8057-b570-4d23-acf0-66c7fca7f180\") " pod="kube-flannel/kube-flannel-ds-l5clr" May 13 00:41:47.649722 kubelet[1885]: I0513 00:41:47.649648 1885 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 13 00:41:47.822607 kubelet[1885]: E0513 00:41:47.820097 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:47.822799 env[1193]: time="2025-05-13T00:41:47.821111988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9s5xn,Uid:0c4aacd5-4f34-4b5b-9a1f-67aef35fea90,Namespace:kube-system,Attempt:0,}" May 13 00:41:47.824364 kubelet[1885]: E0513 00:41:47.824031 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:47.825905 env[1193]: time="2025-05-13T00:41:47.825595608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-l5clr,Uid:ff5b8057-b570-4d23-acf0-66c7fca7f180,Namespace:kube-flannel,Attempt:0,}" May 13 00:41:47.933926 env[1193]: time="2025-05-13T00:41:47.932985603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:41:47.933926 env[1193]: time="2025-05-13T00:41:47.933740772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:41:47.933926 env[1193]: time="2025-05-13T00:41:47.933758525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:41:47.942111 env[1193]: time="2025-05-13T00:41:47.937315278Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b042347553c0fd7fe0c7a79146074468a3c3fdf3272cc5777da9f1cc7b4d2d2 pid=1956 runtime=io.containerd.runc.v2 May 13 00:41:48.143849 env[1193]: time="2025-05-13T00:41:48.140870014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:41:48.143849 env[1193]: time="2025-05-13T00:41:48.140963203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:41:48.143849 env[1193]: time="2025-05-13T00:41:48.140996196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:41:48.143849 env[1193]: time="2025-05-13T00:41:48.142814669Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf3445a02eb50102beb3ebe91ed7b1bc7f50cf12043ac45fe54fd66c1eb490e4 pid=1974 runtime=io.containerd.runc.v2 May 13 00:41:48.172512 systemd[1]: Started cri-containerd-1b042347553c0fd7fe0c7a79146074468a3c3fdf3272cc5777da9f1cc7b4d2d2.scope. May 13 00:41:48.195437 systemd[1]: Started cri-containerd-cf3445a02eb50102beb3ebe91ed7b1bc7f50cf12043ac45fe54fd66c1eb490e4.scope. May 13 00:41:48.252986 env[1193]: time="2025-05-13T00:41:48.250151413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9s5xn,Uid:0c4aacd5-4f34-4b5b-9a1f-67aef35fea90,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b042347553c0fd7fe0c7a79146074468a3c3fdf3272cc5777da9f1cc7b4d2d2\"" May 13 00:41:48.253195 kubelet[1885]: E0513 00:41:48.250899 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:48.267782 env[1193]: time="2025-05-13T00:41:48.267710748Z" level=info msg="CreateContainer within sandbox \"1b042347553c0fd7fe0c7a79146074468a3c3fdf3272cc5777da9f1cc7b4d2d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:41:48.310665 env[1193]: time="2025-05-13T00:41:48.310602918Z" level=info msg="CreateContainer within sandbox \"1b042347553c0fd7fe0c7a79146074468a3c3fdf3272cc5777da9f1cc7b4d2d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8afdbd63042f2f051780733579e0fac324dbd687644cf60fd512548c134d1e13\"" May 13 00:41:48.313965 env[1193]: time="2025-05-13T00:41:48.311930280Z" level=info msg="StartContainer for \"8afdbd63042f2f051780733579e0fac324dbd687644cf60fd512548c134d1e13\"" May 13 00:41:48.322753 env[1193]: time="2025-05-13T00:41:48.322698006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-l5clr,Uid:ff5b8057-b570-4d23-acf0-66c7fca7f180,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"cf3445a02eb50102beb3ebe91ed7b1bc7f50cf12043ac45fe54fd66c1eb490e4\"" May 13 00:41:48.325445 kubelet[1885]: E0513 00:41:48.323943 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:48.326780 env[1193]: time="2025-05-13T00:41:48.326740028Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 13 00:41:48.364547 systemd[1]: Started 
cri-containerd-8afdbd63042f2f051780733579e0fac324dbd687644cf60fd512548c134d1e13.scope. May 13 00:41:48.441535 env[1193]: time="2025-05-13T00:41:48.441330627Z" level=info msg="StartContainer for \"8afdbd63042f2f051780733579e0fac324dbd687644cf60fd512548c134d1e13\" returns successfully" May 13 00:41:48.831154 kubelet[1885]: E0513 00:41:48.831109 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:48.844009 kubelet[1885]: I0513 00:41:48.843936 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9s5xn" podStartSLOduration=1.8439107190000001 podStartE2EDuration="1.843910719s" podCreationTimestamp="2025-05-13 00:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:48.843402095 +0000 UTC m=+7.115439347" watchObservedRunningTime="2025-05-13 00:41:48.843910719 +0000 UTC m=+7.115947961" May 13 00:41:50.186457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605431035.mount: Deactivated successfully. May 13 00:41:50.243184 env[1193]: time="2025-05-13T00:41:50.243096653Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:50.245326 env[1193]: time="2025-05-13T00:41:50.245281999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:50.248331 env[1193]: time="2025-05-13T00:41:50.248271371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:50.250181 env[1193]: time="2025-05-13T00:41:50.250119893Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:50.250801 env[1193]: time="2025-05-13T00:41:50.250752172Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" May 13 00:41:50.253113 env[1193]: time="2025-05-13T00:41:50.253071543Z" level=info msg="CreateContainer within sandbox \"cf3445a02eb50102beb3ebe91ed7b1bc7f50cf12043ac45fe54fd66c1eb490e4\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 13 00:41:50.267967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543072355.mount: Deactivated successfully. 
May 13 00:41:50.270467 env[1193]: time="2025-05-13T00:41:50.270410404Z" level=info msg="CreateContainer within sandbox \"cf3445a02eb50102beb3ebe91ed7b1bc7f50cf12043ac45fe54fd66c1eb490e4\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"11407e9f1cef473526d3ba695cd3a6e3b6ec14a22b3ba591981ac19ccbc79438\"" May 13 00:41:50.271265 env[1193]: time="2025-05-13T00:41:50.271204681Z" level=info msg="StartContainer for \"11407e9f1cef473526d3ba695cd3a6e3b6ec14a22b3ba591981ac19ccbc79438\"" May 13 00:41:50.295219 systemd[1]: Started cri-containerd-11407e9f1cef473526d3ba695cd3a6e3b6ec14a22b3ba591981ac19ccbc79438.scope. May 13 00:41:50.379919 systemd[1]: cri-containerd-11407e9f1cef473526d3ba695cd3a6e3b6ec14a22b3ba591981ac19ccbc79438.scope: Deactivated successfully. May 13 00:41:50.381904 env[1193]: time="2025-05-13T00:41:50.381839479Z" level=info msg="StartContainer for \"11407e9f1cef473526d3ba695cd3a6e3b6ec14a22b3ba591981ac19ccbc79438\" returns successfully" May 13 00:41:50.512456 env[1193]: time="2025-05-13T00:41:50.512296611Z" level=info msg="shim disconnected" id=11407e9f1cef473526d3ba695cd3a6e3b6ec14a22b3ba591981ac19ccbc79438 May 13 00:41:50.512775 env[1193]: time="2025-05-13T00:41:50.512727004Z" level=warning msg="cleaning up after shim disconnected" id=11407e9f1cef473526d3ba695cd3a6e3b6ec14a22b3ba591981ac19ccbc79438 namespace=k8s.io May 13 00:41:50.512775 env[1193]: time="2025-05-13T00:41:50.512752172Z" level=info msg="cleaning up dead shim" May 13 00:41:50.540500 env[1193]: time="2025-05-13T00:41:50.539903933Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:41:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2242 runtime=io.containerd.runc.v2\n" May 13 00:41:50.847121 kubelet[1885]: E0513 00:41:50.846827 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:50.851862 env[1193]: time="2025-05-13T00:41:50.848455112Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 13 00:41:51.051328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11407e9f1cef473526d3ba695cd3a6e3b6ec14a22b3ba591981ac19ccbc79438-rootfs.mount: Deactivated successfully. May 13 00:41:51.509720 kubelet[1885]: E0513 00:41:51.509659 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:51.853330 kubelet[1885]: E0513 00:41:51.853260 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:52.769607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077795737.mount: Deactivated successfully. 
May 13 00:41:52.884102 kubelet[1885]: E0513 00:41:52.884045 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:53.667238 kubelet[1885]: E0513 00:41:53.666943 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:53.865392 env[1193]: time="2025-05-13T00:41:53.865344354Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:53.867824 env[1193]: time="2025-05-13T00:41:53.867772960Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:53.869723 env[1193]: time="2025-05-13T00:41:53.869692796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:53.871268 env[1193]: time="2025-05-13T00:41:53.871239573Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:53.871844 env[1193]: time="2025-05-13T00:41:53.871817032Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 13 00:41:53.874035 env[1193]: time="2025-05-13T00:41:53.873996644Z" level=info msg="CreateContainer within sandbox \"cf3445a02eb50102beb3ebe91ed7b1bc7f50cf12043ac45fe54fd66c1eb490e4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:41:53.886652 env[1193]: time="2025-05-13T00:41:53.886608236Z" level=info msg="CreateContainer within sandbox \"cf3445a02eb50102beb3ebe91ed7b1bc7f50cf12043ac45fe54fd66c1eb490e4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c3ff21c8d426e06c7a2304235b2392aa9e01de382152bddb4f11728328c94a37\"" May 13 00:41:53.888234 env[1193]: time="2025-05-13T00:41:53.887138175Z" level=info msg="StartContainer for \"c3ff21c8d426e06c7a2304235b2392aa9e01de382152bddb4f11728328c94a37\"" May 13 00:41:53.905141 systemd[1]: Started cri-containerd-c3ff21c8d426e06c7a2304235b2392aa9e01de382152bddb4f11728328c94a37.scope. May 13 00:41:53.925657 systemd[1]: cri-containerd-c3ff21c8d426e06c7a2304235b2392aa9e01de382152bddb4f11728328c94a37.scope: Deactivated successfully. May 13 00:41:53.926887 env[1193]: time="2025-05-13T00:41:53.926855155Z" level=info msg="StartContainer for \"c3ff21c8d426e06c7a2304235b2392aa9e01de382152bddb4f11728328c94a37\" returns successfully" May 13 00:41:53.927020 kubelet[1885]: I0513 00:41:53.927001 1885 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 00:41:53.952384 systemd[1]: Created slice kubepods-burstable-podc70c07ce_63e4_43ff_b4ea_7779aee190fb.slice. May 13 00:41:53.956743 systemd[1]: Created slice kubepods-burstable-pod9ea34660_c01f_44e5_81e2_89f7069dcedd.slice. 
May 13 00:41:54.086861 kubelet[1885]: I0513 00:41:54.086810 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd4gp\" (UniqueName: \"kubernetes.io/projected/9ea34660-c01f-44e5-81e2-89f7069dcedd-kube-api-access-sd4gp\") pod \"coredns-6f6b679f8f-fgntm\" (UID: \"9ea34660-c01f-44e5-81e2-89f7069dcedd\") " pod="kube-system/coredns-6f6b679f8f-fgntm" May 13 00:41:54.086861 kubelet[1885]: I0513 00:41:54.086854 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szss2\" (UniqueName: \"kubernetes.io/projected/c70c07ce-63e4-43ff-b4ea-7779aee190fb-kube-api-access-szss2\") pod \"coredns-6f6b679f8f-52lfk\" (UID: \"c70c07ce-63e4-43ff-b4ea-7779aee190fb\") " pod="kube-system/coredns-6f6b679f8f-52lfk" May 13 00:41:54.087067 kubelet[1885]: I0513 00:41:54.086897 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ea34660-c01f-44e5-81e2-89f7069dcedd-config-volume\") pod \"coredns-6f6b679f8f-fgntm\" (UID: \"9ea34660-c01f-44e5-81e2-89f7069dcedd\") " pod="kube-system/coredns-6f6b679f8f-fgntm" May 13 00:41:54.087067 kubelet[1885]: I0513 00:41:54.086913 1885 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c70c07ce-63e4-43ff-b4ea-7779aee190fb-config-volume\") pod \"coredns-6f6b679f8f-52lfk\" (UID: \"c70c07ce-63e4-43ff-b4ea-7779aee190fb\") " pod="kube-system/coredns-6f6b679f8f-52lfk" May 13 00:41:54.250004 env[1193]: time="2025-05-13T00:41:54.249882464Z" level=info msg="shim disconnected" id=c3ff21c8d426e06c7a2304235b2392aa9e01de382152bddb4f11728328c94a37 May 13 00:41:54.250004 env[1193]: time="2025-05-13T00:41:54.249934522Z" level=warning msg="cleaning up after shim disconnected" id=c3ff21c8d426e06c7a2304235b2392aa9e01de382152bddb4f11728328c94a37 namespace=k8s.io May 13 00:41:54.250004 env[1193]: time="2025-05-13T00:41:54.249944863Z" level=info msg="cleaning up dead shim" May 13 00:41:54.254884 kubelet[1885]: E0513 00:41:54.254835 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:54.256566 env[1193]: time="2025-05-13T00:41:54.255267361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-52lfk,Uid:c70c07ce-63e4-43ff-b4ea-7779aee190fb,Namespace:kube-system,Attempt:0,}" May 13 00:41:54.257606 env[1193]: time="2025-05-13T00:41:54.257535346Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:41:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2299 runtime=io.containerd.runc.v2\n" May 13 00:41:54.260265 kubelet[1885]: E0513 00:41:54.259709 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:54.260508 env[1193]: time="2025-05-13T00:41:54.260156144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgntm,Uid:9ea34660-c01f-44e5-81e2-89f7069dcedd,Namespace:kube-system,Attempt:0,}" May 13 00:41:54.295733 env[1193]: time="2025-05-13T00:41:54.295646079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-52lfk,Uid:c70c07ce-63e4-43ff-b4ea-7779aee190fb,Namespace:kube-system,Attempt:0,} failed, error" error="failed 
to setup network for sandbox \"7b125a6501f3f1fcabc7593fa2b4daaccd650b299be38f1c847dc5a510decdee\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:41:54.296006 kubelet[1885]: E0513 00:41:54.295932 1885 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b125a6501f3f1fcabc7593fa2b4daaccd650b299be38f1c847dc5a510decdee\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:41:54.296206 kubelet[1885]: E0513 00:41:54.296019 1885 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b125a6501f3f1fcabc7593fa2b4daaccd650b299be38f1c847dc5a510decdee\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-52lfk" May 13 00:41:54.296206 kubelet[1885]: E0513 00:41:54.296043 1885 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b125a6501f3f1fcabc7593fa2b4daaccd650b299be38f1c847dc5a510decdee\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-52lfk" May 13 00:41:54.296206 kubelet[1885]: E0513 00:41:54.296104 1885 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-52lfk_kube-system(c70c07ce-63e4-43ff-b4ea-7779aee190fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-52lfk_kube-system(c70c07ce-63e4-43ff-b4ea-7779aee190fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b125a6501f3f1fcabc7593fa2b4daaccd650b299be38f1c847dc5a510decdee\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-52lfk" podUID="c70c07ce-63e4-43ff-b4ea-7779aee190fb" May 13 00:41:54.299427 env[1193]: time="2025-05-13T00:41:54.299362512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgntm,Uid:9ea34660-c01f-44e5-81e2-89f7069dcedd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ecf0bc0ff7e04583f5c1c293ce06af9739c6588be501065166c66b1fe2e6970\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:41:54.299645 kubelet[1885]: E0513 00:41:54.299608 1885 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ecf0bc0ff7e04583f5c1c293ce06af9739c6588be501065166c66b1fe2e6970\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:41:54.299762 kubelet[1885]: E0513 00:41:54.299657 1885 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ecf0bc0ff7e04583f5c1c293ce06af9739c6588be501065166c66b1fe2e6970\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-fgntm" May 13 
00:41:54.299762 kubelet[1885]: E0513 00:41:54.299672 1885 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ecf0bc0ff7e04583f5c1c293ce06af9739c6588be501065166c66b1fe2e6970\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-fgntm" May 13 00:41:54.299762 kubelet[1885]: E0513 00:41:54.299713 1885 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fgntm_kube-system(9ea34660-c01f-44e5-81e2-89f7069dcedd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fgntm_kube-system(9ea34660-c01f-44e5-81e2-89f7069dcedd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ecf0bc0ff7e04583f5c1c293ce06af9739c6588be501065166c66b1fe2e6970\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-fgntm" podUID="9ea34660-c01f-44e5-81e2-89f7069dcedd" May 13 00:41:54.886325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3ff21c8d426e06c7a2304235b2392aa9e01de382152bddb4f11728328c94a37-rootfs.mount: Deactivated successfully. May 13 00:41:54.889445 kubelet[1885]: E0513 00:41:54.889408 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:54.891038 env[1193]: time="2025-05-13T00:41:54.890997808Z" level=info msg="CreateContainer within sandbox \"cf3445a02eb50102beb3ebe91ed7b1bc7f50cf12043ac45fe54fd66c1eb490e4\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 00:41:55.252939 env[1193]: time="2025-05-13T00:41:55.252800408Z" level=info msg="CreateContainer within sandbox \"cf3445a02eb50102beb3ebe91ed7b1bc7f50cf12043ac45fe54fd66c1eb490e4\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"72435c3aaa87dc3e286e0252600210e42c7f8520db335914b90c2afa17cc6a71\"" May 13 00:41:55.253827 env[1193]: time="2025-05-13T00:41:55.253777385Z" level=info msg="StartContainer for \"72435c3aaa87dc3e286e0252600210e42c7f8520db335914b90c2afa17cc6a71\"" May 13 00:41:55.272122 systemd[1]: Started cri-containerd-72435c3aaa87dc3e286e0252600210e42c7f8520db335914b90c2afa17cc6a71.scope. May 13 00:41:55.313749 env[1193]: time="2025-05-13T00:41:55.313705346Z" level=info msg="StartContainer for \"72435c3aaa87dc3e286e0252600210e42c7f8520db335914b90c2afa17cc6a71\" returns successfully" May 13 00:41:55.791386 kubelet[1885]: E0513 00:41:55.791352 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:55.883803 systemd[1]: run-containerd-runc-k8s.io-72435c3aaa87dc3e286e0252600210e42c7f8520db335914b90c2afa17cc6a71-runc.PyxugX.mount: Deactivated successfully. 
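The CreatePodSandbox failures above are the flannel CNI plugin being invoked before the kube-flannel daemon has written /run/flannel/subnet.env; the kube-flannel container only reaches a successful StartContainer at 00:41:55, and the retried RunPodSandbox calls at 00:42:07-00:42:08 further down succeed once the file exists. As a rough Go sketch of the loadFlannelSubnetEnv step named in the error (not the actual plugin source; the example file values in the comment are inferred from the bridge configuration logged later in this boot, not copied from the host):

    // subnetenv.go - minimal sketch of what a loadFlannelSubnetEnv step does;
    // not the real flannel CNI plugin code. A /run/flannel/subnet.env written
    // by the kube-flannel daemon typically looks roughly like (values assumed
    // for illustration):
    //
    //   FLANNEL_NETWORK=192.168.0.0/17
    //   FLANNEL_SUBNET=192.168.0.1/24
    //   FLANNEL_MTU=1450
    //   FLANNEL_IPMASQ=false
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // loadSubnetEnv reads KEY=VALUE pairs from path. If the file does not exist
    // yet (flannel has not finished starting), the os.Open error is what shows
    // up in the kubelet log as "no such file or directory".
    func loadSubnetEnv(path string) (map[string]string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, fmt.Errorf("loadFlannelSubnetEnv failed: %w", err)
    	}
    	defer f.Close()

    	env := make(map[string]string)
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		if k, v, ok := strings.Cut(line, "="); ok {
    			env[k] = v
    		}
    	}
    	return env, sc.Err()
    }

    func main() {
    	env, err := loadSubnetEnv("/run/flannel/subnet.env")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("pod subnet for this node:", env["FLANNEL_SUBNET"])
    }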
May 13 00:41:55.893204 kubelet[1885]: E0513 00:41:55.893160 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:56.375601 systemd-networkd[1019]: flannel.1: Link UP May 13 00:41:56.375611 systemd-networkd[1019]: flannel.1: Gained carrier May 13 00:41:56.899101 kubelet[1885]: E0513 00:41:56.898777 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:57.884850 systemd-networkd[1019]: flannel.1: Gained IPv6LL May 13 00:41:59.164770 update_engine[1187]: I0513 00:41:59.164686 1187 update_attempter.cc:509] Updating boot flags... May 13 00:42:06.200962 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:54860.service. May 13 00:42:06.233252 sshd[2519]: Accepted publickey for core from 10.0.0.1 port 54860 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:06.235452 sshd[2519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:06.239344 systemd-logind[1184]: New session 6 of user core. May 13 00:42:06.240407 systemd[1]: Started session-6.scope. May 13 00:42:06.406101 sshd[2519]: pam_unix(sshd:session): session closed for user core May 13 00:42:06.409071 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:54860.service: Deactivated successfully. May 13 00:42:06.409914 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:42:06.410739 systemd-logind[1184]: Session 6 logged out. Waiting for processes to exit. May 13 00:42:06.411696 systemd-logind[1184]: Removed session 6. May 13 00:42:07.806539 kubelet[1885]: E0513 00:42:07.806494 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:07.806972 kubelet[1885]: E0513 00:42:07.806589 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:07.807039 env[1193]: time="2025-05-13T00:42:07.806963749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgntm,Uid:9ea34660-c01f-44e5-81e2-89f7069dcedd,Namespace:kube-system,Attempt:0,}" May 13 00:42:07.807624 env[1193]: time="2025-05-13T00:42:07.807421193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-52lfk,Uid:c70c07ce-63e4-43ff-b4ea-7779aee190fb,Namespace:kube-system,Attempt:0,}" May 13 00:42:08.228370 systemd-networkd[1019]: cni0: Link UP May 13 00:42:08.228379 systemd-networkd[1019]: cni0: Gained carrier May 13 00:42:08.231946 systemd-networkd[1019]: cni0: Lost carrier May 13 00:42:08.242981 systemd-networkd[1019]: veth42255510: Link UP May 13 00:42:08.246723 kernel: cni0: port 1(veth42255510) entered blocking state May 13 00:42:08.246800 kernel: cni0: port 1(veth42255510) entered disabled state May 13 00:42:08.251463 kernel: device veth42255510 entered promiscuous mode May 13 00:42:08.251593 kernel: cni0: port 1(veth42255510) entered blocking state May 13 00:42:08.251625 kernel: cni0: port 1(veth42255510) entered forwarding state May 13 00:42:08.252622 kernel: cni0: port 1(veth42255510) entered disabled state May 13 00:42:08.260281 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth42255510: link becomes ready May 13 00:42:08.260418 kernel: cni0: port 1(veth42255510) 
entered blocking state May 13 00:42:08.260450 kernel: cni0: port 1(veth42255510) entered forwarding state May 13 00:42:08.261977 systemd-networkd[1019]: veth42255510: Gained carrier May 13 00:42:08.262336 systemd-networkd[1019]: cni0: Gained carrier May 13 00:42:08.263916 env[1193]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000016928), "name":"cbr0", "type":"bridge"} May 13 00:42:08.263916 env[1193]: delegateAdd: netconf sent to delegate plugin: May 13 00:42:08.288106 systemd-networkd[1019]: vethd1aebf55: Link UP May 13 00:42:08.291341 kernel: cni0: port 2(vethd1aebf55) entered blocking state May 13 00:42:08.291443 kernel: cni0: port 2(vethd1aebf55) entered disabled state May 13 00:42:08.291469 kernel: device vethd1aebf55 entered promiscuous mode May 13 00:42:08.293314 kernel: cni0: port 2(vethd1aebf55) entered blocking state May 13 00:42:08.293366 kernel: cni0: port 2(vethd1aebf55) entered forwarding state May 13 00:42:08.300949 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd1aebf55: link becomes ready May 13 00:42:08.300712 systemd-networkd[1019]: vethd1aebf55: Gained carrier May 13 00:42:08.302788 env[1193]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} May 13 00:42:08.302788 env[1193]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000016928), "name":"cbr0", "type":"bridge"} May 13 00:42:08.302788 env[1193]: delegateAdd: netconf sent to delegate plugin: May 13 00:42:08.307472 env[1193]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:42:08.307391433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:08.307472 env[1193]: time="2025-05-13T00:42:08.307425838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:08.307472 env[1193]: time="2025-05-13T00:42:08.307434454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:08.307755 env[1193]: time="2025-05-13T00:42:08.307586161Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae8c7a36a04e4143859670f7f66bc19a9e1e181e472a0334b5e44b1d88b133b4 pid=2635 runtime=io.containerd.runc.v2 May 13 00:42:08.324619 systemd[1]: Started cri-containerd-ae8c7a36a04e4143859670f7f66bc19a9e1e181e472a0334b5e44b1d88b133b4.scope. May 13 00:42:08.450158 systemd-resolved[1132]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:42:08.470750 env[1193]: time="2025-05-13T00:42:08.470679136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgntm,Uid:9ea34660-c01f-44e5-81e2-89f7069dcedd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae8c7a36a04e4143859670f7f66bc19a9e1e181e472a0334b5e44b1d88b133b4\"" May 13 00:42:08.471488 kubelet[1885]: E0513 00:42:08.471450 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:08.473281 env[1193]: time="2025-05-13T00:42:08.473242674Z" level=info msg="CreateContainer within sandbox \"ae8c7a36a04e4143859670f7f66bc19a9e1e181e472a0334b5e44b1d88b133b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:42:08.483921 env[1193]: time="2025-05-13T00:42:08.483319600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:08.483921 env[1193]: time="2025-05-13T00:42:08.483353705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:08.483921 env[1193]: time="2025-05-13T00:42:08.483363404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:08.483921 env[1193]: time="2025-05-13T00:42:08.483484321Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ada67115f183f92c40e8e0a8c68f3e82b04c31cc174eb2d1f595bc2c2dd7012 pid=2677 runtime=io.containerd.runc.v2 May 13 00:42:08.493318 systemd[1]: Started cri-containerd-4ada67115f183f92c40e8e0a8c68f3e82b04c31cc174eb2d1f595bc2c2dd7012.scope. 
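The env[1193] dumps above show flannel handing a generated netconf to the bridge CNI plugin: the Go map dump and the JSON line describe the same configuration, with this node's 192.168.0.0/24 pod subnet carved under the 192.168.0.0/17 route for the wider cluster network and an MTU of 1450 consistent with the flannel.1 VXLAN link brought up earlier. A hedged Go sketch that decodes the JSON exactly as printed in the log (the struct names here are illustrative, not the types used by the CNI plugins themselves):

    // netconf.go - decodes the delegate netconf printed at 00:42:08 above.
    // The struct layout mirrors the JSON in the log; type and field names are
    // chosen for illustration only.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type ipamRange struct {
    	Subnet string `json:"subnet"`
    }

    type ipamConf struct {
    	Type   string        `json:"type"`
    	Ranges [][]ipamRange `json:"ranges"`
    	Routes []struct {
    		Dst string `json:"dst"`
    	} `json:"routes"`
    }

    type bridgeNetConf struct {
    	CNIVersion       string   `json:"cniVersion"`
    	Name             string   `json:"name"`
    	Type             string   `json:"type"`
    	MTU              int      `json:"mtu"`
    	HairpinMode      bool     `json:"hairpinMode"`
    	IPMasq           bool     `json:"ipMasq"`
    	IsGateway        bool     `json:"isGateway"`
    	IsDefaultGateway bool     `json:"isDefaultGateway"`
    	IPAM             ipamConf `json:"ipam"`
    }

    func main() {
    	// Copied verbatim from the delegateAdd output in the log.
    	raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

    	var conf bridgeNetConf
    	if err := json.Unmarshal([]byte(raw), &conf); err != nil {
    		panic(err)
    	}
    	// Prints: cbr0 bridge 192.168.0.0/24 (cluster route 192.168.0.0/17), mtu 1450
    	fmt.Printf("%s bridge %s (cluster route %s), mtu %d\n",
    		conf.Name, conf.IPAM.Ranges[0][0].Subnet, conf.IPAM.Routes[0].Dst, conf.MTU)
    }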
May 13 00:42:08.504097 systemd-resolved[1132]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:42:08.522710 env[1193]: time="2025-05-13T00:42:08.522653000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-52lfk,Uid:c70c07ce-63e4-43ff-b4ea-7779aee190fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ada67115f183f92c40e8e0a8c68f3e82b04c31cc174eb2d1f595bc2c2dd7012\"" May 13 00:42:08.523572 kubelet[1885]: E0513 00:42:08.523446 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:08.525402 env[1193]: time="2025-05-13T00:42:08.525373894Z" level=info msg="CreateContainer within sandbox \"4ada67115f183f92c40e8e0a8c68f3e82b04c31cc174eb2d1f595bc2c2dd7012\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:42:09.159957 env[1193]: time="2025-05-13T00:42:09.159884200Z" level=info msg="CreateContainer within sandbox \"ae8c7a36a04e4143859670f7f66bc19a9e1e181e472a0334b5e44b1d88b133b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"81275bff28a4e2a6d265d462b306f9fe87363f01f9a8409a2cfcbddb273af73b\"" May 13 00:42:09.160469 env[1193]: time="2025-05-13T00:42:09.160431462Z" level=info msg="StartContainer for \"81275bff28a4e2a6d265d462b306f9fe87363f01f9a8409a2cfcbddb273af73b\"" May 13 00:42:09.178826 systemd[1]: run-containerd-runc-k8s.io-81275bff28a4e2a6d265d462b306f9fe87363f01f9a8409a2cfcbddb273af73b-runc.W0LEqQ.mount: Deactivated successfully. May 13 00:42:09.181626 systemd[1]: Started cri-containerd-81275bff28a4e2a6d265d462b306f9fe87363f01f9a8409a2cfcbddb273af73b.scope. May 13 00:42:09.184888 env[1193]: time="2025-05-13T00:42:09.184833888Z" level=info msg="CreateContainer within sandbox \"4ada67115f183f92c40e8e0a8c68f3e82b04c31cc174eb2d1f595bc2c2dd7012\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6eaf098e587fa502417b31700d4acfb875349c7b5f84935d9438feb48a9038e5\"" May 13 00:42:09.185597 env[1193]: time="2025-05-13T00:42:09.185541202Z" level=info msg="StartContainer for \"6eaf098e587fa502417b31700d4acfb875349c7b5f84935d9438feb48a9038e5\"" May 13 00:42:09.200896 systemd[1]: Started cri-containerd-6eaf098e587fa502417b31700d4acfb875349c7b5f84935d9438feb48a9038e5.scope. 
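The recurring "Nameserver limits exceeded" warnings come from the kubelet trimming the node's resolv.conf when it builds pod DNS configuration: at most three nameservers are passed through, so any extra entries are dropped and the applied line quoted in the message (1.1.1.1 1.0.0.1 8.8.8.8) is what pods such as these CoreDNS replicas actually receive. A minimal, illustrative sketch of that cap (the three-nameserver limit is the upstream Kubernetes constraint; the parsing below is not kubelet code):

    // dnscap.go - illustrative sketch of why the kubelet logs "Nameserver
    // limits exceeded": pods get at most three nameservers, so extras from the
    // node's /etc/resolv.conf are dropped. Not kubelet source.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // upstream Kubernetes limit for pod resolv.conf

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
    			strings.Join(servers[:maxNameservers], " "))
    	} else {
    		fmt.Printf("nameservers: %s\n", strings.Join(servers, " "))
    	}
    }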
May 13 00:42:09.218859 env[1193]: time="2025-05-13T00:42:09.218805155Z" level=info msg="StartContainer for \"81275bff28a4e2a6d265d462b306f9fe87363f01f9a8409a2cfcbddb273af73b\" returns successfully" May 13 00:42:09.228117 env[1193]: time="2025-05-13T00:42:09.228064943Z" level=info msg="StartContainer for \"6eaf098e587fa502417b31700d4acfb875349c7b5f84935d9438feb48a9038e5\" returns successfully" May 13 00:42:09.660761 systemd-networkd[1019]: cni0: Gained IPv6LL May 13 00:42:09.916724 systemd-networkd[1019]: veth42255510: Gained IPv6LL May 13 00:42:09.920848 kubelet[1885]: E0513 00:42:09.920465 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:09.921889 kubelet[1885]: E0513 00:42:09.921854 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:09.933672 kubelet[1885]: I0513 00:42:09.933594 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-l5clr" podStartSLOduration=17.386835142 podStartE2EDuration="22.933578186s" podCreationTimestamp="2025-05-13 00:41:47 +0000 UTC" firstStartedPulling="2025-05-13 00:41:48.32618233 +0000 UTC m=+6.598219592" lastFinishedPulling="2025-05-13 00:41:53.872925384 +0000 UTC m=+12.144962636" observedRunningTime="2025-05-13 00:41:55.903648853 +0000 UTC m=+14.175686105" watchObservedRunningTime="2025-05-13 00:42:09.933578186 +0000 UTC m=+28.205615439" May 13 00:42:09.933895 kubelet[1885]: I0513 00:42:09.933721 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-52lfk" podStartSLOduration=22.93371796 podStartE2EDuration="22.93371796s" podCreationTimestamp="2025-05-13 00:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:09.931795203 +0000 UTC m=+28.203832465" watchObservedRunningTime="2025-05-13 00:42:09.93371796 +0000 UTC m=+28.205755202" May 13 00:42:10.364771 systemd-networkd[1019]: vethd1aebf55: Gained IPv6LL May 13 00:42:10.923836 kubelet[1885]: E0513 00:42:10.923804 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:10.924269 kubelet[1885]: E0513 00:42:10.923804 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:11.409837 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:49840.service. May 13 00:42:11.444395 sshd[2800]: Accepted publickey for core from 10.0.0.1 port 49840 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:11.445964 sshd[2800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:11.449836 systemd-logind[1184]: New session 7 of user core. May 13 00:42:11.450998 systemd[1]: Started session-7.scope. May 13 00:42:11.568088 sshd[2800]: pam_unix(sshd:session): session closed for user core May 13 00:42:11.570388 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:49840.service: Deactivated successfully. May 13 00:42:11.571325 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:42:11.572247 systemd-logind[1184]: Session 7 logged out. 
Waiting for processes to exit. May 13 00:42:11.573061 systemd-logind[1184]: Removed session 7. May 13 00:42:11.925146 kubelet[1885]: E0513 00:42:11.925116 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:11.925522 kubelet[1885]: E0513 00:42:11.925220 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:16.572663 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:37876.service. May 13 00:42:16.601120 sshd[2859]: Accepted publickey for core from 10.0.0.1 port 37876 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:16.602325 sshd[2859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:16.605925 systemd-logind[1184]: New session 8 of user core. May 13 00:42:16.606667 systemd[1]: Started session-8.scope. May 13 00:42:16.704535 sshd[2859]: pam_unix(sshd:session): session closed for user core May 13 00:42:16.706603 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:37876.service: Deactivated successfully. May 13 00:42:16.707262 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:42:16.707910 systemd-logind[1184]: Session 8 logged out. Waiting for processes to exit. May 13 00:42:16.708541 systemd-logind[1184]: Removed session 8. May 13 00:42:21.709466 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:37880.service. May 13 00:42:21.740052 sshd[2896]: Accepted publickey for core from 10.0.0.1 port 37880 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:21.741436 sshd[2896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:21.745342 systemd-logind[1184]: New session 9 of user core. May 13 00:42:21.746452 systemd[1]: Started session-9.scope. May 13 00:42:21.861788 sshd[2896]: pam_unix(sshd:session): session closed for user core May 13 00:42:21.865410 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:37880.service: Deactivated successfully. May 13 00:42:21.866179 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:42:21.866881 systemd-logind[1184]: Session 9 logged out. Waiting for processes to exit. May 13 00:42:21.868508 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:37896.service. May 13 00:42:21.869716 systemd-logind[1184]: Removed session 9. May 13 00:42:21.900876 sshd[2910]: Accepted publickey for core from 10.0.0.1 port 37896 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:21.902293 sshd[2910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:21.906921 systemd-logind[1184]: New session 10 of user core. May 13 00:42:21.907766 systemd[1]: Started session-10.scope. May 13 00:42:22.067448 sshd[2910]: pam_unix(sshd:session): session closed for user core May 13 00:42:22.070208 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:37904.service. May 13 00:42:22.072927 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:37896.service: Deactivated successfully. May 13 00:42:22.073707 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:42:22.074337 systemd-logind[1184]: Session 10 logged out. Waiting for processes to exit. May 13 00:42:22.075286 systemd-logind[1184]: Removed session 10. 
May 13 00:42:22.107227 sshd[2922]: Accepted publickey for core from 10.0.0.1 port 37904 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:22.108629 sshd[2922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:22.112160 systemd-logind[1184]: New session 11 of user core. May 13 00:42:22.113489 systemd[1]: Started session-11.scope. May 13 00:42:22.218991 sshd[2922]: pam_unix(sshd:session): session closed for user core May 13 00:42:22.221271 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:37904.service: Deactivated successfully. May 13 00:42:22.222140 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:42:22.223148 systemd-logind[1184]: Session 11 logged out. Waiting for processes to exit. May 13 00:42:22.224040 systemd-logind[1184]: Removed session 11. May 13 00:42:27.223154 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:51832.service. May 13 00:42:27.250668 sshd[2957]: Accepted publickey for core from 10.0.0.1 port 51832 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:27.252018 sshd[2957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:27.256294 systemd-logind[1184]: New session 12 of user core. May 13 00:42:27.256652 systemd[1]: Started session-12.scope. May 13 00:42:27.360967 sshd[2957]: pam_unix(sshd:session): session closed for user core May 13 00:42:27.363401 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:51832.service: Deactivated successfully. May 13 00:42:27.364235 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:42:27.365014 systemd-logind[1184]: Session 12 logged out. Waiting for processes to exit. May 13 00:42:27.365874 systemd-logind[1184]: Removed session 12. May 13 00:42:32.365246 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:51846.service. May 13 00:42:32.393568 sshd[2991]: Accepted publickey for core from 10.0.0.1 port 51846 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:32.394781 sshd[2991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:32.398309 systemd-logind[1184]: New session 13 of user core. May 13 00:42:32.399020 systemd[1]: Started session-13.scope. May 13 00:42:32.504595 sshd[2991]: pam_unix(sshd:session): session closed for user core May 13 00:42:32.508099 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:51846.service: Deactivated successfully. May 13 00:42:32.508851 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:42:32.509411 systemd-logind[1184]: Session 13 logged out. Waiting for processes to exit. May 13 00:42:32.510634 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:51848.service. May 13 00:42:32.511410 systemd-logind[1184]: Removed session 13. May 13 00:42:32.538896 sshd[3005]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:32.540424 sshd[3005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:32.544403 systemd-logind[1184]: New session 14 of user core. May 13 00:42:32.545520 systemd[1]: Started session-14.scope. May 13 00:42:32.725388 sshd[3005]: pam_unix(sshd:session): session closed for user core May 13 00:42:32.728817 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:51848.service: Deactivated successfully. May 13 00:42:32.729456 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:42:32.730110 systemd-logind[1184]: Session 14 logged out. Waiting for processes to exit. 
May 13 00:42:32.731247 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:51864.service. May 13 00:42:32.732063 systemd-logind[1184]: Removed session 14. May 13 00:42:32.761849 sshd[3016]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:32.762977 sshd[3016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:32.766791 systemd-logind[1184]: New session 15 of user core. May 13 00:42:32.767542 systemd[1]: Started session-15.scope. May 13 00:42:34.180027 sshd[3016]: pam_unix(sshd:session): session closed for user core May 13 00:42:34.183356 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:51868.service. May 13 00:42:34.184946 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:51864.service: Deactivated successfully. May 13 00:42:34.185721 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:42:34.186414 systemd-logind[1184]: Session 15 logged out. Waiting for processes to exit. May 13 00:42:34.187164 systemd-logind[1184]: Removed session 15. May 13 00:42:34.220781 sshd[3032]: Accepted publickey for core from 10.0.0.1 port 51868 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:34.222003 sshd[3032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:34.225383 systemd-logind[1184]: New session 16 of user core. May 13 00:42:34.226204 systemd[1]: Started session-16.scope. May 13 00:42:34.431587 sshd[3032]: pam_unix(sshd:session): session closed for user core May 13 00:42:34.434349 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:51868.service: Deactivated successfully. May 13 00:42:34.434826 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:42:34.435774 systemd-logind[1184]: Session 16 logged out. Waiting for processes to exit. May 13 00:42:34.437007 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:51872.service. May 13 00:42:34.438508 systemd-logind[1184]: Removed session 16. May 13 00:42:34.466837 sshd[3046]: Accepted publickey for core from 10.0.0.1 port 51872 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:34.467795 sshd[3046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:34.470965 systemd-logind[1184]: New session 17 of user core. May 13 00:42:34.471723 systemd[1]: Started session-17.scope. May 13 00:42:34.568434 sshd[3046]: pam_unix(sshd:session): session closed for user core May 13 00:42:34.570841 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:51872.service: Deactivated successfully. May 13 00:42:34.571491 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:42:34.572138 systemd-logind[1184]: Session 17 logged out. Waiting for processes to exit. May 13 00:42:34.572853 systemd-logind[1184]: Removed session 17. May 13 00:42:39.573133 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:36114.service. May 13 00:42:39.603289 sshd[3080]: Accepted publickey for core from 10.0.0.1 port 36114 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:39.604587 sshd[3080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:39.607943 systemd-logind[1184]: New session 18 of user core. May 13 00:42:39.608952 systemd[1]: Started session-18.scope. May 13 00:42:39.707023 sshd[3080]: pam_unix(sshd:session): session closed for user core May 13 00:42:39.709135 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:36114.service: Deactivated successfully. 
May 13 00:42:39.709861 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:42:39.710620 systemd-logind[1184]: Session 18 logged out. Waiting for processes to exit. May 13 00:42:39.711500 systemd-logind[1184]: Removed session 18. May 13 00:42:44.711267 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:36130.service. May 13 00:42:44.738992 sshd[3120]: Accepted publickey for core from 10.0.0.1 port 36130 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:44.739992 sshd[3120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:44.743470 systemd-logind[1184]: New session 19 of user core. May 13 00:42:44.744179 systemd[1]: Started session-19.scope. May 13 00:42:44.842878 sshd[3120]: pam_unix(sshd:session): session closed for user core May 13 00:42:44.845304 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:36130.service: Deactivated successfully. May 13 00:42:44.846232 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:42:44.846847 systemd-logind[1184]: Session 19 logged out. Waiting for processes to exit. May 13 00:42:44.847507 systemd-logind[1184]: Removed session 19. May 13 00:42:49.847169 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:51472.service. May 13 00:42:49.877194 sshd[3156]: Accepted publickey for core from 10.0.0.1 port 51472 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:49.878210 sshd[3156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:49.881236 systemd-logind[1184]: New session 20 of user core. May 13 00:42:49.881914 systemd[1]: Started session-20.scope. May 13 00:42:49.977079 sshd[3156]: pam_unix(sshd:session): session closed for user core May 13 00:42:49.979194 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:51472.service: Deactivated successfully. May 13 00:42:49.979950 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:42:49.980623 systemd-logind[1184]: Session 20 logged out. Waiting for processes to exit. May 13 00:42:49.981365 systemd-logind[1184]: Removed session 20. May 13 00:42:50.806437 kubelet[1885]: E0513 00:42:50.806398 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:53.806082 kubelet[1885]: E0513 00:42:53.806038 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:54.981398 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:51486.service. May 13 00:42:55.011044 sshd[3190]: Accepted publickey for core from 10.0.0.1 port 51486 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:55.012068 sshd[3190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:55.015220 systemd-logind[1184]: New session 21 of user core. May 13 00:42:55.016189 systemd[1]: Started session-21.scope. May 13 00:42:55.115106 sshd[3190]: pam_unix(sshd:session): session closed for user core May 13 00:42:55.117277 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:51486.service: Deactivated successfully. May 13 00:42:55.118001 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:42:55.118488 systemd-logind[1184]: Session 21 logged out. Waiting for processes to exit. May 13 00:42:55.119101 systemd-logind[1184]: Removed session 21.