May 13 00:43:31.897369 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025 May 13 00:43:31.897395 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:43:31.897423 kernel: BIOS-provided physical RAM map: May 13 00:43:31.897431 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 00:43:31.897438 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 00:43:31.897446 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 00:43:31.897455 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 13 00:43:31.897463 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 13 00:43:31.897473 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 00:43:31.897481 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 13 00:43:31.897488 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 00:43:31.897496 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 00:43:31.897504 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 13 00:43:31.897512 kernel: NX (Execute Disable) protection: active May 13 00:43:31.897523 kernel: SMBIOS 2.8 present. May 13 00:43:31.897532 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 13 00:43:31.897540 kernel: Hypervisor detected: KVM May 13 00:43:31.897548 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 00:43:31.897556 kernel: kvm-clock: cpu 0, msr 3e196001, primary cpu clock May 13 00:43:31.897564 kernel: kvm-clock: using sched offset of 2510844127 cycles May 13 00:43:31.897573 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 00:43:31.897581 kernel: tsc: Detected 2794.746 MHz processor May 13 00:43:31.897590 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 00:43:31.897601 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 00:43:31.897609 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 13 00:43:31.897618 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 00:43:31.897626 kernel: Using GB pages for direct mapping May 13 00:43:31.897644 kernel: ACPI: Early table checksum verification disabled May 13 00:43:31.897652 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 13 00:43:31.897661 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:43:31.897669 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:43:31.897677 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:43:31.897688 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 13 00:43:31.897697 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:43:31.897705 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:43:31.897714 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:43:31.897723 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:43:31.897732 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 13 00:43:31.897740 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 13 00:43:31.897749 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 13 00:43:31.897762 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 13 00:43:31.897772 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 13 00:43:31.897781 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 13 00:43:31.897790 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 13 00:43:31.897798 kernel: No NUMA configuration found May 13 00:43:31.897807 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 13 00:43:31.897818 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 13 00:43:31.897827 kernel: Zone ranges: May 13 00:43:31.897836 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 00:43:31.897845 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 13 00:43:31.897853 kernel: Normal empty May 13 00:43:31.897862 kernel: Movable zone start for each node May 13 00:43:31.897871 kernel: Early memory node ranges May 13 00:43:31.897879 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 00:43:31.897888 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 13 00:43:31.897899 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 13 00:43:31.897907 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:43:31.897916 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 00:43:31.897925 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 13 00:43:31.897934 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 00:43:31.897943 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 00:43:31.897952 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 00:43:31.897961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 00:43:31.897970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 00:43:31.897979 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 00:43:31.897990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 00:43:31.897998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 00:43:31.898008 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 00:43:31.898017 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 13 00:43:31.898026 kernel: TSC deadline timer available May 13 00:43:31.898036 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 13 00:43:31.898045 kernel: kvm-guest: KVM setup pv remote TLB flush May 13 00:43:31.898054 kernel: kvm-guest: setup PV sched yield May 13 00:43:31.898063 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 13 00:43:31.898074 kernel: Booting paravirtualized kernel on KVM May 13 00:43:31.898083 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 00:43:31.898092 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 13 00:43:31.898101 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 May 13 00:43:31.898110 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 13 00:43:31.898119 kernel: pcpu-alloc: [0] 0 1 2 3 May 13 00:43:31.898128 kernel: kvm-guest: setup async PF for cpu 0 May 13 00:43:31.898136 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 May 13 00:43:31.898145 kernel: kvm-guest: PV spinlocks enabled May 13 00:43:31.898156 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 13 00:43:31.898165 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 May 13 00:43:31.898174 kernel: Policy zone: DMA32 May 13 00:43:31.898185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:43:31.898195 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:43:31.898204 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 00:43:31.898213 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:43:31.898222 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:43:31.898235 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 134796K reserved, 0K cma-reserved) May 13 00:43:31.898246 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 00:43:31.898257 kernel: ftrace: allocating 34584 entries in 136 pages May 13 00:43:31.898266 kernel: ftrace: allocated 136 pages with 2 groups May 13 00:43:31.898275 kernel: rcu: Hierarchical RCU implementation. May 13 00:43:31.898285 kernel: rcu: RCU event tracing is enabled. May 13 00:43:31.898294 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 00:43:31.898303 kernel: Rude variant of Tasks RCU enabled. May 13 00:43:31.898312 kernel: Tracing variant of Tasks RCU enabled. May 13 00:43:31.898325 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 00:43:31.898334 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 00:43:31.898344 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 13 00:43:31.898354 kernel: random: crng init done May 13 00:43:31.898362 kernel: Console: colour VGA+ 80x25 May 13 00:43:31.898371 kernel: printk: console [ttyS0] enabled May 13 00:43:31.898381 kernel: ACPI: Core revision 20210730 May 13 00:43:31.898391 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 13 00:43:31.898426 kernel: APIC: Switch to symmetric I/O mode setup May 13 00:43:31.898439 kernel: x2apic enabled May 13 00:43:31.898448 kernel: Switched APIC routing to physical x2apic. May 13 00:43:31.898456 kernel: kvm-guest: setup PV IPIs May 13 00:43:31.898465 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 00:43:31.898475 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 00:43:31.898484 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) May 13 00:43:31.898494 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 13 00:43:31.898503 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 13 00:43:31.898513 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 13 00:43:31.898531 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 00:43:31.898541 kernel: Spectre V2 : Mitigation: Retpolines May 13 00:43:31.898551 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 00:43:31.898561 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 13 00:43:31.898570 kernel: RETBleed: Mitigation: untrained return thunk May 13 00:43:31.898580 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 00:43:31.898590 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 13 00:43:31.898599 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 00:43:31.898610 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 00:43:31.898622 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 00:43:31.898642 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 00:43:31.898652 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 13 00:43:31.898662 kernel: Freeing SMP alternatives memory: 32K May 13 00:43:31.898671 kernel: pid_max: default: 32768 minimum: 301 May 13 00:43:31.898681 kernel: LSM: Security Framework initializing May 13 00:43:31.898690 kernel: SELinux: Initializing. May 13 00:43:31.898700 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:43:31.898713 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:43:31.898723 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 13 00:43:31.898733 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 13 00:43:31.898743 kernel: ... version: 0 May 13 00:43:31.898753 kernel: ... bit width: 48 May 13 00:43:31.898762 kernel: ... generic registers: 6 May 13 00:43:31.898772 kernel: ... value mask: 0000ffffffffffff May 13 00:43:31.898782 kernel: ... max period: 00007fffffffffff May 13 00:43:31.898792 kernel: ... fixed-purpose events: 0 May 13 00:43:31.898804 kernel: ... event mask: 000000000000003f May 13 00:43:31.898814 kernel: signal: max sigframe size: 1776 May 13 00:43:31.898838 kernel: rcu: Hierarchical SRCU implementation. May 13 00:43:31.898868 kernel: smp: Bringing up secondary CPUs ... May 13 00:43:31.898882 kernel: x86: Booting SMP configuration: May 13 00:43:31.898891 kernel: .... 
node #0, CPUs: #1 May 13 00:43:31.898902 kernel: kvm-clock: cpu 1, msr 3e196041, secondary cpu clock May 13 00:43:31.898911 kernel: kvm-guest: setup async PF for cpu 1 May 13 00:43:31.898920 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 May 13 00:43:31.898933 kernel: #2 May 13 00:43:31.898943 kernel: kvm-clock: cpu 2, msr 3e196081, secondary cpu clock May 13 00:43:31.898952 kernel: kvm-guest: setup async PF for cpu 2 May 13 00:43:31.898961 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 May 13 00:43:31.898970 kernel: #3 May 13 00:43:31.898980 kernel: kvm-clock: cpu 3, msr 3e1960c1, secondary cpu clock May 13 00:43:31.898989 kernel: kvm-guest: setup async PF for cpu 3 May 13 00:43:31.898999 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 May 13 00:43:31.899008 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:43:31.899020 kernel: smpboot: Max logical packages: 1 May 13 00:43:31.899030 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) May 13 00:43:31.899039 kernel: devtmpfs: initialized May 13 00:43:31.899049 kernel: x86/mm: Memory block size: 128MB May 13 00:43:31.899065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:43:31.899075 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:43:31.899085 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:43:31.899095 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:43:31.899104 kernel: audit: initializing netlink subsys (disabled) May 13 00:43:31.899116 kernel: audit: type=2000 audit(1747097011.427:1): state=initialized audit_enabled=0 res=1 May 13 00:43:31.899125 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:43:31.899135 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:43:31.899144 kernel: cpuidle: using governor menu May 13 00:43:31.899154 kernel: ACPI: bus type PCI registered May 13 00:43:31.899163 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:43:31.899173 kernel: dca service started, version 1.12.1 May 13 00:43:31.899183 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 13 00:43:31.899192 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 13 00:43:31.899204 kernel: PCI: Using configuration type 1 for base access May 13 00:43:31.899214 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 00:43:31.899224 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:43:31.899233 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:43:31.899243 kernel: ACPI: Added _OSI(Module Device) May 13 00:43:31.899252 kernel: ACPI: Added _OSI(Processor Device) May 13 00:43:31.899262 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:43:31.899272 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:43:31.899281 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 00:43:31.899292 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 00:43:31.899302 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 00:43:31.899312 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:43:31.899320 kernel: ACPI: Interpreter enabled May 13 00:43:31.899330 kernel: ACPI: PM: (supports S0 S3 S5) May 13 00:43:31.899339 kernel: ACPI: Using IOAPIC for interrupt routing May 13 00:43:31.899350 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 00:43:31.899359 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 13 00:43:31.899369 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 00:43:31.899553 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:43:31.899674 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 13 00:43:31.899776 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 13 00:43:31.899790 kernel: PCI host bridge to bus 0000:00 May 13 00:43:31.899894 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 00:43:31.899984 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 00:43:31.900076 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 00:43:31.900172 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 13 00:43:31.900264 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 00:43:31.900363 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 13 00:43:31.900484 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 00:43:31.900605 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 13 00:43:31.900734 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 13 00:43:31.900848 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 13 00:43:31.900988 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 13 00:43:31.901138 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 13 00:43:31.901250 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 00:43:31.901365 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 13 00:43:31.901503 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 13 00:43:31.901609 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 13 00:43:31.901726 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 13 00:43:31.901845 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 13 00:43:31.901953 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 13 00:43:31.902055 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 13 00:43:31.902159 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 13 00:43:31.902274 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 13 00:43:31.902376 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 13 00:43:31.902548 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 13 00:43:31.907271 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 13 00:43:31.907365 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 13 00:43:31.907460 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 13 00:43:31.907531 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 13 00:43:31.907610 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 13 00:43:31.907691 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 13 00:43:31.907762 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 13 00:43:31.907836 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 13 00:43:31.907903 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 13 00:43:31.907913 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 00:43:31.907921 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 00:43:31.907928 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 00:43:31.907935 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 00:43:31.907943 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 13 00:43:31.907950 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 13 00:43:31.907957 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 13 00:43:31.907964 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 13 00:43:31.907971 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 13 00:43:31.907978 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 13 00:43:31.907985 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 13 00:43:31.907992 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 13 00:43:31.907999 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 13 00:43:31.908008 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 13 00:43:31.908015 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 13 00:43:31.908022 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 13 00:43:31.908029 kernel: iommu: Default domain type: Translated May 13 00:43:31.908036 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:43:31.908104 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 13 00:43:31.908173 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:43:31.908242 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 13 00:43:31.908254 kernel: vgaarb: loaded May 13 00:43:31.908261 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 00:43:31.908268 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 00:43:31.908275 kernel: PTP clock support registered May 13 00:43:31.908282 kernel: PCI: Using ACPI for IRQ routing May 13 00:43:31.908289 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:43:31.908296 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 00:43:31.908303 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 13 00:43:31.908310 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 13 00:43:31.908319 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 13 00:43:31.908326 kernel: clocksource: Switched to clocksource kvm-clock May 13 00:43:31.908333 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:43:31.908340 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:43:31.908347 kernel: pnp: PnP ACPI init May 13 00:43:31.908437 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 13 00:43:31.908449 kernel: pnp: PnP ACPI: found 6 devices May 13 00:43:31.908456 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 00:43:31.908463 kernel: NET: Registered PF_INET protocol family May 13 00:43:31.908472 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 00:43:31.908480 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 00:43:31.908487 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:43:31.908494 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:43:31.908501 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 13 00:43:31.908508 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 00:43:31.908515 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:43:31.908522 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:43:31.908530 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:43:31.908537 kernel: NET: Registered PF_XDP protocol family May 13 00:43:31.908606 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 00:43:31.908679 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 00:43:31.908740 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 00:43:31.908801 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 13 00:43:31.908860 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 13 00:43:31.908920 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 13 00:43:31.908929 kernel: PCI: CLS 0 bytes, default 64 May 13 00:43:31.908938 kernel: Initialise system trusted keyrings May 13 00:43:31.908945 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 00:43:31.908952 kernel: Key type asymmetric registered May 13 00:43:31.908959 kernel: Asymmetric key parser 'x509' registered May 13 00:43:31.908966 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 00:43:31.908973 kernel: io scheduler mq-deadline registered May 13 00:43:31.908980 kernel: io scheduler kyber registered May 13 00:43:31.908987 kernel: io scheduler bfq registered May 13 00:43:31.908994 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 00:43:31.909003 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 13 00:43:31.909010 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 13 
00:43:31.909017 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 13 00:43:31.909024 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:43:31.909032 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:43:31.909039 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 00:43:31.909046 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:43:31.909053 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:43:31.909060 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:43:31.909130 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 00:43:31.909193 kernel: rtc_cmos 00:04: registered as rtc0 May 13 00:43:31.909257 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:43:31 UTC (1747097011) May 13 00:43:31.909322 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 13 00:43:31.909331 kernel: NET: Registered PF_INET6 protocol family May 13 00:43:31.909338 kernel: Segment Routing with IPv6 May 13 00:43:31.909345 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:43:31.909352 kernel: NET: Registered PF_PACKET protocol family May 13 00:43:31.909361 kernel: Key type dns_resolver registered May 13 00:43:31.909368 kernel: IPI shorthand broadcast: enabled May 13 00:43:31.909375 kernel: sched_clock: Marking stable (431001367, 101986777)->(548325533, -15337389) May 13 00:43:31.909382 kernel: registered taskstats version 1 May 13 00:43:31.909389 kernel: Loading compiled-in X.509 certificates May 13 00:43:31.909396 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095' May 13 00:43:31.909414 kernel: Key type .fscrypt registered May 13 00:43:31.909421 kernel: Key type fscrypt-provisioning registered May 13 00:43:31.909428 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 00:43:31.909437 kernel: ima: Allocated hash algorithm: sha1 May 13 00:43:31.909444 kernel: ima: No architecture policies found May 13 00:43:31.909450 kernel: clk: Disabling unused clocks May 13 00:43:31.909457 kernel: Freeing unused kernel image (initmem) memory: 47456K May 13 00:43:31.909464 kernel: Write protecting the kernel read-only data: 28672k May 13 00:43:31.909471 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 13 00:43:31.909478 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 13 00:43:31.909485 kernel: Run /init as init process May 13 00:43:31.909492 kernel: with arguments: May 13 00:43:31.909500 kernel: /init May 13 00:43:31.909507 kernel: with environment: May 13 00:43:31.909514 kernel: HOME=/ May 13 00:43:31.909520 kernel: TERM=linux May 13 00:43:31.909527 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:43:31.909536 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:43:31.909545 systemd[1]: Detected virtualization kvm. May 13 00:43:31.909555 systemd[1]: Detected architecture x86-64. May 13 00:43:31.909562 systemd[1]: Running in initrd. May 13 00:43:31.909569 systemd[1]: No hostname configured, using default hostname. May 13 00:43:31.909576 systemd[1]: Hostname set to <localhost>. 
May 13 00:43:31.909584 systemd[1]: Initializing machine ID from VM UUID. May 13 00:43:31.909591 systemd[1]: Queued start job for default target initrd.target. May 13 00:43:31.909599 systemd[1]: Started systemd-ask-password-console.path. May 13 00:43:31.909606 systemd[1]: Reached target cryptsetup.target. May 13 00:43:31.909613 systemd[1]: Reached target paths.target. May 13 00:43:31.909622 systemd[1]: Reached target slices.target. May 13 00:43:31.909644 systemd[1]: Reached target swap.target. May 13 00:43:31.909653 systemd[1]: Reached target timers.target. May 13 00:43:31.909661 systemd[1]: Listening on iscsid.socket. May 13 00:43:31.909668 systemd[1]: Listening on iscsiuio.socket. May 13 00:43:31.909678 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:43:31.909685 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:43:31.909693 systemd[1]: Listening on systemd-journald.socket. May 13 00:43:31.909701 systemd[1]: Listening on systemd-networkd.socket. May 13 00:43:31.909708 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:43:31.909716 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:43:31.909723 systemd[1]: Reached target sockets.target. May 13 00:43:31.909731 systemd[1]: Starting kmod-static-nodes.service... May 13 00:43:31.909738 systemd[1]: Finished network-cleanup.service. May 13 00:43:31.909747 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:43:31.909755 systemd[1]: Starting systemd-journald.service... May 13 00:43:31.909763 systemd[1]: Starting systemd-modules-load.service... May 13 00:43:31.909770 systemd[1]: Starting systemd-resolved.service... May 13 00:43:31.909778 systemd[1]: Starting systemd-vconsole-setup.service... May 13 00:43:31.909786 systemd[1]: Finished kmod-static-nodes.service. May 13 00:43:31.909793 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:43:31.909801 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:43:31.909812 systemd-journald[196]: Journal started May 13 00:43:31.909855 systemd-journald[196]: Runtime Journal (/run/log/journal/99e18d499e1b40c28e4dff6c1d9ce57f) is 6.0M, max 48.5M, 42.5M free. May 13 00:43:31.896098 systemd-modules-load[197]: Inserted module 'overlay' May 13 00:43:31.939092 systemd[1]: Started systemd-journald.service. May 13 00:43:31.939121 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:43:31.939140 kernel: Bridge firewalling registered May 13 00:43:31.929105 systemd-resolved[198]: Positive Trust Anchors: May 13 00:43:31.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.929131 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:43:31.945716 kernel: audit: type=1130 audit(1747097011.940:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:43:31.929170 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:43:31.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.931343 systemd-modules-load[197]: Inserted module 'br_netfilter' May 13 00:43:31.957441 kernel: audit: type=1130 audit(1747097011.953:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.932210 systemd-resolved[198]: Defaulting to hostname 'linux'. May 13 00:43:31.960291 kernel: SCSI subsystem initialized May 13 00:43:31.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.941327 systemd[1]: Started systemd-resolved.service. May 13 00:43:31.964161 kernel: audit: type=1130 audit(1747097011.959:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.953792 systemd[1]: Finished systemd-vconsole-setup.service. May 13 00:43:31.960637 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:43:31.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.970440 kernel: audit: type=1130 audit(1747097011.965:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.970892 systemd[1]: Reached target nss-lookup.target. May 13 00:43:31.976260 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:43:31.976284 kernel: device-mapper: uevent: version 1.0.3 May 13 00:43:31.976293 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 00:43:31.977227 systemd[1]: Starting dracut-cmdline-ask.service... May 13 00:43:31.979430 systemd-modules-load[197]: Inserted module 'dm_multipath' May 13 00:43:31.980764 systemd[1]: Finished systemd-modules-load.service. May 13 00:43:31.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.983671 systemd[1]: Starting systemd-sysctl.service... May 13 00:43:31.986458 kernel: audit: type=1130 audit(1747097011.982:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:43:31.988998 systemd[1]: Finished dracut-cmdline-ask.service. May 13 00:43:31.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.992047 systemd[1]: Starting dracut-cmdline.service... May 13 00:43:31.995503 kernel: audit: type=1130 audit(1747097011.990:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:31.996802 systemd[1]: Finished systemd-sysctl.service. May 13 00:43:31.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:32.002298 dracut-cmdline[220]: dracut-dracut-053 May 13 00:43:32.003290 kernel: audit: type=1130 audit(1747097011.998:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:32.005100 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:43:32.062431 kernel: Loading iSCSI transport class v2.0-870. May 13 00:43:32.078426 kernel: iscsi: registered transport (tcp) May 13 00:43:32.099426 kernel: iscsi: registered transport (qla4xxx) May 13 00:43:32.099452 kernel: QLogic iSCSI HBA Driver May 13 00:43:32.122906 systemd[1]: Finished dracut-cmdline.service. May 13 00:43:32.127716 kernel: audit: type=1130 audit(1747097012.122:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:32.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:32.124103 systemd[1]: Starting dracut-pre-udev.service... May 13 00:43:32.170433 kernel: raid6: avx2x4 gen() 30590 MB/s May 13 00:43:32.187424 kernel: raid6: avx2x4 xor() 8333 MB/s May 13 00:43:32.204423 kernel: raid6: avx2x2 gen() 32081 MB/s May 13 00:43:32.221425 kernel: raid6: avx2x2 xor() 19008 MB/s May 13 00:43:32.238424 kernel: raid6: avx2x1 gen() 26170 MB/s May 13 00:43:32.255425 kernel: raid6: avx2x1 xor() 15109 MB/s May 13 00:43:32.272429 kernel: raid6: sse2x4 gen() 14652 MB/s May 13 00:43:32.289424 kernel: raid6: sse2x4 xor() 7576 MB/s May 13 00:43:32.306427 kernel: raid6: sse2x2 gen() 16237 MB/s May 13 00:43:32.323425 kernel: raid6: sse2x2 xor() 9726 MB/s May 13 00:43:32.340426 kernel: raid6: sse2x1 gen() 12153 MB/s May 13 00:43:32.357833 kernel: raid6: sse2x1 xor() 7686 MB/s May 13 00:43:32.357860 kernel: raid6: using algorithm avx2x2 gen() 32081 MB/s May 13 00:43:32.357872 kernel: raid6: .... 
xor() 19008 MB/s, rmw enabled May 13 00:43:32.358572 kernel: raid6: using avx2x2 recovery algorithm May 13 00:43:32.370422 kernel: xor: automatically using best checksumming function avx May 13 00:43:32.459435 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 13 00:43:32.467828 systemd[1]: Finished dracut-pre-udev.service. May 13 00:43:32.472773 kernel: audit: type=1130 audit(1747097012.467:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:32.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:32.472000 audit: BPF prog-id=7 op=LOAD May 13 00:43:32.472000 audit: BPF prog-id=8 op=LOAD May 13 00:43:32.473172 systemd[1]: Starting systemd-udevd.service... May 13 00:43:32.491192 systemd-udevd[399]: Using default interface naming scheme 'v252'. May 13 00:43:32.496074 systemd[1]: Started systemd-udevd.service. May 13 00:43:32.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:32.498486 systemd[1]: Starting dracut-pre-trigger.service... May 13 00:43:32.509627 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation May 13 00:43:32.533832 systemd[1]: Finished dracut-pre-trigger.service. May 13 00:43:32.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:32.535378 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:43:32.567045 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:43:32.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:32.604762 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 00:43:32.626519 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:43:32.626545 kernel: libata version 3.00 loaded. May 13 00:43:32.626562 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 00:43:32.626571 kernel: GPT:9289727 != 19775487 May 13 00:43:32.626580 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 00:43:32.626588 kernel: GPT:9289727 != 19775487 May 13 00:43:32.626597 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:43:32.626607 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:43:32.626623 kernel: ahci 0000:00:1f.2: version 3.0 May 13 00:43:32.637010 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 13 00:43:32.637024 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 13 00:43:32.637112 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 13 00:43:32.637205 kernel: AVX2 version of gcm_enc/dec engaged. 
May 13 00:43:32.637216 kernel: scsi host0: ahci May 13 00:43:32.637302 kernel: AES CTR mode by8 optimization enabled May 13 00:43:32.637315 kernel: scsi host1: ahci May 13 00:43:32.637414 kernel: scsi host2: ahci May 13 00:43:32.637504 kernel: scsi host3: ahci May 13 00:43:32.637588 kernel: scsi host4: ahci May 13 00:43:32.637686 kernel: scsi host5: ahci May 13 00:43:32.637766 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 13 00:43:32.637776 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 13 00:43:32.637788 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 13 00:43:32.637796 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 13 00:43:32.637805 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 13 00:43:32.637814 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 13 00:43:32.641076 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 00:43:32.679746 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445) May 13 00:43:32.681789 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 00:43:32.689106 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 00:43:32.696377 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:43:32.701081 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 00:43:32.703527 systemd[1]: Starting disk-uuid.service... May 13 00:43:32.711913 disk-uuid[518]: Primary Header is updated. May 13 00:43:32.711913 disk-uuid[518]: Secondary Entries is updated. May 13 00:43:32.711913 disk-uuid[518]: Secondary Header is updated. May 13 00:43:32.715422 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:43:32.717426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:43:32.948736 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 13 00:43:32.948806 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 13 00:43:32.948816 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 13 00:43:32.952523 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 13 00:43:32.952544 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 13 00:43:32.952553 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 13 00:43:32.952562 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 13 00:43:32.953426 kernel: ata3.00: applying bridge limits May 13 00:43:32.954644 kernel: ata3.00: configured for UDMA/100 May 13 00:43:32.955427 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 13 00:43:32.991479 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 13 00:43:33.009081 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 00:43:33.009096 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 00:43:33.721869 disk-uuid[519]: The operation has completed successfully. May 13 00:43:33.723621 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:43:33.741303 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:43:33.741522 systemd[1]: Finished disk-uuid.service. May 13 00:43:33.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:43:33.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:33.760216 systemd[1]: Starting verity-setup.service... May 13 00:43:33.772443 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 13 00:43:33.792318 systemd[1]: Found device dev-mapper-usr.device. May 13 00:43:33.794282 systemd[1]: Finished verity-setup.service. May 13 00:43:33.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:33.796640 systemd[1]: Mounting sysusr-usr.mount... May 13 00:43:33.859241 systemd[1]: Mounted sysusr-usr.mount. May 13 00:43:33.860768 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 00:43:33.860831 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 00:43:33.862858 systemd[1]: Starting ignition-setup.service... May 13 00:43:33.865102 systemd[1]: Starting parse-ip-for-networkd.service... May 13 00:43:33.873336 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:43:33.873386 kernel: BTRFS info (device vda6): using free space tree May 13 00:43:33.873412 kernel: BTRFS info (device vda6): has skinny extents May 13 00:43:33.882976 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:43:33.891449 systemd[1]: Finished ignition-setup.service. May 13 00:43:33.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:33.892989 systemd[1]: Starting ignition-fetch-offline.service... May 13 00:43:33.929926 ignition[644]: Ignition 2.14.0 May 13 00:43:33.930646 ignition[644]: Stage: fetch-offline May 13 00:43:33.930684 ignition[644]: no configs at "/usr/lib/ignition/base.d" May 13 00:43:33.930692 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:43:33.930777 ignition[644]: parsed url from cmdline: "" May 13 00:43:33.930780 ignition[644]: no config URL provided May 13 00:43:33.930785 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:43:33.930791 ignition[644]: no config at "/usr/lib/ignition/user.ign" May 13 00:43:33.930805 ignition[644]: op(1): [started] loading QEMU firmware config module May 13 00:43:33.930809 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 00:43:33.933940 ignition[644]: op(1): [finished] loading QEMU firmware config module May 13 00:43:33.940476 systemd[1]: Finished parse-ip-for-networkd.service. May 13 00:43:33.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:33.942000 audit: BPF prog-id=9 op=LOAD May 13 00:43:33.943065 systemd[1]: Starting systemd-networkd.service... 
May 13 00:43:33.953608 ignition[644]: parsing config with SHA512: a34eff8fe5ff433140e5f20c5b5133f576d93941de5e12c6a80f0b242ba8a9cbcf9b4158127fadc5f0d0ce941c0f087937e1da44fec1b7cbb28c422fe2233679 May 13 00:43:33.960417 unknown[644]: fetched base config from "system" May 13 00:43:33.960434 unknown[644]: fetched user config from "qemu" May 13 00:43:33.961017 ignition[644]: fetch-offline: fetch-offline passed May 13 00:43:33.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:33.962029 systemd[1]: Finished ignition-fetch-offline.service. May 13 00:43:33.961094 ignition[644]: Ignition finished successfully May 13 00:43:33.981090 systemd-networkd[717]: lo: Link UP May 13 00:43:33.981101 systemd-networkd[717]: lo: Gained carrier May 13 00:43:33.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:33.981509 systemd-networkd[717]: Enumeration completed May 13 00:43:33.981621 systemd[1]: Started systemd-networkd.service. May 13 00:43:33.981713 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:43:33.982595 systemd-networkd[717]: eth0: Link UP May 13 00:43:33.982599 systemd-networkd[717]: eth0: Gained carrier May 13 00:43:33.983332 systemd[1]: Reached target network.target. May 13 00:43:33.984934 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:43:33.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:33.985656 systemd[1]: Starting ignition-kargs.service... May 13 00:43:33.996640 iscsid[723]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 00:43:33.996640 iscsid[723]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 13 00:43:33.996640 iscsid[723]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 13 00:43:33.996640 iscsid[723]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 00:43:33.996640 iscsid[723]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 00:43:33.996640 iscsid[723]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 00:43:33.996640 iscsid[723]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 00:43:33.987371 systemd[1]: Starting iscsiuio.service... May 13 00:43:33.991517 systemd[1]: Started iscsiuio.service. May 13 00:43:33.993448 systemd[1]: Starting iscsid.service... May 13 00:43:33.993846 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:43:33.997573 systemd[1]: Started iscsid.service. 
May 13 00:43:34.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:34.012768 systemd[1]: Starting dracut-initqueue.service... May 13 00:43:34.015914 ignition[719]: Ignition 2.14.0 May 13 00:43:34.015927 ignition[719]: Stage: kargs May 13 00:43:34.016011 ignition[719]: no configs at "/usr/lib/ignition/base.d" May 13 00:43:34.016019 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:43:34.016904 ignition[719]: kargs: kargs passed May 13 00:43:34.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:34.018419 systemd[1]: Finished ignition-kargs.service. May 13 00:43:34.016936 ignition[719]: Ignition finished successfully May 13 00:43:34.020003 systemd[1]: Starting ignition-disks.service... May 13 00:43:34.026736 ignition[730]: Ignition 2.14.0 May 13 00:43:34.026746 ignition[730]: Stage: disks May 13 00:43:34.026847 ignition[730]: no configs at "/usr/lib/ignition/base.d" May 13 00:43:34.026859 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:43:34.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:34.028867 systemd[1]: Finished ignition-disks.service. May 13 00:43:34.028007 ignition[730]: disks: disks passed May 13 00:43:34.029810 systemd[1]: Reached target initrd-root-device.target. May 13 00:43:34.028048 ignition[730]: Ignition finished successfully May 13 00:43:34.031619 systemd[1]: Reached target local-fs-pre.target. May 13 00:43:34.032954 systemd[1]: Reached target local-fs.target. May 13 00:43:34.034283 systemd[1]: Reached target sysinit.target. May 13 00:43:34.035798 systemd[1]: Reached target basic.target. May 13 00:43:34.042388 systemd[1]: Finished dracut-initqueue.service. May 13 00:43:34.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:34.044000 systemd[1]: Reached target remote-fs-pre.target. May 13 00:43:34.044382 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:43:34.045928 systemd[1]: Reached target remote-fs.target. May 13 00:43:34.049134 systemd[1]: Starting dracut-pre-mount.service... May 13 00:43:34.056931 systemd[1]: Finished dracut-pre-mount.service. May 13 00:43:34.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:34.058200 systemd[1]: Starting systemd-fsck-root.service... May 13 00:43:34.065842 systemd-resolved[198]: Detected conflict on linux IN A 10.0.0.77 May 13 00:43:34.065860 systemd-resolved[198]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. May 13 00:43:34.067234 systemd-fsck[750]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 13 00:43:34.071914 systemd[1]: Finished systemd-fsck-root.service. 
May 13 00:43:34.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:34.073259 systemd[1]: Mounting sysroot.mount... May 13 00:43:34.080432 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 13 00:43:34.080991 systemd[1]: Mounted sysroot.mount. May 13 00:43:34.081711 systemd[1]: Reached target initrd-root-fs.target. May 13 00:43:34.083923 systemd[1]: Mounting sysroot-usr.mount... May 13 00:43:34.085048 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 00:43:34.085087 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:43:34.085111 systemd[1]: Reached target ignition-diskful.target. May 13 00:43:34.086958 systemd[1]: Mounted sysroot-usr.mount. May 13 00:43:34.089365 systemd[1]: Starting initrd-setup-root.service... May 13 00:43:34.096270 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:43:34.100385 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory May 13 00:43:34.104068 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:43:34.107617 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:43:34.136883 systemd[1]: Finished initrd-setup-root.service. May 13 00:43:34.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:34.138504 systemd[1]: Starting ignition-mount.service... May 13 00:43:34.139673 systemd[1]: Starting sysroot-boot.service... May 13 00:43:34.143875 bash[801]: umount: /sysroot/usr/share/oem: not mounted. May 13 00:43:34.152664 ignition[802]: INFO : Ignition 2.14.0 May 13 00:43:34.152664 ignition[802]: INFO : Stage: mount May 13 00:43:34.154256 ignition[802]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:43:34.154256 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:43:34.154256 ignition[802]: INFO : mount: mount passed May 13 00:43:34.154256 ignition[802]: INFO : Ignition finished successfully May 13 00:43:34.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:34.154962 systemd[1]: Finished ignition-mount.service. May 13 00:43:34.162190 systemd[1]: Finished sysroot-boot.service. May 13 00:43:34.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:34.805594 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 00:43:34.813428 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) May 13 00:43:34.813480 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:43:34.815663 kernel: BTRFS info (device vda6): using free space tree May 13 00:43:34.815692 kernel: BTRFS info (device vda6): has skinny extents May 13 00:43:34.820204 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 13 00:43:34.822952 systemd[1]: Starting ignition-files.service... May 13 00:43:34.840337 ignition[832]: INFO : Ignition 2.14.0 May 13 00:43:34.840337 ignition[832]: INFO : Stage: files May 13 00:43:34.842364 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:43:34.842364 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:43:34.842364 ignition[832]: DEBUG : files: compiled without relabeling support, skipping May 13 00:43:34.846754 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:43:34.846754 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:43:34.850113 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:43:34.851649 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:43:34.853706 unknown[832]: wrote ssh authorized keys file for user: core May 13 00:43:34.855026 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:43:34.856681 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:43:34.856681 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:43:34.856681 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:43:34.856681 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 00:43:34.914305 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:43:35.003697 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:43:35.003697 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:43:35.008545 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 00:43:35.533737 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 00:43:35.877536 systemd-networkd[717]: eth0: Gained IPv6LL May 13 00:43:35.900737 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:43:35.900737 ignition[832]: INFO : files: op(c): [started] processing unit "containerd.service" May 13 00:43:35.905199 ignition[832]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:43:35.905199 ignition[832]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:43:35.905199 ignition[832]: INFO : files: op(c): [finished] processing unit "containerd.service" May 13 00:43:35.905199 ignition[832]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 13 00:43:35.905199 ignition[832]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:43:35.905199 ignition[832]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:43:35.905199 ignition[832]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 13 00:43:35.905199 ignition[832]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 13 00:43:35.921686 ignition[832]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:43:35.924249 ignition[832]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:43:35.924249 ignition[832]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 13 00:43:35.924249 ignition[832]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 13 00:43:35.930221 ignition[832]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:43:35.930221 ignition[832]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:43:35.930221 ignition[832]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:43:35.953765 ignition[832]: INFO : files: op(13): op(14): 
[finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:43:35.955481 ignition[832]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:43:35.955481 ignition[832]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:43:35.955481 ignition[832]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:43:35.955481 ignition[832]: INFO : files: files passed May 13 00:43:35.955481 ignition[832]: INFO : Ignition finished successfully May 13 00:43:35.964224 systemd[1]: Finished ignition-files.service. May 13 00:43:35.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:35.965650 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 00:43:35.966437 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 00:43:35.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:35.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:35.967053 systemd[1]: Starting ignition-quench.service... May 13 00:43:35.973792 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 13 00:43:35.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:35.970368 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:43:35.979939 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:43:35.970493 systemd[1]: Finished ignition-quench.service. May 13 00:43:35.973908 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 00:43:35.976753 systemd[1]: Reached target ignition-complete.target. May 13 00:43:35.978704 systemd[1]: Starting initrd-parse-etc.service... May 13 00:43:35.992448 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:43:35.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:35.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:35.992562 systemd[1]: Finished initrd-parse-etc.service. May 13 00:43:35.994863 systemd[1]: Reached target initrd-fs.target. May 13 00:43:35.995873 systemd[1]: Reached target initrd.target. May 13 00:43:35.997711 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 00:43:35.998653 systemd[1]: Starting dracut-pre-pivot.service... May 13 00:43:36.010355 systemd[1]: Finished dracut-pre-pivot.service. 
May 13 00:43:36.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.012886 systemd[1]: Starting initrd-cleanup.service... May 13 00:43:36.021395 systemd[1]: Stopped target nss-lookup.target. May 13 00:43:36.022517 systemd[1]: Stopped target remote-cryptsetup.target. May 13 00:43:36.024455 systemd[1]: Stopped target timers.target. May 13 00:43:36.026388 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:43:36.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.026505 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:43:36.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:43:36.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.028414 systemd[1]: Stopped target initrd.target. May 13 00:43:36.030272 systemd[1]: Stopped target basic.target. May 13 00:43:36.071887 ignition[873]: INFO : Ignition 2.14.0 May 13 00:43:36.071887 ignition[873]: INFO : Stage: umount May 13 00:43:36.071887 ignition[873]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:43:36.071887 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:43:36.071887 ignition[873]: INFO : umount: umount passed May 13 00:43:36.071887 ignition[873]: INFO : Ignition finished successfully May 13 00:43:36.071000 audit: BPF prog-id=6 op=UNLOAD May 13 00:43:36.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.030819 systemd[1]: Stopped target ignition-complete.target. May 13 00:43:36.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.031031 systemd[1]: Stopped target ignition-diskful.target. May 13 00:43:36.031197 systemd[1]: Stopped target initrd-root-device.target. May 13 00:43:36.031392 systemd[1]: Stopped target remote-fs.target. May 13 00:43:36.031811 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:43:36.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.032030 systemd[1]: Stopped target sysinit.target. May 13 00:43:36.032205 systemd[1]: Stopped target local-fs.target. May 13 00:43:36.032369 systemd[1]: Stopped target local-fs-pre.target. May 13 00:43:36.032751 systemd[1]: Stopped target swap.target. May 13 00:43:36.032948 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:43:36.033044 systemd[1]: Stopped dracut-pre-mount.service. 
May 13 00:43:36.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.033237 systemd[1]: Stopped target cryptsetup.target. May 13 00:43:36.033375 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:43:36.033460 systemd[1]: Stopped dracut-initqueue.service. May 13 00:43:36.033871 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:43:36.033960 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:43:36.034129 systemd[1]: Stopped target paths.target. May 13 00:43:36.034249 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:43:36.037463 systemd[1]: Stopped systemd-ask-password-console.path. May 13 00:43:36.037640 systemd[1]: Stopped target slices.target. May 13 00:43:36.037831 systemd[1]: Stopped target sockets.target. May 13 00:43:36.038046 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:43:36.038108 systemd[1]: Closed iscsid.socket. May 13 00:43:36.038284 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:43:36.038362 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:43:36.038717 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:43:36.038789 systemd[1]: Stopped ignition-files.service. May 13 00:43:36.039706 systemd[1]: Stopping ignition-mount.service... May 13 00:43:36.040096 systemd[1]: Stopping iscsiuio.service... May 13 00:43:36.040275 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:43:36.040382 systemd[1]: Stopped kmod-static-nodes.service. May 13 00:43:36.041390 systemd[1]: Stopping sysroot-boot.service... May 13 00:43:36.041755 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:43:36.041902 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:43:36.042096 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:43:36.042203 systemd[1]: Stopped dracut-pre-trigger.service. May 13 00:43:36.044333 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 00:43:36.044414 systemd[1]: Stopped iscsiuio.service. May 13 00:43:36.045555 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:43:36.045613 systemd[1]: Finished initrd-cleanup.service. May 13 00:43:36.048785 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:43:36.048813 systemd[1]: Closed iscsiuio.socket. May 13 00:43:36.051160 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:43:36.051262 systemd[1]: Stopped ignition-mount.service. May 13 00:43:36.051930 systemd[1]: Stopped target network.target. May 13 00:43:36.052025 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:43:36.052061 systemd[1]: Stopped ignition-disks.service. May 13 00:43:36.052301 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:43:36.052332 systemd[1]: Stopped ignition-kargs.service. May 13 00:43:36.052611 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:43:36.052645 systemd[1]: Stopped ignition-setup.service. May 13 00:43:36.052948 systemd[1]: Stopping systemd-networkd.service... May 13 00:43:36.053134 systemd[1]: Stopping systemd-resolved.service... May 13 00:43:36.058620 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:43:36.068198 systemd[1]: systemd-resolved.service: Deactivated successfully. 
May 13 00:43:36.068320 systemd[1]: Stopped systemd-resolved.service. May 13 00:43:36.071092 systemd-networkd[717]: eth0: DHCPv6 lease lost May 13 00:43:36.115000 audit: BPF prog-id=9 op=UNLOAD May 13 00:43:36.072266 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:43:36.072360 systemd[1]: Stopped systemd-networkd.service. May 13 00:43:36.075512 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:43:36.075581 systemd[1]: Closed systemd-networkd.socket. May 13 00:43:36.077990 systemd[1]: Stopping network-cleanup.service... May 13 00:43:36.079708 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:43:36.079823 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 00:43:36.081838 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:43:36.081900 systemd[1]: Stopped systemd-sysctl.service. May 13 00:43:36.084011 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:43:36.084053 systemd[1]: Stopped systemd-modules-load.service. May 13 00:43:36.085039 systemd[1]: Stopping systemd-udevd.service... May 13 00:43:36.086280 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:43:36.090147 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:43:36.090289 systemd[1]: Stopped network-cleanup.service. May 13 00:43:36.096763 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:43:36.096884 systemd[1]: Stopped systemd-udevd.service. May 13 00:43:36.099897 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:43:36.099937 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:43:36.102005 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:43:36.102055 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:43:36.105173 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:43:36.105239 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:43:36.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.164699 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:43:36.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.164773 systemd[1]: Stopped dracut-cmdline.service. May 13 00:43:36.166842 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:43:36.166885 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 00:43:36.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.171587 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 00:43:36.177092 kernel: kauditd_printk_skb: 57 callbacks suppressed May 13 00:43:36.177126 kernel: audit: type=1131 audit(1747097016.170:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.175940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 13 00:43:36.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.177027 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 00:43:36.184307 kernel: audit: type=1131 audit(1747097016.179:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.179722 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:43:36.179825 systemd[1]: Stopped sysroot-boot.service. May 13 00:43:36.190601 kernel: audit: type=1131 audit(1747097016.186:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.190930 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:43:36.191052 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 00:43:36.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.194749 systemd[1]: Reached target initrd-switch-root.target. May 13 00:43:36.203420 kernel: audit: type=1130 audit(1747097016.194:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.203457 kernel: audit: type=1131 audit(1747097016.194:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.203351 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:43:36.203467 systemd[1]: Stopped initrd-setup-root.service. May 13 00:43:36.209993 kernel: audit: type=1131 audit(1747097016.205:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:36.206599 systemd[1]: Starting initrd-switch-root.service... May 13 00:43:36.214291 systemd[1]: Switching root. 
May 13 00:43:36.216000 audit: BPF prog-id=8 op=UNLOAD May 13 00:43:36.216000 audit: BPF prog-id=7 op=UNLOAD May 13 00:43:36.218612 kernel: audit: type=1334 audit(1747097016.216:74): prog-id=8 op=UNLOAD May 13 00:43:36.218647 kernel: audit: type=1334 audit(1747097016.216:75): prog-id=7 op=UNLOAD May 13 00:43:36.218661 kernel: audit: type=1334 audit(1747097016.217:76): prog-id=5 op=UNLOAD May 13 00:43:36.217000 audit: BPF prog-id=5 op=UNLOAD May 13 00:43:36.219800 kernel: audit: type=1334 audit(1747097016.219:77): prog-id=4 op=UNLOAD May 13 00:43:36.219000 audit: BPF prog-id=4 op=UNLOAD May 13 00:43:36.220000 audit: BPF prog-id=3 op=UNLOAD May 13 00:43:36.243923 iscsid[723]: iscsid shutting down. May 13 00:43:36.244876 systemd-journald[196]: Received SIGTERM from PID 1 (n/a). May 13 00:43:36.244944 systemd-journald[196]: Journal stopped May 13 00:43:38.961657 kernel: SELinux: Class mctp_socket not defined in policy. May 13 00:43:38.961718 kernel: SELinux: Class anon_inode not defined in policy. May 13 00:43:38.961733 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 00:43:38.961745 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:43:38.961757 kernel: SELinux: policy capability open_perms=1 May 13 00:43:38.961769 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:43:38.961781 kernel: SELinux: policy capability always_check_network=0 May 13 00:43:38.961793 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:43:38.961808 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:43:38.961820 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:43:38.961832 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:43:38.961848 systemd[1]: Successfully loaded SELinux policy in 47.977ms. May 13 00:43:38.961871 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.491ms. May 13 00:43:38.961886 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:43:38.961900 systemd[1]: Detected virtualization kvm. May 13 00:43:38.961914 systemd[1]: Detected architecture x86-64. May 13 00:43:38.961929 systemd[1]: Detected first boot. May 13 00:43:38.961943 systemd[1]: Initializing machine ID from VM UUID. May 13 00:43:38.961957 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 00:43:38.961974 systemd[1]: Populated /etc with preset unit settings. May 13 00:43:38.961988 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:43:38.962005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:43:38.962024 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:43:38.962039 systemd[1]: Queued start job for default target multi-user.target. May 13 00:43:38.962052 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
May 13 00:43:38.962066 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 00:43:38.962080 systemd[1]: Created slice system-addon\x2drun.slice. May 13 00:43:38.962094 systemd[1]: Created slice system-getty.slice. May 13 00:43:38.962107 systemd[1]: Created slice system-modprobe.slice. May 13 00:43:38.962120 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 00:43:38.962136 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:43:38.962150 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:43:38.962166 systemd[1]: Created slice user.slice. May 13 00:43:38.962179 systemd[1]: Started systemd-ask-password-console.path. May 13 00:43:38.962193 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:43:38.962210 systemd[1]: Set up automount boot.automount. May 13 00:43:38.962226 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:43:38.962241 systemd[1]: Reached target integritysetup.target. May 13 00:43:38.962254 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:43:38.962268 systemd[1]: Reached target remote-fs.target. May 13 00:43:38.962281 systemd[1]: Reached target slices.target. May 13 00:43:38.962294 systemd[1]: Reached target swap.target. May 13 00:43:38.962308 systemd[1]: Reached target torcx.target. May 13 00:43:38.962321 systemd[1]: Reached target veritysetup.target. May 13 00:43:38.962335 systemd[1]: Listening on systemd-coredump.socket. May 13 00:43:38.962349 systemd[1]: Listening on systemd-initctl.socket. May 13 00:43:38.962362 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:43:38.962378 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:43:38.962393 systemd[1]: Listening on systemd-journald.socket. May 13 00:43:38.962423 systemd[1]: Listening on systemd-networkd.socket. May 13 00:43:38.962437 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:43:38.962450 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:43:38.962464 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:43:38.962488 systemd[1]: Mounting dev-hugepages.mount... May 13 00:43:38.962503 systemd[1]: Mounting dev-mqueue.mount... May 13 00:43:38.962516 systemd[1]: Mounting media.mount... May 13 00:43:38.962533 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:38.962547 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:43:38.962561 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:43:38.962575 systemd[1]: Mounting tmp.mount... May 13 00:43:38.962588 systemd[1]: Starting flatcar-tmpfiles.service... May 13 00:43:38.962604 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:43:38.962618 systemd[1]: Starting kmod-static-nodes.service... May 13 00:43:38.962632 systemd[1]: Starting modprobe@configfs.service... May 13 00:43:38.962645 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:43:38.962661 systemd[1]: Starting modprobe@drm.service... May 13 00:43:38.962674 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:43:38.962688 systemd[1]: Starting modprobe@fuse.service... May 13 00:43:38.962702 systemd[1]: Starting modprobe@loop.service... May 13 00:43:38.962717 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 13 00:43:38.962732 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 13 00:43:38.962746 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 13 00:43:38.962760 systemd[1]: Starting systemd-journald.service... May 13 00:43:38.962773 kernel: fuse: init (API version 7.34) May 13 00:43:38.962791 systemd[1]: Starting systemd-modules-load.service... May 13 00:43:38.962804 kernel: loop: module loaded May 13 00:43:38.962818 systemd[1]: Starting systemd-network-generator.service... May 13 00:43:38.962832 systemd[1]: Starting systemd-remount-fs.service... May 13 00:43:38.962847 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:43:38.962861 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:38.962878 systemd-journald[1023]: Journal started May 13 00:43:38.962927 systemd-journald[1023]: Runtime Journal (/run/log/journal/99e18d499e1b40c28e4dff6c1d9ce57f) is 6.0M, max 48.5M, 42.5M free. May 13 00:43:38.868000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:43:38.868000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 13 00:43:38.959000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:43:38.959000 audit[1023]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc8853ad30 a2=4000 a3=7ffc8853adcc items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:43:38.959000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:43:38.967422 systemd[1]: Started systemd-journald.service. May 13 00:43:38.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.966175 systemd[1]: Mounted dev-hugepages.mount. May 13 00:43:38.967091 systemd[1]: Mounted dev-mqueue.mount. May 13 00:43:38.967951 systemd[1]: Mounted media.mount. May 13 00:43:38.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.969652 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:43:38.970572 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:43:38.971459 systemd[1]: Mounted tmp.mount. May 13 00:43:38.972537 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:43:38.973998 systemd[1]: Finished kmod-static-nodes.service. May 13 00:43:38.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.975155 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
May 13 00:43:38.975384 systemd[1]: Finished modprobe@configfs.service. May 13 00:43:38.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.976593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:43:38.976804 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:43:38.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.977962 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:43:38.978150 systemd[1]: Finished modprobe@drm.service. May 13 00:43:38.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.979271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:43:38.979478 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:43:38.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.980896 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:43:38.981140 systemd[1]: Finished modprobe@fuse.service. May 13 00:43:38.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.982186 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:43:38.982356 systemd[1]: Finished modprobe@loop.service. May 13 00:43:38.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:43:38.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.983607 systemd[1]: Finished systemd-modules-load.service. May 13 00:43:38.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.985066 systemd[1]: Finished systemd-network-generator.service. May 13 00:43:38.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.986764 systemd[1]: Finished systemd-remount-fs.service. May 13 00:43:38.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:38.988455 systemd[1]: Reached target network-pre.target. May 13 00:43:38.991038 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:43:38.992890 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:43:38.994233 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:43:38.995814 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:43:38.997982 systemd[1]: Starting systemd-journal-flush.service... May 13 00:43:38.999181 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:43:39.000156 systemd[1]: Starting systemd-random-seed.service... May 13 00:43:39.001294 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:43:39.002528 systemd[1]: Starting systemd-sysctl.service... May 13 00:43:39.004813 systemd[1]: Starting systemd-sysusers.service... May 13 00:43:39.006554 systemd-journald[1023]: Time spent on flushing to /var/log/journal/99e18d499e1b40c28e4dff6c1d9ce57f is 20.528ms for 1035 entries. May 13 00:43:39.006554 systemd-journald[1023]: System Journal (/var/log/journal/99e18d499e1b40c28e4dff6c1d9ce57f) is 8.0M, max 195.6M, 187.6M free. May 13 00:43:39.041676 systemd-journald[1023]: Received client request to flush runtime journal. May 13 00:43:39.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:43:39.008034 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:43:39.010127 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:43:39.042195 udevadm[1058]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 00:43:39.011588 systemd[1]: Finished systemd-random-seed.service. May 13 00:43:39.012626 systemd[1]: Reached target first-boot-complete.target. May 13 00:43:39.020733 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:43:39.023239 systemd[1]: Starting systemd-udev-settle.service... May 13 00:43:39.027726 systemd[1]: Finished systemd-sysctl.service. May 13 00:43:39.035979 systemd[1]: Finished systemd-sysusers.service. May 13 00:43:39.038080 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:43:39.042773 systemd[1]: Finished systemd-journal-flush.service. May 13 00:43:39.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.054583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:43:39.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.442845 systemd[1]: Finished systemd-hwdb-update.service. May 13 00:43:39.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.445351 systemd[1]: Starting systemd-udevd.service... May 13 00:43:39.461310 systemd-udevd[1068]: Using default interface naming scheme 'v252'. May 13 00:43:39.474001 systemd[1]: Started systemd-udevd.service. May 13 00:43:39.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.478294 systemd[1]: Starting systemd-networkd.service... May 13 00:43:39.484078 systemd[1]: Starting systemd-userdbd.service... May 13 00:43:39.511913 systemd[1]: Found device dev-ttyS0.device. May 13 00:43:39.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.517036 systemd[1]: Started systemd-userdbd.service. May 13 00:43:39.542055 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:43:39.556443 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:43:39.560712 systemd-networkd[1080]: lo: Link UP May 13 00:43:39.560972 systemd-networkd[1080]: lo: Gained carrier May 13 00:43:39.561386 systemd-networkd[1080]: Enumeration completed May 13 00:43:39.561590 systemd[1]: Started systemd-networkd.service. May 13 00:43:39.562036 systemd-networkd[1080]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:43:39.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 13 00:43:39.560000 audit[1088]: AVC avc: denied { confidentiality } for pid=1088 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 00:43:39.560000 audit[1088]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56249c1e0c10 a1=338ac a2=7f59e93ccbc5 a3=5 items=110 ppid=1068 pid=1088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:43:39.560000 audit: CWD cwd="/" May 13 00:43:39.560000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=1 name=(null) inode=14480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=2 name=(null) inode=14480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=3 name=(null) inode=14481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=4 name=(null) inode=14480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=5 name=(null) inode=14482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=6 name=(null) inode=14480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=7 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=8 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=9 name=(null) inode=14484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=10 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=11 name=(null) inode=14485 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=12 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=13 name=(null) inode=14486 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=14 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=15 name=(null) inode=14487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=16 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=17 name=(null) inode=14488 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=18 name=(null) inode=14480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=19 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=20 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=21 name=(null) inode=14490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=22 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=23 name=(null) inode=14491 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=24 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=25 name=(null) inode=14492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=26 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=27 name=(null) inode=14493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.564427 kernel: ACPI: button: Power Button [PWRF] May 13 00:43:39.560000 audit: PATH item=28 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=29 name=(null) inode=14494 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=30 name=(null) inode=14480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=31 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=32 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=33 name=(null) inode=14496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=34 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=35 name=(null) inode=14497 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=36 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=37 name=(null) inode=14498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=38 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=39 name=(null) inode=14499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=40 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=41 name=(null) inode=14500 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=42 name=(null) inode=14480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=43 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=44 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=45 name=(null) inode=14502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=46 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=47 name=(null) inode=14503 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=48 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=49 name=(null) inode=14504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=50 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=51 name=(null) inode=14505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=52 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=53 name=(null) inode=14506 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=55 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=56 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=57 name=(null) inode=14508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=58 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=59 name=(null) inode=14509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=60 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=61 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 
00:43:39.560000 audit: PATH item=62 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=63 name=(null) inode=14511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=64 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=65 name=(null) inode=14512 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=66 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=67 name=(null) inode=14513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=68 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=69 name=(null) inode=14514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=70 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=71 name=(null) inode=14515 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=72 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=73 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=74 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=75 name=(null) inode=14517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=76 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=77 name=(null) inode=14518 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=78 name=(null) inode=14516 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=79 name=(null) inode=14519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=80 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=81 name=(null) inode=14520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=82 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=83 name=(null) inode=14521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=84 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=85 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=86 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=87 name=(null) inode=14523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=88 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=89 name=(null) inode=14524 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=90 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=91 name=(null) inode=14525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=92 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=93 name=(null) inode=14526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=94 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=95 name=(null) inode=14527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=96 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=97 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=98 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=99 name=(null) inode=14529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=100 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=101 name=(null) inode=14530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=102 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=103 name=(null) inode=14531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=104 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=105 name=(null) inode=14532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=106 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=107 name=(null) inode=14533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PATH item=109 name=(null) inode=14534 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:43:39.560000 audit: PROCTITLE proctitle="(udev-worker)" May 13 00:43:39.564735 systemd-networkd[1080]: eth0: Link UP May 13 00:43:39.564741 systemd-networkd[1080]: eth0: Gained carrier May 13 00:43:39.569211 
kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 00:43:39.570506 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 00:43:39.570613 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 00:43:39.577586 systemd-networkd[1080]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:43:39.580425 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:43:39.595423 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:43:39.645021 kernel: kvm: Nested Virtualization enabled May 13 00:43:39.645113 kernel: SVM: kvm: Nested Paging enabled May 13 00:43:39.646367 kernel: SVM: Virtual VMLOAD VMSAVE supported May 13 00:43:39.646410 kernel: SVM: Virtual GIF supported May 13 00:43:39.663423 kernel: EDAC MC: Ver: 3.0.0 May 13 00:43:39.689057 systemd[1]: Finished systemd-udev-settle.service. May 13 00:43:39.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.691473 systemd[1]: Starting lvm2-activation-early.service... May 13 00:43:39.699303 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:43:39.723393 systemd[1]: Finished lvm2-activation-early.service. May 13 00:43:39.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.724465 systemd[1]: Reached target cryptsetup.target. May 13 00:43:39.726347 systemd[1]: Starting lvm2-activation.service... May 13 00:43:39.730440 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:43:39.756362 systemd[1]: Finished lvm2-activation.service. May 13 00:43:39.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.757364 systemd[1]: Reached target local-fs-pre.target. May 13 00:43:39.758198 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:43:39.758214 systemd[1]: Reached target local-fs.target. May 13 00:43:39.759007 systemd[1]: Reached target machines.target. May 13 00:43:39.760911 systemd[1]: Starting ldconfig.service... May 13 00:43:39.761867 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:43:39.761907 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:43:39.762832 systemd[1]: Starting systemd-boot-update.service... May 13 00:43:39.764892 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:43:39.767164 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:43:39.769202 systemd[1]: Starting systemd-sysext.service... May 13 00:43:39.770561 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) May 13 00:43:39.772040 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
May 13 00:43:39.778967 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:43:39.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.780089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:43:39.784137 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:43:39.784328 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:43:39.795437 kernel: loop0: detected capacity change from 0 to 210664 May 13 00:43:39.812830 systemd-fsck[1122]: fsck.fat 4.2 (2021-01-31) May 13 00:43:39.812830 systemd-fsck[1122]: /dev/vda1: 790 files, 120692/258078 clusters May 13 00:43:39.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:39.814635 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:43:39.817723 systemd[1]: Mounting boot.mount... May 13 00:43:39.829140 systemd[1]: Mounted boot.mount. May 13 00:43:40.048018 systemd[1]: Finished systemd-boot-update.service. May 13 00:43:40.048470 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:43:40.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.059493 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:43:40.060321 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:43:40.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.068430 kernel: loop1: detected capacity change from 0 to 210664 May 13 00:43:40.072948 (sd-sysext)[1131]: Using extensions 'kubernetes'. May 13 00:43:40.073572 (sd-sysext)[1131]: Merged extensions into '/usr'. May 13 00:43:40.090219 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:40.091615 systemd[1]: Mounting usr-share-oem.mount... May 13 00:43:40.092795 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:43:40.094637 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:43:40.097034 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:43:40.099328 systemd[1]: Starting modprobe@loop.service... May 13 00:43:40.100362 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:43:40.100562 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:43:40.100969 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:40.105026 systemd[1]: Mounted usr-share-oem.mount. May 13 00:43:40.106344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 13 00:43:40.106592 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:43:40.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.108037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:43:40.108215 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:43:40.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.109705 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:43:40.109902 systemd[1]: Finished modprobe@loop.service. May 13 00:43:40.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.111123 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:43:40.111303 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:43:40.111468 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:43:40.112996 systemd[1]: Finished systemd-sysext.service. May 13 00:43:40.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.115235 systemd[1]: Starting ensure-sysext.service... May 13 00:43:40.116988 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:43:40.118523 systemd[1]: Finished ldconfig.service. May 13 00:43:40.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.122992 systemd[1]: Reloading. May 13 00:43:40.127527 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:43:40.128620 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:43:40.130074 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 13 00:43:40.173218 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-05-13T00:43:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:43:40.173674 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-05-13T00:43:40Z" level=info msg="torcx already run" May 13 00:43:40.263689 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:43:40.263715 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:43:40.288293 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:43:40.354604 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:43:40.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.358276 systemd[1]: Starting audit-rules.service... May 13 00:43:40.360686 systemd[1]: Starting clean-ca-certificates.service... May 13 00:43:40.362761 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:43:40.365460 systemd[1]: Starting systemd-resolved.service... May 13 00:43:40.368031 systemd[1]: Starting systemd-timesyncd.service... May 13 00:43:40.370349 systemd[1]: Starting systemd-update-utmp.service... May 13 00:43:40.372248 systemd[1]: Finished clean-ca-certificates.service. May 13 00:43:40.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.375651 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:43:40.377000 audit[1226]: SYSTEM_BOOT pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:43:40.380159 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:40.380478 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:43:40.381797 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:43:40.384076 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:43:40.386503 systemd[1]: Starting modprobe@loop.service... May 13 00:43:40.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:43:40.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.387597 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:43:40.387812 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:43:40.388004 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:43:40.388162 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:40.390077 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:43:40.392330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:43:40.392541 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:43:40.394499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:43:40.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.394684 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:43:40.397188 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:43:40.397698 systemd[1]: Finished modprobe@loop.service. May 13 00:43:40.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:43:40.401000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:43:40.401000 audit[1243]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc4235e2c0 a2=420 a3=0 items=0 ppid=1214 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:43:40.401000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:43:40.401854 augenrules[1243]: No rules May 13 00:43:40.402781 systemd[1]: Finished systemd-update-utmp.service. May 13 00:43:40.404645 systemd[1]: Finished audit-rules.service. 
May 13 00:43:40.407053 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:40.407335 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:43:40.409361 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:43:40.411878 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:43:40.414728 systemd[1]: Starting modprobe@loop.service... May 13 00:43:40.415965 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:43:40.416106 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:43:40.417665 systemd[1]: Starting systemd-update-done.service... May 13 00:43:40.419272 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:43:40.419420 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:40.420847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:43:40.421044 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:43:40.423242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:43:40.423450 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:43:40.425426 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:43:40.425760 systemd[1]: Finished modprobe@loop.service. May 13 00:43:40.427581 systemd[1]: Finished systemd-update-done.service. May 13 00:43:40.429219 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:43:40.429336 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:43:40.433030 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:40.433837 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:43:40.436128 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:43:40.438660 systemd[1]: Starting modprobe@drm.service... May 13 00:43:40.441083 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:43:40.445765 systemd[1]: Starting modprobe@loop.service... May 13 00:43:40.447636 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:43:40.447783 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:43:40.449209 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:43:40.450474 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:43:40.450578 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:43:40.451638 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:43:40.451811 systemd[1]: Finished modprobe@dm_mod.service. 
May 13 00:43:40.452347 systemd-resolved[1219]: Positive Trust Anchors: May 13 00:43:40.452362 systemd-resolved[1219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:43:40.452412 systemd-resolved[1219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:43:40.453307 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:43:40.453493 systemd[1]: Finished modprobe@drm.service. May 13 00:43:40.454917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:43:40.455059 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:43:40.456627 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:43:40.456774 systemd[1]: Finished modprobe@loop.service. May 13 00:43:40.458364 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:43:40.458600 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:43:40.461117 systemd[1]: Finished ensure-sysext.service. May 13 00:43:40.462825 systemd-resolved[1219]: Defaulting to hostname 'linux'. May 13 00:43:40.464511 systemd[1]: Started systemd-resolved.service. May 13 00:43:40.465674 systemd[1]: Reached target network.target. May 13 00:43:40.466549 systemd[1]: Reached target nss-lookup.target. May 13 00:43:40.476121 systemd[1]: Started systemd-timesyncd.service. May 13 00:43:40.477355 systemd[1]: Reached target sysinit.target. May 13 00:43:40.478555 systemd-timesyncd[1222]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:43:40.478609 systemd-timesyncd[1222]: Initial clock synchronization to Tue 2025-05-13 00:43:40.855735 UTC. May 13 00:43:40.478681 systemd[1]: Started motdgen.path. May 13 00:43:40.479659 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:43:40.480896 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:43:40.481849 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:43:40.481875 systemd[1]: Reached target paths.target. May 13 00:43:40.482665 systemd[1]: Reached target time-set.target. May 13 00:43:40.483653 systemd[1]: Started logrotate.timer. May 13 00:43:40.484528 systemd[1]: Started mdadm.timer. May 13 00:43:40.485196 systemd[1]: Reached target timers.target. May 13 00:43:40.486292 systemd[1]: Listening on dbus.socket. May 13 00:43:40.488128 systemd[1]: Starting docker.socket... May 13 00:43:40.489799 systemd[1]: Listening on sshd.socket. May 13 00:43:40.490635 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:43:40.490930 systemd[1]: Listening on docker.socket. May 13 00:43:40.491722 systemd[1]: Reached target sockets.target. May 13 00:43:40.492519 systemd[1]: Reached target basic.target. 
May 13 00:43:40.493467 systemd[1]: System is tainted: cgroupsv1 May 13 00:43:40.493508 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:43:40.493526 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:43:40.494391 systemd[1]: Starting containerd.service... May 13 00:43:40.496018 systemd[1]: Starting dbus.service... May 13 00:43:40.497772 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:43:40.499793 systemd[1]: Starting extend-filesystems.service... May 13 00:43:40.500718 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:43:40.501743 systemd[1]: Starting motdgen.service... May 13 00:43:40.503389 systemd[1]: Starting prepare-helm.service... May 13 00:43:40.504071 jq[1277]: false May 13 00:43:40.505237 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:43:40.506928 systemd[1]: Starting sshd-keygen.service... May 13 00:43:40.509345 systemd[1]: Starting systemd-logind.service... May 13 00:43:40.510162 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:43:40.510214 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:43:40.511177 systemd[1]: Starting update-engine.service... May 13 00:43:40.515230 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:43:40.517574 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:43:40.517807 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:43:40.518483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:43:40.518680 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:43:40.522076 jq[1294]: true May 13 00:43:40.529602 tar[1296]: linux-amd64/helm May 13 00:43:40.530558 jq[1303]: true May 13 00:43:40.530745 extend-filesystems[1278]: Found loop1 May 13 00:43:40.530745 extend-filesystems[1278]: Found sr0 May 13 00:43:40.530745 extend-filesystems[1278]: Found vda May 13 00:43:40.530745 extend-filesystems[1278]: Found vda1 May 13 00:43:40.534520 extend-filesystems[1278]: Found vda2 May 13 00:43:40.534520 extend-filesystems[1278]: Found vda3 May 13 00:43:40.534520 extend-filesystems[1278]: Found usr May 13 00:43:40.534520 extend-filesystems[1278]: Found vda4 May 13 00:43:40.534520 extend-filesystems[1278]: Found vda6 May 13 00:43:40.534520 extend-filesystems[1278]: Found vda7 May 13 00:43:40.534520 extend-filesystems[1278]: Found vda9 May 13 00:43:40.534520 extend-filesystems[1278]: Checking size of /dev/vda9 May 13 00:43:40.561757 extend-filesystems[1278]: Resized partition /dev/vda9 May 13 00:43:40.536125 systemd[1]: Started dbus.service. May 13 00:43:40.535919 dbus-daemon[1275]: [system] SELinux support is enabled May 13 00:43:40.564199 extend-filesystems[1326]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:43:40.568620 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:43:40.538963 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:43:40.538982 systemd[1]: Reached target system-config.target. 
May 13 00:43:40.540413 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:43:40.540426 systemd[1]: Reached target user-config.target. May 13 00:43:40.545015 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:43:40.545242 systemd[1]: Finished motdgen.service. May 13 00:43:40.581573 update_engine[1288]: I0513 00:43:40.581369 1288 main.cc:92] Flatcar Update Engine starting May 13 00:43:40.588805 update_engine[1288]: I0513 00:43:40.583295 1288 update_check_scheduler.cc:74] Next update check in 11m38s May 13 00:43:40.583274 systemd[1]: Started update-engine.service. May 13 00:43:40.587654 systemd[1]: Started locksmithd.service. May 13 00:43:40.589026 env[1298]: time="2025-05-13T00:43:40.588906721Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:43:40.594379 systemd-logind[1287]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:43:40.595445 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:43:40.595474 systemd-logind[1287]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:43:40.595686 systemd-logind[1287]: New seat seat0. May 13 00:43:40.601033 systemd[1]: Started systemd-logind.service. May 13 00:43:40.618903 env[1298]: time="2025-05-13T00:43:40.609799465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:43:40.618903 env[1298]: time="2025-05-13T00:43:40.618846311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:43:40.623794 extend-filesystems[1326]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:43:40.623794 extend-filesystems[1326]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:43:40.623794 extend-filesystems[1326]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622017301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622049191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622278792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622292598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622303358Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622311683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622368510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622574496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622696996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:43:40.628083 env[1298]: time="2025-05-13T00:43:40.622709840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:43:40.619844 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:43:40.628325 extend-filesystems[1278]: Resized filesystem in /dev/vda9 May 13 00:43:40.629358 env[1298]: time="2025-05-13T00:43:40.622746629Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:43:40.629358 env[1298]: time="2025-05-13T00:43:40.622756207Z" level=info msg="metadata content store policy set" policy=shared May 13 00:43:40.620059 systemd[1]: Finished extend-filesystems.service. May 13 00:43:40.630760 bash[1335]: Updated "/home/core/.ssh/authorized_keys" May 13 00:43:40.631481 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:43:40.635041 env[1298]: time="2025-05-13T00:43:40.634893302Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:43:40.635041 env[1298]: time="2025-05-13T00:43:40.634928348Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:43:40.635041 env[1298]: time="2025-05-13T00:43:40.634940220Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:43:40.635041 env[1298]: time="2025-05-13T00:43:40.634972731Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:43:40.635041 env[1298]: time="2025-05-13T00:43:40.634986056Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:43:40.635041 env[1298]: time="2025-05-13T00:43:40.635007396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:43:40.635041 env[1298]: time="2025-05-13T00:43:40.635018507Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:43:40.635041 env[1298]: time="2025-05-13T00:43:40.635030559Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:43:40.635041 env[1298]: time="2025-05-13T00:43:40.635042141Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:43:40.635335 env[1298]: time="2025-05-13T00:43:40.635056528Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 May 13 00:43:40.635335 env[1298]: time="2025-05-13T00:43:40.635068481Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:43:40.635335 env[1298]: time="2025-05-13T00:43:40.635079401Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:43:40.635335 env[1298]: time="2025-05-13T00:43:40.635175431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:43:40.635335 env[1298]: time="2025-05-13T00:43:40.635241235Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:43:40.635635 env[1298]: time="2025-05-13T00:43:40.635554182Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:43:40.635635 env[1298]: time="2025-05-13T00:43:40.635581042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:43:40.635635 env[1298]: time="2025-05-13T00:43:40.635592334Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:43:40.635635 env[1298]: time="2025-05-13T00:43:40.635639622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:43:40.635780 env[1298]: time="2025-05-13T00:43:40.635651695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:43:40.635780 env[1298]: time="2025-05-13T00:43:40.635663217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:43:40.635780 env[1298]: time="2025-05-13T00:43:40.635673436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:43:40.635780 env[1298]: time="2025-05-13T00:43:40.635683455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:43:40.635780 env[1298]: time="2025-05-13T00:43:40.635695096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:43:40.635780 env[1298]: time="2025-05-13T00:43:40.635705075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:43:40.635780 env[1298]: time="2025-05-13T00:43:40.635714893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:43:40.635780 env[1298]: time="2025-05-13T00:43:40.635726435Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:43:40.636013 env[1298]: time="2025-05-13T00:43:40.635825731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:43:40.636013 env[1298]: time="2025-05-13T00:43:40.635841551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:43:40.636013 env[1298]: time="2025-05-13T00:43:40.635853484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:43:40.636013 env[1298]: time="2025-05-13T00:43:40.635863603Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 13 00:43:40.636013 env[1298]: time="2025-05-13T00:43:40.635876557Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:43:40.636013 env[1298]: time="2025-05-13T00:43:40.635885704Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:43:40.636013 env[1298]: time="2025-05-13T00:43:40.635904038Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:43:40.636013 env[1298]: time="2025-05-13T00:43:40.635940377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:43:40.636249 env[1298]: time="2025-05-13T00:43:40.636105526Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:43:40.636249 env[1298]: time="2025-05-13T00:43:40.636155110Z" level=info msg="Connect containerd service" May 13 00:43:40.636249 env[1298]: time="2025-05-13T00:43:40.636187701Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:43:40.637118 env[1298]: time="2025-05-13T00:43:40.636741299Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:43:40.639066 
env[1298]: time="2025-05-13T00:43:40.639041756Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:43:40.639278 env[1298]: time="2025-05-13T00:43:40.639246210Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:43:40.639488 systemd[1]: Started containerd.service. May 13 00:43:40.641048 env[1298]: time="2025-05-13T00:43:40.641029957Z" level=info msg="containerd successfully booted in 0.058440s" May 13 00:43:40.642360 env[1298]: time="2025-05-13T00:43:40.641862639Z" level=info msg="Start subscribing containerd event" May 13 00:43:40.642360 env[1298]: time="2025-05-13T00:43:40.641935346Z" level=info msg="Start recovering state" May 13 00:43:40.642360 env[1298]: time="2025-05-13T00:43:40.642010707Z" level=info msg="Start event monitor" May 13 00:43:40.642360 env[1298]: time="2025-05-13T00:43:40.642030404Z" level=info msg="Start snapshots syncer" May 13 00:43:40.642360 env[1298]: time="2025-05-13T00:43:40.642045943Z" level=info msg="Start cni network conf syncer for default" May 13 00:43:40.642360 env[1298]: time="2025-05-13T00:43:40.642059078Z" level=info msg="Start streaming server" May 13 00:43:40.653984 locksmithd[1336]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:43:40.893991 sshd_keygen[1315]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:43:40.912000 systemd[1]: Finished sshd-keygen.service. May 13 00:43:40.914318 systemd[1]: Starting issuegen.service... May 13 00:43:40.919660 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:43:40.919869 systemd[1]: Finished issuegen.service. May 13 00:43:40.921968 systemd[1]: Starting systemd-user-sessions.service... May 13 00:43:40.928701 systemd[1]: Finished systemd-user-sessions.service. May 13 00:43:40.930553 tar[1296]: linux-amd64/LICENSE May 13 00:43:40.930553 tar[1296]: linux-amd64/README.md May 13 00:43:40.930888 systemd[1]: Started getty@tty1.service. May 13 00:43:40.932696 systemd[1]: Started serial-getty@ttyS0.service. May 13 00:43:40.933731 systemd[1]: Reached target getty.target. May 13 00:43:40.934762 systemd-networkd[1080]: eth0: Gained IPv6LL May 13 00:43:40.937137 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:43:40.939107 systemd[1]: Reached target network-online.target. May 13 00:43:40.941917 systemd[1]: Starting kubelet.service... May 13 00:43:40.943655 systemd[1]: Finished prepare-helm.service. May 13 00:43:41.524710 systemd[1]: Started kubelet.service. May 13 00:43:41.526176 systemd[1]: Reached target multi-user.target. May 13 00:43:41.528502 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:43:41.535184 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:43:41.535543 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:43:41.539572 systemd[1]: Startup finished in 5.221s (kernel) + 5.243s (userspace) = 10.464s. May 13 00:43:41.982478 kubelet[1377]: E0513 00:43:41.982369 1377 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:43:41.984152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:43:41.984285 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 00:43:44.737658 systemd[1]: Created slice system-sshd.slice. May 13 00:43:44.738705 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:56656.service. May 13 00:43:44.781612 sshd[1388]: Accepted publickey for core from 10.0.0.1 port 56656 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:43:44.783090 sshd[1388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:44.791112 systemd-logind[1287]: New session 1 of user core. May 13 00:43:44.792105 systemd[1]: Created slice user-500.slice. May 13 00:43:44.793022 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:43:44.801151 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:43:44.802484 systemd[1]: Starting user@500.service... May 13 00:43:44.805071 (systemd)[1393]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:44.870399 systemd[1393]: Queued start job for default target default.target. May 13 00:43:44.870653 systemd[1393]: Reached target paths.target. May 13 00:43:44.870674 systemd[1393]: Reached target sockets.target. May 13 00:43:44.870689 systemd[1393]: Reached target timers.target. May 13 00:43:44.870703 systemd[1393]: Reached target basic.target. May 13 00:43:44.870744 systemd[1393]: Reached target default.target. May 13 00:43:44.870771 systemd[1393]: Startup finished in 61ms. May 13 00:43:44.870862 systemd[1]: Started user@500.service. May 13 00:43:44.871923 systemd[1]: Started session-1.scope. May 13 00:43:44.921612 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:56664.service. May 13 00:43:44.962066 sshd[1402]: Accepted publickey for core from 10.0.0.1 port 56664 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:43:44.963191 sshd[1402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:44.966815 systemd-logind[1287]: New session 2 of user core. May 13 00:43:44.967723 systemd[1]: Started session-2.scope. May 13 00:43:45.024886 sshd[1402]: pam_unix(sshd:session): session closed for user core May 13 00:43:45.027605 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:56668.service. May 13 00:43:45.028146 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:56664.service: Deactivated successfully. May 13 00:43:45.029621 systemd-logind[1287]: Session 2 logged out. Waiting for processes to exit. May 13 00:43:45.029765 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:43:45.030841 systemd-logind[1287]: Removed session 2. May 13 00:43:45.066233 sshd[1408]: Accepted publickey for core from 10.0.0.1 port 56668 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:43:45.067353 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:45.070652 systemd-logind[1287]: New session 3 of user core. May 13 00:43:45.071503 systemd[1]: Started session-3.scope. May 13 00:43:45.122518 sshd[1408]: pam_unix(sshd:session): session closed for user core May 13 00:43:45.125258 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:56672.service. May 13 00:43:45.125815 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:56668.service: Deactivated successfully. May 13 00:43:45.126920 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:43:45.127379 systemd-logind[1287]: Session 3 logged out. Waiting for processes to exit. May 13 00:43:45.128236 systemd-logind[1287]: Removed session 3. 
May 13 00:43:45.164011 sshd[1415]: Accepted publickey for core from 10.0.0.1 port 56672 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:43:45.164791 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:45.167603 systemd-logind[1287]: New session 4 of user core. May 13 00:43:45.168233 systemd[1]: Started session-4.scope. May 13 00:43:45.221155 sshd[1415]: pam_unix(sshd:session): session closed for user core May 13 00:43:45.223231 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:56686.service. May 13 00:43:45.223612 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:56672.service: Deactivated successfully. May 13 00:43:45.224442 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:43:45.224525 systemd-logind[1287]: Session 4 logged out. Waiting for processes to exit. May 13 00:43:45.225177 systemd-logind[1287]: Removed session 4. May 13 00:43:45.262183 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 56686 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:43:45.263163 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:45.265942 systemd-logind[1287]: New session 5 of user core. May 13 00:43:45.266601 systemd[1]: Started session-5.scope. May 13 00:43:45.321384 sudo[1427]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:43:45.321603 sudo[1427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:43:45.339766 systemd[1]: Starting docker.service... May 13 00:43:45.371251 env[1439]: time="2025-05-13T00:43:45.371203506Z" level=info msg="Starting up" May 13 00:43:45.372704 env[1439]: time="2025-05-13T00:43:45.372661167Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:43:45.372704 env[1439]: time="2025-05-13T00:43:45.372689162Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:43:45.372788 env[1439]: time="2025-05-13T00:43:45.372713624Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:43:45.372788 env[1439]: time="2025-05-13T00:43:45.372725207Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:43:45.374436 env[1439]: time="2025-05-13T00:43:45.374396629Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:43:45.374436 env[1439]: time="2025-05-13T00:43:45.374413916Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:43:45.374536 env[1439]: time="2025-05-13T00:43:45.374454090Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:43:45.374536 env[1439]: time="2025-05-13T00:43:45.374464056Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:43:45.844238 env[1439]: time="2025-05-13T00:43:45.844194289Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 13 00:43:45.844238 env[1439]: time="2025-05-13T00:43:45.844218763Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 13 00:43:45.844461 env[1439]: time="2025-05-13T00:43:45.844341426Z" level=info msg="Loading containers: start." 
May 13 00:43:45.945453 kernel: Initializing XFRM netlink socket May 13 00:43:45.973122 env[1439]: time="2025-05-13T00:43:45.973081082Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 00:43:46.020352 systemd-networkd[1080]: docker0: Link UP May 13 00:43:46.035322 env[1439]: time="2025-05-13T00:43:46.035292581Z" level=info msg="Loading containers: done." May 13 00:43:46.045827 env[1439]: time="2025-05-13T00:43:46.045781632Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:43:46.045983 env[1439]: time="2025-05-13T00:43:46.045954891Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:43:46.046066 env[1439]: time="2025-05-13T00:43:46.046042552Z" level=info msg="Daemon has completed initialization" May 13 00:43:46.060672 systemd[1]: Started docker.service. May 13 00:43:46.066069 env[1439]: time="2025-05-13T00:43:46.066015976Z" level=info msg="API listen on /run/docker.sock" May 13 00:43:46.732809 env[1298]: time="2025-05-13T00:43:46.732735137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:43:47.351976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807917929.mount: Deactivated successfully. May 13 00:43:49.381508 env[1298]: time="2025-05-13T00:43:49.381429282Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:49.398972 env[1298]: time="2025-05-13T00:43:49.398879587Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:49.407774 env[1298]: time="2025-05-13T00:43:49.407600784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:49.411346 env[1298]: time="2025-05-13T00:43:49.411270252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:49.412284 env[1298]: time="2025-05-13T00:43:49.412227044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 00:43:49.423701 env[1298]: time="2025-05-13T00:43:49.423650460Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:43:51.817180 env[1298]: time="2025-05-13T00:43:51.815852244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:51.819867 env[1298]: time="2025-05-13T00:43:51.819788423Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:51.826913 env[1298]: time="2025-05-13T00:43:51.822193462Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:51.826913 env[1298]: time="2025-05-13T00:43:51.825901269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:51.827154 env[1298]: time="2025-05-13T00:43:51.827039882Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 00:43:51.846809 env[1298]: time="2025-05-13T00:43:51.846742140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:43:52.053114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:43:52.053382 systemd[1]: Stopped kubelet.service. May 13 00:43:52.055323 systemd[1]: Starting kubelet.service... May 13 00:43:52.174023 systemd[1]: Started kubelet.service. May 13 00:43:52.408682 kubelet[1600]: E0513 00:43:52.408588 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:43:52.412802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:43:52.413007 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:43:53.561967 env[1298]: time="2025-05-13T00:43:53.561887651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:53.564495 env[1298]: time="2025-05-13T00:43:53.564402094Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:53.567046 env[1298]: time="2025-05-13T00:43:53.567003404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:53.569437 env[1298]: time="2025-05-13T00:43:53.569332437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:53.570289 env[1298]: time="2025-05-13T00:43:53.570223508Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 00:43:53.581790 env[1298]: time="2025-05-13T00:43:53.581739487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:43:54.806184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3790907979.mount: Deactivated successfully. 
May 13 00:43:55.675937 env[1298]: time="2025-05-13T00:43:55.675856636Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:55.678120 env[1298]: time="2025-05-13T00:43:55.678047112Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:55.679615 env[1298]: time="2025-05-13T00:43:55.679566694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:55.681082 env[1298]: time="2025-05-13T00:43:55.681047756Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:55.681463 env[1298]: time="2025-05-13T00:43:55.681398390Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 00:43:55.694171 env[1298]: time="2025-05-13T00:43:55.694110216Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:43:56.177822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221110136.mount: Deactivated successfully. May 13 00:43:58.285289 env[1298]: time="2025-05-13T00:43:58.285222695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:58.289164 env[1298]: time="2025-05-13T00:43:58.289089463Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:58.293502 env[1298]: time="2025-05-13T00:43:58.292927729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:58.297353 env[1298]: time="2025-05-13T00:43:58.296864748Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:58.297353 env[1298]: time="2025-05-13T00:43:58.297255018Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:43:58.316362 env[1298]: time="2025-05-13T00:43:58.316311806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:43:58.991751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717173134.mount: Deactivated successfully. 
May 13 00:43:59.002874 env[1298]: time="2025-05-13T00:43:59.002173323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:59.006253 env[1298]: time="2025-05-13T00:43:59.006129931Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:59.010989 env[1298]: time="2025-05-13T00:43:59.008940684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:59.011585 env[1298]: time="2025-05-13T00:43:59.011479222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:59.012385 env[1298]: time="2025-05-13T00:43:59.012245959Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 00:43:59.028462 env[1298]: time="2025-05-13T00:43:59.027503814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:43:59.644065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607911425.mount: Deactivated successfully. May 13 00:44:02.553102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:44:02.553337 systemd[1]: Stopped kubelet.service. May 13 00:44:02.554814 systemd[1]: Starting kubelet.service... May 13 00:44:02.696352 systemd[1]: Started kubelet.service. May 13 00:44:02.740804 kubelet[1643]: E0513 00:44:02.740743 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:44:02.742746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:44:02.742931 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 00:44:04.517229 env[1298]: time="2025-05-13T00:44:04.517130372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:04.519651 env[1298]: time="2025-05-13T00:44:04.519593688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:04.521573 env[1298]: time="2025-05-13T00:44:04.521511413Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:04.526676 env[1298]: time="2025-05-13T00:44:04.526621148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:04.527587 env[1298]: time="2025-05-13T00:44:04.527537312Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 00:44:06.870560 systemd[1]: Stopped kubelet.service. May 13 00:44:06.872792 systemd[1]: Starting kubelet.service... May 13 00:44:06.889553 systemd[1]: Reloading. May 13 00:44:06.973382 /usr/lib/systemd/system-generators/torcx-generator[1753]: time="2025-05-13T00:44:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:44:06.973491 /usr/lib/systemd/system-generators/torcx-generator[1753]: time="2025-05-13T00:44:06Z" level=info msg="torcx already run" May 13 00:44:07.235814 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:44:07.235843 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:44:07.263053 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:44:07.374801 systemd[1]: Started kubelet.service. May 13 00:44:07.376889 systemd[1]: Stopping kubelet.service... May 13 00:44:07.377292 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:44:07.377718 systemd[1]: Stopped kubelet.service. May 13 00:44:07.379965 systemd[1]: Starting kubelet.service... May 13 00:44:07.466512 systemd[1]: Started kubelet.service. May 13 00:44:07.524540 kubelet[1814]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:44:07.524540 kubelet[1814]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 13 00:44:07.524540 kubelet[1814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:44:07.524540 kubelet[1814]: I0513 00:44:07.524124 1814 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:44:07.819367 kubelet[1814]: I0513 00:44:07.819310 1814 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:44:07.819367 kubelet[1814]: I0513 00:44:07.819358 1814 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:44:07.819367 kubelet[1814]: I0513 00:44:07.819657 1814 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:44:07.852818 kubelet[1814]: I0513 00:44:07.851085 1814 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:44:07.854061 kubelet[1814]: E0513 00:44:07.853170 1814 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:07.874689 kubelet[1814]: I0513 00:44:07.873134 1814 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:44:07.879378 kubelet[1814]: I0513 00:44:07.879281 1814 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:44:07.879641 kubelet[1814]: I0513 00:44:07.879364 1814 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:44:07.881065 kubelet[1814]: I0513 00:44:07.881018 1814 topology_manager.go:138] "Creating topology manager with 
none policy" May 13 00:44:07.881065 kubelet[1814]: I0513 00:44:07.881056 1814 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:44:07.881253 kubelet[1814]: I0513 00:44:07.881225 1814 state_mem.go:36] "Initialized new in-memory state store" May 13 00:44:07.882263 kubelet[1814]: I0513 00:44:07.882224 1814 kubelet.go:400] "Attempting to sync node with API server" May 13 00:44:07.882263 kubelet[1814]: I0513 00:44:07.882258 1814 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:44:07.882372 kubelet[1814]: I0513 00:44:07.882287 1814 kubelet.go:312] "Adding apiserver pod source" May 13 00:44:07.883038 kubelet[1814]: I0513 00:44:07.882979 1814 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:44:07.883111 kubelet[1814]: W0513 00:44:07.882964 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:07.884015 kubelet[1814]: E0513 00:44:07.883975 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:07.885517 kubelet[1814]: W0513 00:44:07.885460 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:07.885802 kubelet[1814]: E0513 00:44:07.885769 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:07.892933 kubelet[1814]: I0513 00:44:07.892867 1814 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:44:07.899833 kubelet[1814]: I0513 00:44:07.899619 1814 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:44:07.900005 kubelet[1814]: W0513 00:44:07.899888 1814 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:44:07.901116 kubelet[1814]: I0513 00:44:07.901089 1814 server.go:1264] "Started kubelet" May 13 00:44:07.901687 kubelet[1814]: I0513 00:44:07.901205 1814 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:44:07.906025 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 13 00:44:07.906123 kubelet[1814]: I0513 00:44:07.905541 1814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:44:07.907715 kubelet[1814]: I0513 00:44:07.906937 1814 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:44:07.907715 kubelet[1814]: I0513 00:44:07.907293 1814 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:44:07.908589 kubelet[1814]: I0513 00:44:07.908551 1814 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:44:07.909470 kubelet[1814]: I0513 00:44:07.909442 1814 server.go:455] "Adding debug handlers to kubelet server" May 13 00:44:07.911397 kubelet[1814]: E0513 00:44:07.911364 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms" May 13 00:44:07.911544 kubelet[1814]: I0513 00:44:07.911516 1814 factory.go:221] Registration of the systemd container factory successfully May 13 00:44:07.911605 kubelet[1814]: I0513 00:44:07.911589 1814 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:44:07.912201 kubelet[1814]: I0513 00:44:07.912179 1814 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:44:07.912879 kubelet[1814]: W0513 00:44:07.912558 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:07.912964 kubelet[1814]: E0513 00:44:07.912897 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:07.913651 kubelet[1814]: I0513 00:44:07.913631 1814 reconciler.go:26] "Reconciler: start to sync state" May 13 00:44:07.914536 kubelet[1814]: I0513 00:44:07.914515 1814 factory.go:221] Registration of the containerd container factory successfully May 13 00:44:07.922027 kubelet[1814]: E0513 00:44:07.921819 1814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eef823046b5a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:44:07.901058473 +0000 UTC m=+0.429845299,LastTimestamp:2025-05-13 00:44:07.901058473 +0000 UTC m=+0.429845299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:44:07.922299 kubelet[1814]: E0513 00:44:07.922212 1814 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:44:07.939743 kubelet[1814]: I0513 00:44:07.939581 1814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:44:07.940583 kubelet[1814]: I0513 00:44:07.940346 1814 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:44:07.940583 kubelet[1814]: I0513 00:44:07.940364 1814 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:44:07.940583 kubelet[1814]: I0513 00:44:07.940383 1814 state_mem.go:36] "Initialized new in-memory state store" May 13 00:44:07.943049 kubelet[1814]: I0513 00:44:07.942953 1814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:44:07.943049 kubelet[1814]: I0513 00:44:07.943013 1814 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:44:07.943049 kubelet[1814]: I0513 00:44:07.943055 1814 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:44:07.953815 kubelet[1814]: E0513 00:44:07.943120 1814 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:44:07.953815 kubelet[1814]: W0513 00:44:07.944262 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:07.953815 kubelet[1814]: E0513 00:44:07.944298 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:08.010314 kubelet[1814]: I0513 00:44:08.010181 1814 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:44:08.010897 kubelet[1814]: E0513 00:44:08.010758 1814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" May 13 00:44:08.044472 kubelet[1814]: E0513 00:44:08.044353 1814 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:44:08.114735 kubelet[1814]: E0513 00:44:08.112343 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms" May 13 00:44:08.214259 kubelet[1814]: I0513 00:44:08.214069 1814 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:44:08.214896 kubelet[1814]: E0513 00:44:08.214856 1814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" May 13 00:44:08.245152 kubelet[1814]: E0513 00:44:08.245086 1814 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:44:08.513264 kubelet[1814]: E0513 00:44:08.513116 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms" May 13 00:44:08.618568 kubelet[1814]: I0513 00:44:08.618504 1814 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:44:08.624719 kubelet[1814]: E0513 00:44:08.624641 1814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" May 13 00:44:08.645568 kubelet[1814]: E0513 00:44:08.645499 1814 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:44:08.800146 kubelet[1814]: W0513 00:44:08.799845 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:08.800146 kubelet[1814]: E0513 00:44:08.799899 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:08.878160 kubelet[1814]: I0513 00:44:08.877960 1814 policy_none.go:49] "None policy: Start" May 13 00:44:08.880454 kubelet[1814]: I0513 00:44:08.880390 1814 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:44:08.880693 kubelet[1814]: I0513 00:44:08.880653 1814 state_mem.go:35] "Initializing new in-memory state store" May 13 00:44:08.929862 kubelet[1814]: I0513 00:44:08.929820 1814 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:44:08.930316 kubelet[1814]: I0513 00:44:08.930258 1814 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:44:08.930540 kubelet[1814]: I0513 00:44:08.930525 1814 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:44:08.936456 kubelet[1814]: E0513 00:44:08.933748 1814 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:44:08.945089 kubelet[1814]: W0513 00:44:08.944897 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:08.945089 kubelet[1814]: E0513 00:44:08.945024 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:09.313866 kubelet[1814]: E0513 00:44:09.313701 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="1.6s" May 13 00:44:09.426906 kubelet[1814]: I0513 00:44:09.426708 1814 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 
00:44:09.427303 kubelet[1814]: E0513 00:44:09.427204 1814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" May 13 00:44:09.427677 kubelet[1814]: W0513 00:44:09.427566 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:09.427677 kubelet[1814]: E0513 00:44:09.427641 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:09.446360 kubelet[1814]: I0513 00:44:09.446166 1814 topology_manager.go:215] "Topology Admit Handler" podUID="fb05e315079c85554975244b04de582a" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:44:09.451687 kubelet[1814]: I0513 00:44:09.451220 1814 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:44:09.452458 kubelet[1814]: I0513 00:44:09.452439 1814 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:44:09.500676 kubelet[1814]: W0513 00:44:09.500581 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:09.500676 kubelet[1814]: E0513 00:44:09.500672 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:09.525490 kubelet[1814]: I0513 00:44:09.525076 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:09.525490 kubelet[1814]: I0513 00:44:09.525155 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb05e315079c85554975244b04de582a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb05e315079c85554975244b04de582a\") " pod="kube-system/kube-apiserver-localhost" May 13 00:44:09.525490 kubelet[1814]: I0513 00:44:09.525199 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb05e315079c85554975244b04de582a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb05e315079c85554975244b04de582a\") " pod="kube-system/kube-apiserver-localhost" May 13 00:44:09.525490 kubelet[1814]: I0513 00:44:09.525221 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fb05e315079c85554975244b04de582a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fb05e315079c85554975244b04de582a\") " pod="kube-system/kube-apiserver-localhost" May 13 00:44:09.525490 kubelet[1814]: I0513 00:44:09.525246 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:09.525861 kubelet[1814]: I0513 00:44:09.525268 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:09.525861 kubelet[1814]: I0513 00:44:09.525288 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:09.525861 kubelet[1814]: I0513 00:44:09.525309 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:09.525861 kubelet[1814]: I0513 00:44:09.525330 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:44:09.757935 kubelet[1814]: E0513 00:44:09.756864 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:09.757935 kubelet[1814]: E0513 00:44:09.756939 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:09.757935 kubelet[1814]: E0513 00:44:09.757320 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:09.758503 env[1298]: time="2025-05-13T00:44:09.757610190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fb05e315079c85554975244b04de582a,Namespace:kube-system,Attempt:0,}" May 13 00:44:09.758503 env[1298]: time="2025-05-13T00:44:09.758007851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:44:09.758503 env[1298]: time="2025-05-13T00:44:09.758233045Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:44:09.771232 kubelet[1814]: E0513 00:44:09.771022 1814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eef823046b5a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:44:07.901058473 +0000 UTC m=+0.429845299,LastTimestamp:2025-05-13 00:44:07.901058473 +0000 UTC m=+0.429845299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:44:10.012107 kubelet[1814]: E0513 00:44:10.011938 1814 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:10.500624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1939570541.mount: Deactivated successfully. May 13 00:44:10.508841 env[1298]: time="2025-05-13T00:44:10.508771464Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.511500 env[1298]: time="2025-05-13T00:44:10.511457532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.514501 env[1298]: time="2025-05-13T00:44:10.513536197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.517749 env[1298]: time="2025-05-13T00:44:10.517690006Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.519778 env[1298]: time="2025-05-13T00:44:10.519692091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.521491 env[1298]: time="2025-05-13T00:44:10.521445346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.522683 env[1298]: time="2025-05-13T00:44:10.522646897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.524139 env[1298]: time="2025-05-13T00:44:10.524114898Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 13 00:44:10.526078 env[1298]: time="2025-05-13T00:44:10.525993752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.527835 env[1298]: time="2025-05-13T00:44:10.527210034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.530315 env[1298]: time="2025-05-13T00:44:10.530256522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.532314 env[1298]: time="2025-05-13T00:44:10.532271544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:10.564262 kubelet[1814]: W0513 00:44:10.563595 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:10.564262 kubelet[1814]: E0513 00:44:10.563642 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused May 13 00:44:10.567778 env[1298]: time="2025-05-13T00:44:10.567635484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:44:10.567778 env[1298]: time="2025-05-13T00:44:10.567705524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:44:10.567945 env[1298]: time="2025-05-13T00:44:10.567721981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:44:10.568088 env[1298]: time="2025-05-13T00:44:10.568025388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8856f4761243ce3caf5a6ef86041b5590d1300759bda1c1fa82e5ed5b590026c pid=1860 runtime=io.containerd.runc.v2 May 13 00:44:10.569000 env[1298]: time="2025-05-13T00:44:10.568941031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:44:10.569182 env[1298]: time="2025-05-13T00:44:10.569148584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:44:10.569182 env[1298]: time="2025-05-13T00:44:10.569165221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:44:10.569458 env[1298]: time="2025-05-13T00:44:10.569423318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8969dc486369627e4492ed8283ecf1134d5c4fcf1f4fe3de912e0ad01c2b0b5 pid=1866 runtime=io.containerd.runc.v2 May 13 00:44:10.580356 env[1298]: time="2025-05-13T00:44:10.580197047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:44:10.580356 env[1298]: time="2025-05-13T00:44:10.580248954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:44:10.580544 env[1298]: time="2025-05-13T00:44:10.580268349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:44:10.580635 env[1298]: time="2025-05-13T00:44:10.580553213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d818d9843a135053cbaaab7df1859f646578fa224208d0cf363f6060613f9fcb pid=1902 runtime=io.containerd.runc.v2 May 13 00:44:10.629868 env[1298]: time="2025-05-13T00:44:10.629815632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8969dc486369627e4492ed8283ecf1134d5c4fcf1f4fe3de912e0ad01c2b0b5\"" May 13 00:44:10.630705 kubelet[1814]: E0513 00:44:10.630647 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:10.633979 env[1298]: time="2025-05-13T00:44:10.633935825Z" level=info msg="CreateContainer within sandbox \"b8969dc486369627e4492ed8283ecf1134d5c4fcf1f4fe3de912e0ad01c2b0b5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:44:10.637943 env[1298]: time="2025-05-13T00:44:10.637873416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"8856f4761243ce3caf5a6ef86041b5590d1300759bda1c1fa82e5ed5b590026c\"" May 13 00:44:10.639369 kubelet[1814]: E0513 00:44:10.639327 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:10.642154 env[1298]: time="2025-05-13T00:44:10.642108657Z" level=info msg="CreateContainer within sandbox \"8856f4761243ce3caf5a6ef86041b5590d1300759bda1c1fa82e5ed5b590026c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:44:10.646115 env[1298]: time="2025-05-13T00:44:10.646042909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fb05e315079c85554975244b04de582a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d818d9843a135053cbaaab7df1859f646578fa224208d0cf363f6060613f9fcb\"" May 13 00:44:10.646938 kubelet[1814]: E0513 00:44:10.646912 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:10.650174 env[1298]: time="2025-05-13T00:44:10.650099259Z" level=info msg="CreateContainer within sandbox 
\"d818d9843a135053cbaaab7df1859f646578fa224208d0cf363f6060613f9fcb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:44:10.674890 env[1298]: time="2025-05-13T00:44:10.674786529Z" level=info msg="CreateContainer within sandbox \"b8969dc486369627e4492ed8283ecf1134d5c4fcf1f4fe3de912e0ad01c2b0b5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b4ac7bbdc882415fecfc8fd84703e22524f2013a075c1060dbe2e6834278fa45\"" May 13 00:44:10.675858 env[1298]: time="2025-05-13T00:44:10.675805238Z" level=info msg="StartContainer for \"b4ac7bbdc882415fecfc8fd84703e22524f2013a075c1060dbe2e6834278fa45\"" May 13 00:44:10.684679 env[1298]: time="2025-05-13T00:44:10.684585724Z" level=info msg="CreateContainer within sandbox \"8856f4761243ce3caf5a6ef86041b5590d1300759bda1c1fa82e5ed5b590026c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e511c9863973990db0e677540460467cb5898899e27d0fb309c925cf61f17ad4\"" May 13 00:44:10.685335 env[1298]: time="2025-05-13T00:44:10.685295441Z" level=info msg="StartContainer for \"e511c9863973990db0e677540460467cb5898899e27d0fb309c925cf61f17ad4\"" May 13 00:44:10.706006 env[1298]: time="2025-05-13T00:44:10.705945796Z" level=info msg="CreateContainer within sandbox \"d818d9843a135053cbaaab7df1859f646578fa224208d0cf363f6060613f9fcb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95a27ed43ca6dccd4585d66f1d499aa8323e7fac903e32faa5865d4431aed43c\"" May 13 00:44:10.707006 env[1298]: time="2025-05-13T00:44:10.706939261Z" level=info msg="StartContainer for \"95a27ed43ca6dccd4585d66f1d499aa8323e7fac903e32faa5865d4431aed43c\"" May 13 00:44:10.770937 env[1298]: time="2025-05-13T00:44:10.769548755Z" level=info msg="StartContainer for \"b4ac7bbdc882415fecfc8fd84703e22524f2013a075c1060dbe2e6834278fa45\" returns successfully" May 13 00:44:10.776485 env[1298]: time="2025-05-13T00:44:10.776373720Z" level=info msg="StartContainer for \"e511c9863973990db0e677540460467cb5898899e27d0fb309c925cf61f17ad4\" returns successfully" May 13 00:44:10.808769 env[1298]: time="2025-05-13T00:44:10.808664426Z" level=info msg="StartContainer for \"95a27ed43ca6dccd4585d66f1d499aa8323e7fac903e32faa5865d4431aed43c\" returns successfully" May 13 00:44:10.953123 kubelet[1814]: E0513 00:44:10.953022 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:10.956213 kubelet[1814]: E0513 00:44:10.956188 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:10.958145 kubelet[1814]: E0513 00:44:10.958124 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:11.029317 kubelet[1814]: I0513 00:44:11.029206 1814 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:44:11.960384 kubelet[1814]: E0513 00:44:11.960351 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:12.300859 kubelet[1814]: E0513 00:44:12.300706 1814 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" 
node="localhost" May 13 00:44:12.544290 kubelet[1814]: I0513 00:44:12.544222 1814 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:44:12.602750 kubelet[1814]: E0513 00:44:12.602688 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:12.703305 kubelet[1814]: E0513 00:44:12.703238 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:12.804442 kubelet[1814]: E0513 00:44:12.804357 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:12.905552 kubelet[1814]: E0513 00:44:12.905424 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.006542 kubelet[1814]: E0513 00:44:13.006486 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.107538 kubelet[1814]: E0513 00:44:13.107477 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.208536 kubelet[1814]: E0513 00:44:13.208334 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.309201 kubelet[1814]: E0513 00:44:13.309148 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.410305 kubelet[1814]: E0513 00:44:13.410166 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.510820 kubelet[1814]: E0513 00:44:13.510669 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.611349 kubelet[1814]: E0513 00:44:13.611276 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.711880 kubelet[1814]: E0513 00:44:13.711828 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.812802 kubelet[1814]: E0513 00:44:13.812760 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:13.913181 kubelet[1814]: E0513 00:44:13.913108 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:14.013637 kubelet[1814]: E0513 00:44:14.013574 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:14.114387 kubelet[1814]: E0513 00:44:14.114229 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:14.214484 kubelet[1814]: E0513 00:44:14.214376 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:14.315070 kubelet[1814]: E0513 00:44:14.315023 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:14.416263 kubelet[1814]: E0513 00:44:14.416096 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:14.462945 kubelet[1814]: E0513 00:44:14.462897 1814 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:14.516527 kubelet[1814]: E0513 00:44:14.516475 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:14.617265 kubelet[1814]: E0513 00:44:14.617209 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:44:14.888656 kubelet[1814]: I0513 00:44:14.888596 1814 apiserver.go:52] "Watching apiserver" May 13 00:44:14.913368 kubelet[1814]: I0513 00:44:14.913321 1814 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:44:14.917221 systemd[1]: Reloading. May 13 00:44:14.974770 /usr/lib/systemd/system-generators/torcx-generator[2112]: time="2025-05-13T00:44:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:44:14.974810 /usr/lib/systemd/system-generators/torcx-generator[2112]: time="2025-05-13T00:44:14Z" level=info msg="torcx already run" May 13 00:44:15.072706 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:44:15.072737 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:44:15.100240 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:44:15.221338 systemd[1]: Stopping kubelet.service... May 13 00:44:15.221561 kubelet[1814]: E0513 00:44:15.221214 1814 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.183eef823046b5a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:44:07.901058473 +0000 UTC m=+0.429845299,LastTimestamp:2025-05-13 00:44:07.901058473 +0000 UTC m=+0.429845299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:44:15.240947 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:44:15.241345 systemd[1]: Stopped kubelet.service. May 13 00:44:15.243697 systemd[1]: Starting kubelet.service... May 13 00:44:15.344951 systemd[1]: Started kubelet.service. May 13 00:44:15.402085 kubelet[2168]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:44:15.402085 kubelet[2168]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 13 00:44:15.402085 kubelet[2168]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:44:15.402085 kubelet[2168]: I0513 00:44:15.399185 2168 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:44:15.430589 kubelet[2168]: I0513 00:44:15.430533 2168 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:44:15.430589 kubelet[2168]: I0513 00:44:15.430563 2168 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:44:15.430782 kubelet[2168]: I0513 00:44:15.430753 2168 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:44:15.431903 kubelet[2168]: I0513 00:44:15.431869 2168 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:44:15.432924 kubelet[2168]: I0513 00:44:15.432893 2168 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:44:15.440369 kubelet[2168]: I0513 00:44:15.440333 2168 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:44:15.441230 kubelet[2168]: I0513 00:44:15.441195 2168 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:44:15.441545 kubelet[2168]: I0513 00:44:15.441305 2168 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:44:15.441738 kubelet[2168]: I0513 00:44:15.441718 2168 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:44:15.441848 kubelet[2168]: I0513 00:44:15.441833 2168 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:44:15.441974 kubelet[2168]: I0513 00:44:15.441959 2168 
state_mem.go:36] "Initialized new in-memory state store" May 13 00:44:15.442157 kubelet[2168]: I0513 00:44:15.442143 2168 kubelet.go:400] "Attempting to sync node with API server" May 13 00:44:15.442255 kubelet[2168]: I0513 00:44:15.442230 2168 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:44:15.442360 kubelet[2168]: I0513 00:44:15.442345 2168 kubelet.go:312] "Adding apiserver pod source" May 13 00:44:15.442569 kubelet[2168]: I0513 00:44:15.442549 2168 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:44:15.443367 kubelet[2168]: I0513 00:44:15.443350 2168 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:44:15.443668 kubelet[2168]: I0513 00:44:15.443645 2168 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:44:15.444223 kubelet[2168]: I0513 00:44:15.444209 2168 server.go:1264] "Started kubelet" May 13 00:44:15.444450 kubelet[2168]: I0513 00:44:15.444413 2168 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:44:15.445737 kubelet[2168]: I0513 00:44:15.445683 2168 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:44:15.445951 kubelet[2168]: I0513 00:44:15.445927 2168 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:44:15.448753 kubelet[2168]: E0513 00:44:15.448725 2168 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:44:15.454142 kubelet[2168]: I0513 00:44:15.453589 2168 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:44:15.455875 kubelet[2168]: I0513 00:44:15.455152 2168 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:44:15.455875 kubelet[2168]: I0513 00:44:15.455600 2168 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:44:15.455875 kubelet[2168]: I0513 00:44:15.455776 2168 reconciler.go:26] "Reconciler: start to sync state" May 13 00:44:15.458660 kubelet[2168]: I0513 00:44:15.457482 2168 factory.go:221] Registration of the systemd container factory successfully May 13 00:44:15.458660 kubelet[2168]: I0513 00:44:15.457634 2168 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:44:15.458971 kubelet[2168]: I0513 00:44:15.458702 2168 server.go:455] "Adding debug handlers to kubelet server" May 13 00:44:15.461935 kubelet[2168]: I0513 00:44:15.461907 2168 factory.go:221] Registration of the containerd container factory successfully May 13 00:44:15.468944 kubelet[2168]: I0513 00:44:15.468867 2168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:44:15.470001 kubelet[2168]: I0513 00:44:15.469961 2168 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:44:15.470001 kubelet[2168]: I0513 00:44:15.470002 2168 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:44:15.470115 kubelet[2168]: I0513 00:44:15.470028 2168 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:44:15.470115 kubelet[2168]: E0513 00:44:15.470086 2168 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:44:15.519749 kubelet[2168]: I0513 00:44:15.519631 2168 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:44:15.519749 kubelet[2168]: I0513 00:44:15.519656 2168 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:44:15.519749 kubelet[2168]: I0513 00:44:15.519676 2168 state_mem.go:36] "Initialized new in-memory state store" May 13 00:44:15.520205 kubelet[2168]: I0513 00:44:15.519849 2168 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:44:15.520205 kubelet[2168]: I0513 00:44:15.519861 2168 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:44:15.520205 kubelet[2168]: I0513 00:44:15.519881 2168 policy_none.go:49] "None policy: Start" May 13 00:44:15.520885 kubelet[2168]: I0513 00:44:15.520861 2168 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:44:15.520885 kubelet[2168]: I0513 00:44:15.520890 2168 state_mem.go:35] "Initializing new in-memory state store" May 13 00:44:15.521173 kubelet[2168]: I0513 00:44:15.521153 2168 state_mem.go:75] "Updated machine memory state" May 13 00:44:15.522614 kubelet[2168]: I0513 00:44:15.522578 2168 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:44:15.522798 kubelet[2168]: I0513 00:44:15.522752 2168 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:44:15.522879 kubelet[2168]: I0513 00:44:15.522858 2168 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:44:15.559522 kubelet[2168]: I0513 00:44:15.559487 2168 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:44:15.569711 kubelet[2168]: I0513 00:44:15.569638 2168 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:44:15.569981 kubelet[2168]: I0513 00:44:15.569726 2168 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:44:15.570675 kubelet[2168]: I0513 00:44:15.570612 2168 topology_manager.go:215] "Topology Admit Handler" podUID="fb05e315079c85554975244b04de582a" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:44:15.570740 kubelet[2168]: I0513 00:44:15.570729 2168 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:44:15.570820 kubelet[2168]: I0513 00:44:15.570777 2168 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:44:15.757304 kubelet[2168]: I0513 00:44:15.757237 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb05e315079c85554975244b04de582a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fb05e315079c85554975244b04de582a\") " pod="kube-system/kube-apiserver-localhost" 
May 13 00:44:15.757304 kubelet[2168]: I0513 00:44:15.757285 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:15.757304 kubelet[2168]: I0513 00:44:15.757303 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:15.757304 kubelet[2168]: I0513 00:44:15.757317 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:15.757653 kubelet[2168]: I0513 00:44:15.757334 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:15.757653 kubelet[2168]: I0513 00:44:15.757378 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb05e315079c85554975244b04de582a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb05e315079c85554975244b04de582a\") " pod="kube-system/kube-apiserver-localhost" May 13 00:44:15.757653 kubelet[2168]: I0513 00:44:15.757391 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb05e315079c85554975244b04de582a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb05e315079c85554975244b04de582a\") " pod="kube-system/kube-apiserver-localhost" May 13 00:44:15.757653 kubelet[2168]: I0513 00:44:15.757551 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:44:15.757653 kubelet[2168]: I0513 00:44:15.757627 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:44:15.883849 kubelet[2168]: E0513 00:44:15.883782 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:15.886544 kubelet[2168]: E0513 00:44:15.886505 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:15.886845 kubelet[2168]: E0513 00:44:15.886804 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:16.443365 kubelet[2168]: I0513 00:44:16.443277 2168 apiserver.go:52] "Watching apiserver" May 13 00:44:16.455973 kubelet[2168]: I0513 00:44:16.455897 2168 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:44:16.486514 kubelet[2168]: E0513 00:44:16.486468 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:16.486709 kubelet[2168]: E0513 00:44:16.486661 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:16.493524 kubelet[2168]: E0513 00:44:16.493452 2168 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:44:16.494042 kubelet[2168]: E0513 00:44:16.494017 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:16.510948 kubelet[2168]: I0513 00:44:16.510881 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5108409790000001 podStartE2EDuration="1.510840979s" podCreationTimestamp="2025-05-13 00:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:44:16.510784653 +0000 UTC m=+1.159604967" watchObservedRunningTime="2025-05-13 00:44:16.510840979 +0000 UTC m=+1.159661282" May 13 00:44:16.526555 kubelet[2168]: I0513 00:44:16.526479 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5264547990000001 podStartE2EDuration="1.526454799s" podCreationTimestamp="2025-05-13 00:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:44:16.526249592 +0000 UTC m=+1.175069895" watchObservedRunningTime="2025-05-13 00:44:16.526454799 +0000 UTC m=+1.175275122" May 13 00:44:16.526760 kubelet[2168]: I0513 00:44:16.526619 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5266116429999999 podStartE2EDuration="1.526611643s" podCreationTimestamp="2025-05-13 00:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:44:16.519131642 +0000 UTC m=+1.167951945" watchObservedRunningTime="2025-05-13 00:44:16.526611643 +0000 UTC m=+1.175431956" May 13 00:44:17.264800 sudo[1427]: pam_unix(sudo:session): session closed for user root May 13 00:44:17.266515 sshd[1421]: pam_unix(sshd:session): session closed for user core May 13 00:44:17.268789 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:56686.service: Deactivated successfully. 
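The recurring dns.go:153 "Nameserver limits exceeded" events are the kubelet noting that the node's resolv.conf lists more than three nameservers; the glibc resolver, and the resolv.conf the kubelet hands to pods, only uses the first three, so later entries are silently dropped. Trimming the upstream DNS configuration to at most three servers quiets the warning; keeping just the surviving entries from the applied line in the log would look like:

    # /etc/resolv.conf (or the systemd-resolved / DHCP configuration that generates it)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8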
May 13 00:44:17.269754 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:44:17.269775 systemd-logind[1287]: Session 5 logged out. Waiting for processes to exit. May 13 00:44:17.270673 systemd-logind[1287]: Removed session 5. May 13 00:44:17.488438 kubelet[2168]: E0513 00:44:17.488390 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:25.447988 kubelet[2168]: E0513 00:44:25.447946 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:25.500241 kubelet[2168]: E0513 00:44:25.500206 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:25.855195 kubelet[2168]: E0513 00:44:25.855114 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:26.081691 update_engine[1288]: I0513 00:44:26.081629 1288 update_attempter.cc:509] Updating boot flags... May 13 00:44:26.215191 kubelet[2168]: E0513 00:44:26.215034 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:26.502456 kubelet[2168]: E0513 00:44:26.502333 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:26.502930 kubelet[2168]: E0513 00:44:26.502586 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:28.049151 kubelet[2168]: I0513 00:44:28.049102 2168 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:44:28.049662 env[1298]: time="2025-05-13T00:44:28.049532208Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
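"No cni config template is specified, wait for other system components to drop the config" is containerd saying that nothing exists under /etc/cni/net.d yet; with kube-flannel, an install-cni init container later drops that file from the flannel-cfg ConfigMap. The upstream default it writes is approximately the following (shown for context, not read from this node):

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

The cbr0 name, hairpinMode, and isDefaultGateway values match the delegate configuration that appears further down in this log when the first veth is wired up.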
May 13 00:44:28.049955 kubelet[2168]: I0513 00:44:28.049727 2168 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:44:29.109340 kubelet[2168]: I0513 00:44:29.109271 2168 topology_manager.go:215] "Topology Admit Handler" podUID="ad5e9d23-0220-475c-9908-6edcae6753e1" podNamespace="kube-system" podName="kube-proxy-gqbcx" May 13 00:44:29.125161 kubelet[2168]: I0513 00:44:29.125110 2168 topology_manager.go:215] "Topology Admit Handler" podUID="b4e7f51f-6bdd-48df-a992-f2ac12741f21" podNamespace="kube-flannel" podName="kube-flannel-ds-rwv2z" May 13 00:44:29.148200 kubelet[2168]: I0513 00:44:29.148152 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4e7f51f-6bdd-48df-a992-f2ac12741f21-xtables-lock\") pod \"kube-flannel-ds-rwv2z\" (UID: \"b4e7f51f-6bdd-48df-a992-f2ac12741f21\") " pod="kube-flannel/kube-flannel-ds-rwv2z" May 13 00:44:29.148200 kubelet[2168]: I0513 00:44:29.148198 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad5e9d23-0220-475c-9908-6edcae6753e1-lib-modules\") pod \"kube-proxy-gqbcx\" (UID: \"ad5e9d23-0220-475c-9908-6edcae6753e1\") " pod="kube-system/kube-proxy-gqbcx" May 13 00:44:29.148200 kubelet[2168]: I0513 00:44:29.148211 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad5e9d23-0220-475c-9908-6edcae6753e1-kube-proxy\") pod \"kube-proxy-gqbcx\" (UID: \"ad5e9d23-0220-475c-9908-6edcae6753e1\") " pod="kube-system/kube-proxy-gqbcx" May 13 00:44:29.148442 kubelet[2168]: I0513 00:44:29.148225 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nffqr\" (UniqueName: \"kubernetes.io/projected/ad5e9d23-0220-475c-9908-6edcae6753e1-kube-api-access-nffqr\") pod \"kube-proxy-gqbcx\" (UID: \"ad5e9d23-0220-475c-9908-6edcae6753e1\") " pod="kube-system/kube-proxy-gqbcx" May 13 00:44:29.148442 kubelet[2168]: I0513 00:44:29.148241 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b4e7f51f-6bdd-48df-a992-f2ac12741f21-cni-plugin\") pod \"kube-flannel-ds-rwv2z\" (UID: \"b4e7f51f-6bdd-48df-a992-f2ac12741f21\") " pod="kube-flannel/kube-flannel-ds-rwv2z" May 13 00:44:29.148442 kubelet[2168]: I0513 00:44:29.148257 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b4e7f51f-6bdd-48df-a992-f2ac12741f21-flannel-cfg\") pod \"kube-flannel-ds-rwv2z\" (UID: \"b4e7f51f-6bdd-48df-a992-f2ac12741f21\") " pod="kube-flannel/kube-flannel-ds-rwv2z" May 13 00:44:29.148442 kubelet[2168]: I0513 00:44:29.148270 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b4e7f51f-6bdd-48df-a992-f2ac12741f21-cni\") pod \"kube-flannel-ds-rwv2z\" (UID: \"b4e7f51f-6bdd-48df-a992-f2ac12741f21\") " pod="kube-flannel/kube-flannel-ds-rwv2z" May 13 00:44:29.148442 kubelet[2168]: I0513 00:44:29.148283 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4e7f51f-6bdd-48df-a992-f2ac12741f21-run\") pod 
\"kube-flannel-ds-rwv2z\" (UID: \"b4e7f51f-6bdd-48df-a992-f2ac12741f21\") " pod="kube-flannel/kube-flannel-ds-rwv2z" May 13 00:44:29.148573 kubelet[2168]: I0513 00:44:29.148295 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcqbc\" (UniqueName: \"kubernetes.io/projected/b4e7f51f-6bdd-48df-a992-f2ac12741f21-kube-api-access-dcqbc\") pod \"kube-flannel-ds-rwv2z\" (UID: \"b4e7f51f-6bdd-48df-a992-f2ac12741f21\") " pod="kube-flannel/kube-flannel-ds-rwv2z" May 13 00:44:29.148573 kubelet[2168]: I0513 00:44:29.148316 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad5e9d23-0220-475c-9908-6edcae6753e1-xtables-lock\") pod \"kube-proxy-gqbcx\" (UID: \"ad5e9d23-0220-475c-9908-6edcae6753e1\") " pod="kube-system/kube-proxy-gqbcx" May 13 00:44:29.413811 kubelet[2168]: E0513 00:44:29.413683 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:29.414355 env[1298]: time="2025-05-13T00:44:29.414316209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqbcx,Uid:ad5e9d23-0220-475c-9908-6edcae6753e1,Namespace:kube-system,Attempt:0,}" May 13 00:44:29.431380 env[1298]: time="2025-05-13T00:44:29.431294920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:44:29.431380 env[1298]: time="2025-05-13T00:44:29.431347292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:44:29.431380 env[1298]: time="2025-05-13T00:44:29.431363840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:44:29.431904 env[1298]: time="2025-05-13T00:44:29.431784028Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/98669e49c7662c64c5055ca76f6103e8019a9e56d960cb6113edd8a1013cf64d pid=2258 runtime=io.containerd.runc.v2 May 13 00:44:29.441746 kubelet[2168]: E0513 00:44:29.441697 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:29.443521 env[1298]: time="2025-05-13T00:44:29.442496673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rwv2z,Uid:b4e7f51f-6bdd-48df-a992-f2ac12741f21,Namespace:kube-flannel,Attempt:0,}" May 13 00:44:29.468375 env[1298]: time="2025-05-13T00:44:29.468318428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqbcx,Uid:ad5e9d23-0220-475c-9908-6edcae6753e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"98669e49c7662c64c5055ca76f6103e8019a9e56d960cb6113edd8a1013cf64d\"" May 13 00:44:29.469433 kubelet[2168]: E0513 00:44:29.469338 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:29.473439 env[1298]: time="2025-05-13T00:44:29.473376282Z" level=info msg="CreateContainer within sandbox \"98669e49c7662c64c5055ca76f6103e8019a9e56d960cb6113edd8a1013cf64d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:44:30.309093 env[1298]: time="2025-05-13T00:44:30.309009959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:44:30.309287 env[1298]: time="2025-05-13T00:44:30.309055193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:44:30.309287 env[1298]: time="2025-05-13T00:44:30.309087267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:44:30.309392 env[1298]: time="2025-05-13T00:44:30.309265950Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10eaaad12e3a7eac4df96374a0ee8097ac7378c6955b28ee13cc52587b5093be pid=2298 runtime=io.containerd.runc.v2 May 13 00:44:30.365295 env[1298]: time="2025-05-13T00:44:30.364539556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rwv2z,Uid:b4e7f51f-6bdd-48df-a992-f2ac12741f21,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"10eaaad12e3a7eac4df96374a0ee8097ac7378c6955b28ee13cc52587b5093be\"" May 13 00:44:30.365450 kubelet[2168]: E0513 00:44:30.365104 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:30.366325 env[1298]: time="2025-05-13T00:44:30.366287939Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 13 00:44:30.485438 env[1298]: time="2025-05-13T00:44:30.485362320Z" level=info msg="CreateContainer within sandbox \"98669e49c7662c64c5055ca76f6103e8019a9e56d960cb6113edd8a1013cf64d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cb490d054d69e76eb50ece40a45076a9a2d9ba1912cad36e694c659736fbbd76\"" May 13 00:44:30.486060 env[1298]: time="2025-05-13T00:44:30.486014144Z" level=info msg="StartContainer for \"cb490d054d69e76eb50ece40a45076a9a2d9ba1912cad36e694c659736fbbd76\"" May 13 00:44:30.719129 env[1298]: time="2025-05-13T00:44:30.718975752Z" level=info msg="StartContainer for \"cb490d054d69e76eb50ece40a45076a9a2d9ba1912cad36e694c659736fbbd76\" returns successfully" May 13 00:44:31.515534 kubelet[2168]: E0513 00:44:31.515498 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:32.517058 kubelet[2168]: E0513 00:44:32.517005 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:32.749569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272828554.mount: Deactivated successfully. 
May 13 00:44:33.230843 env[1298]: time="2025-05-13T00:44:33.230753375Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:33.233876 env[1298]: time="2025-05-13T00:44:33.233804295Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:33.237267 env[1298]: time="2025-05-13T00:44:33.237141600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:33.239724 env[1298]: time="2025-05-13T00:44:33.239486900Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:33.240276 env[1298]: time="2025-05-13T00:44:33.240220452Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" May 13 00:44:33.243284 env[1298]: time="2025-05-13T00:44:33.243229919Z" level=info msg="CreateContainer within sandbox \"10eaaad12e3a7eac4df96374a0ee8097ac7378c6955b28ee13cc52587b5093be\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 13 00:44:33.264341 env[1298]: time="2025-05-13T00:44:33.264216033Z" level=info msg="CreateContainer within sandbox \"10eaaad12e3a7eac4df96374a0ee8097ac7378c6955b28ee13cc52587b5093be\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"5acef57e2ad0f69d47ba68d3f375ec12afa89d38828ab9af616ca0bfa7b157f8\"" May 13 00:44:33.265061 env[1298]: time="2025-05-13T00:44:33.265014632Z" level=info msg="StartContainer for \"5acef57e2ad0f69d47ba68d3f375ec12afa89d38828ab9af616ca0bfa7b157f8\"" May 13 00:44:33.328352 env[1298]: time="2025-05-13T00:44:33.328273392Z" level=info msg="StartContainer for \"5acef57e2ad0f69d47ba68d3f375ec12afa89d38828ab9af616ca0bfa7b157f8\" returns successfully" May 13 00:44:33.490433 env[1298]: time="2025-05-13T00:44:33.490250568Z" level=info msg="shim disconnected" id=5acef57e2ad0f69d47ba68d3f375ec12afa89d38828ab9af616ca0bfa7b157f8 May 13 00:44:33.490433 env[1298]: time="2025-05-13T00:44:33.490312918Z" level=warning msg="cleaning up after shim disconnected" id=5acef57e2ad0f69d47ba68d3f375ec12afa89d38828ab9af616ca0bfa7b157f8 namespace=k8s.io May 13 00:44:33.490433 env[1298]: time="2025-05-13T00:44:33.490326870Z" level=info msg="cleaning up dead shim" May 13 00:44:33.499678 env[1298]: time="2025-05-13T00:44:33.499609963Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2537 runtime=io.containerd.runc.v2\n" May 13 00:44:33.520619 kubelet[2168]: E0513 00:44:33.520583 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:33.521775 env[1298]: time="2025-05-13T00:44:33.521732136Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 13 00:44:33.536130 kubelet[2168]: I0513 00:44:33.535805 2168 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-proxy-gqbcx" podStartSLOduration=5.535786622 podStartE2EDuration="5.535786622s" podCreationTimestamp="2025-05-13 00:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:44:31.592715155 +0000 UTC m=+16.241535468" watchObservedRunningTime="2025-05-13 00:44:33.535786622 +0000 UTC m=+18.184606925" May 13 00:44:33.661375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5acef57e2ad0f69d47ba68d3f375ec12afa89d38828ab9af616ca0bfa7b157f8-rootfs.mount: Deactivated successfully. May 13 00:44:36.521111 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:60586.service. May 13 00:44:36.530692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522519275.mount: Deactivated successfully. May 13 00:44:36.563980 sshd[2550]: Accepted publickey for core from 10.0.0.1 port 60586 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:44:36.565492 sshd[2550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:44:36.571730 systemd-logind[1287]: New session 6 of user core. May 13 00:44:36.572720 systemd[1]: Started session-6.scope. May 13 00:44:36.723718 sshd[2550]: pam_unix(sshd:session): session closed for user core May 13 00:44:36.726664 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:60586.service: Deactivated successfully. May 13 00:44:36.728028 systemd-logind[1287]: Session 6 logged out. Waiting for processes to exit. May 13 00:44:36.728167 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:44:36.729137 systemd-logind[1287]: Removed session 6. May 13 00:44:37.883692 env[1298]: time="2025-05-13T00:44:37.883625671Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:37.885858 env[1298]: time="2025-05-13T00:44:37.885805071Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:37.887858 env[1298]: time="2025-05-13T00:44:37.887806590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:37.889689 env[1298]: time="2025-05-13T00:44:37.889648870Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:44:37.890331 env[1298]: time="2025-05-13T00:44:37.890286649Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 13 00:44:37.892380 env[1298]: time="2025-05-13T00:44:37.892330752Z" level=info msg="CreateContainer within sandbox \"10eaaad12e3a7eac4df96374a0ee8097ac7378c6955b28ee13cc52587b5093be\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:44:37.905890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496422316.mount: Deactivated successfully. 
May 13 00:44:37.906531 env[1298]: time="2025-05-13T00:44:37.906493020Z" level=info msg="CreateContainer within sandbox \"10eaaad12e3a7eac4df96374a0ee8097ac7378c6955b28ee13cc52587b5093be\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8b953299962e6a833d237d99d68a76488438ea86b1e6648e489cad3eed8fa509\"" May 13 00:44:37.907056 env[1298]: time="2025-05-13T00:44:37.907010885Z" level=info msg="StartContainer for \"8b953299962e6a833d237d99d68a76488438ea86b1e6648e489cad3eed8fa509\"" May 13 00:44:37.956559 env[1298]: time="2025-05-13T00:44:37.956490906Z" level=info msg="StartContainer for \"8b953299962e6a833d237d99d68a76488438ea86b1e6648e489cad3eed8fa509\" returns successfully" May 13 00:44:38.010878 kubelet[2168]: I0513 00:44:38.010843 2168 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:44:38.041437 env[1298]: time="2025-05-13T00:44:38.041321412Z" level=info msg="shim disconnected" id=8b953299962e6a833d237d99d68a76488438ea86b1e6648e489cad3eed8fa509 May 13 00:44:38.041437 env[1298]: time="2025-05-13T00:44:38.041378066Z" level=warning msg="cleaning up after shim disconnected" id=8b953299962e6a833d237d99d68a76488438ea86b1e6648e489cad3eed8fa509 namespace=k8s.io May 13 00:44:38.041437 env[1298]: time="2025-05-13T00:44:38.041390032Z" level=info msg="cleaning up dead shim" May 13 00:44:38.049777 kubelet[2168]: I0513 00:44:38.048454 2168 topology_manager.go:215] "Topology Admit Handler" podUID="24779452-b5c5-4f44-b321-aa6d2fa8586c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-blzjn" May 13 00:44:38.053429 kubelet[2168]: I0513 00:44:38.053292 2168 topology_manager.go:215] "Topology Admit Handler" podUID="06b05cff-db8f-45d2-bef2-adcfa373ad64" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b95bz" May 13 00:44:38.055884 env[1298]: time="2025-05-13T00:44:38.055826161Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:44:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2607 runtime=io.containerd.runc.v2\n" May 13 00:44:38.203828 kubelet[2168]: I0513 00:44:38.203262 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvh65\" (UniqueName: \"kubernetes.io/projected/24779452-b5c5-4f44-b321-aa6d2fa8586c-kube-api-access-pvh65\") pod \"coredns-7db6d8ff4d-blzjn\" (UID: \"24779452-b5c5-4f44-b321-aa6d2fa8586c\") " pod="kube-system/coredns-7db6d8ff4d-blzjn" May 13 00:44:38.203828 kubelet[2168]: I0513 00:44:38.203310 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvpmh\" (UniqueName: \"kubernetes.io/projected/06b05cff-db8f-45d2-bef2-adcfa373ad64-kube-api-access-gvpmh\") pod \"coredns-7db6d8ff4d-b95bz\" (UID: \"06b05cff-db8f-45d2-bef2-adcfa373ad64\") " pod="kube-system/coredns-7db6d8ff4d-b95bz" May 13 00:44:38.203828 kubelet[2168]: I0513 00:44:38.203333 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06b05cff-db8f-45d2-bef2-adcfa373ad64-config-volume\") pod \"coredns-7db6d8ff4d-b95bz\" (UID: \"06b05cff-db8f-45d2-bef2-adcfa373ad64\") " pod="kube-system/coredns-7db6d8ff4d-b95bz" May 13 00:44:38.203828 kubelet[2168]: I0513 00:44:38.203363 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24779452-b5c5-4f44-b321-aa6d2fa8586c-config-volume\") pod 
\"coredns-7db6d8ff4d-blzjn\" (UID: \"24779452-b5c5-4f44-b321-aa6d2fa8586c\") " pod="kube-system/coredns-7db6d8ff4d-blzjn" May 13 00:44:38.352820 kubelet[2168]: E0513 00:44:38.352752 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:38.353460 env[1298]: time="2025-05-13T00:44:38.353383976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-blzjn,Uid:24779452-b5c5-4f44-b321-aa6d2fa8586c,Namespace:kube-system,Attempt:0,}" May 13 00:44:38.357524 kubelet[2168]: E0513 00:44:38.357502 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:38.358121 env[1298]: time="2025-05-13T00:44:38.358090338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b95bz,Uid:06b05cff-db8f-45d2-bef2-adcfa373ad64,Namespace:kube-system,Attempt:0,}" May 13 00:44:38.393422 env[1298]: time="2025-05-13T00:44:38.393327035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b95bz,Uid:06b05cff-db8f-45d2-bef2-adcfa373ad64,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97d9dff46287d9531b905470d93b477c7b39b8c4cd78444787c70fbb8d484a4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:44:38.393712 kubelet[2168]: E0513 00:44:38.393663 2168 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d9dff46287d9531b905470d93b477c7b39b8c4cd78444787c70fbb8d484a4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:44:38.393712 kubelet[2168]: E0513 00:44:38.393741 2168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d9dff46287d9531b905470d93b477c7b39b8c4cd78444787c70fbb8d484a4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-b95bz" May 13 00:44:38.393962 kubelet[2168]: E0513 00:44:38.393761 2168 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d9dff46287d9531b905470d93b477c7b39b8c4cd78444787c70fbb8d484a4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-b95bz" May 13 00:44:38.393962 kubelet[2168]: E0513 00:44:38.393806 2168 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b95bz_kube-system(06b05cff-db8f-45d2-bef2-adcfa373ad64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b95bz_kube-system(06b05cff-db8f-45d2-bef2-adcfa373ad64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97d9dff46287d9531b905470d93b477c7b39b8c4cd78444787c70fbb8d484a4c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-b95bz" 
podUID="06b05cff-db8f-45d2-bef2-adcfa373ad64" May 13 00:44:38.394345 env[1298]: time="2025-05-13T00:44:38.394278801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-blzjn,Uid:24779452-b5c5-4f44-b321-aa6d2fa8586c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ad9ac3d3256548eb47e60fd25d06d12e1c2edfc3297a2b4a81744e19a3ebc03\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:44:38.394660 kubelet[2168]: E0513 00:44:38.394599 2168 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad9ac3d3256548eb47e60fd25d06d12e1c2edfc3297a2b4a81744e19a3ebc03\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:44:38.394728 kubelet[2168]: E0513 00:44:38.394686 2168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad9ac3d3256548eb47e60fd25d06d12e1c2edfc3297a2b4a81744e19a3ebc03\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-blzjn" May 13 00:44:38.394728 kubelet[2168]: E0513 00:44:38.394713 2168 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad9ac3d3256548eb47e60fd25d06d12e1c2edfc3297a2b4a81744e19a3ebc03\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-blzjn" May 13 00:44:38.394805 kubelet[2168]: E0513 00:44:38.394780 2168 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-blzjn_kube-system(24779452-b5c5-4f44-b321-aa6d2fa8586c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-blzjn_kube-system(24779452-b5c5-4f44-b321-aa6d2fa8586c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ad9ac3d3256548eb47e60fd25d06d12e1c2edfc3297a2b4a81744e19a3ebc03\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-blzjn" podUID="24779452-b5c5-4f44-b321-aa6d2fa8586c" May 13 00:44:38.532433 kubelet[2168]: E0513 00:44:38.531742 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:38.534072 env[1298]: time="2025-05-13T00:44:38.533996063Z" level=info msg="CreateContainer within sandbox \"10eaaad12e3a7eac4df96374a0ee8097ac7378c6955b28ee13cc52587b5093be\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 00:44:38.548413 env[1298]: time="2025-05-13T00:44:38.548341305Z" level=info msg="CreateContainer within sandbox \"10eaaad12e3a7eac4df96374a0ee8097ac7378c6955b28ee13cc52587b5093be\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"b726d33b501017f5655a877c8b59b06687607fb2e46ff861fac216ebefe6b1e0\"" May 13 00:44:38.548940 env[1298]: time="2025-05-13T00:44:38.548889871Z" level=info msg="StartContainer for \"b726d33b501017f5655a877c8b59b06687607fb2e46ff861fac216ebefe6b1e0\"" May 
13 00:44:38.589931 env[1298]: time="2025-05-13T00:44:38.589871755Z" level=info msg="StartContainer for \"b726d33b501017f5655a877c8b59b06687607fb2e46ff861fac216ebefe6b1e0\" returns successfully" May 13 00:44:38.906551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b953299962e6a833d237d99d68a76488438ea86b1e6648e489cad3eed8fa509-rootfs.mount: Deactivated successfully. May 13 00:44:39.535663 kubelet[2168]: E0513 00:44:39.535634 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:39.660720 systemd-networkd[1080]: flannel.1: Link UP May 13 00:44:39.660727 systemd-networkd[1080]: flannel.1: Gained carrier May 13 00:44:40.537336 kubelet[2168]: E0513 00:44:40.537281 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:40.965572 systemd-networkd[1080]: flannel.1: Gained IPv6LL May 13 00:44:41.726815 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:60602.service. May 13 00:44:41.767615 sshd[2798]: Accepted publickey for core from 10.0.0.1 port 60602 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:44:41.768964 sshd[2798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:44:41.772582 systemd-logind[1287]: New session 7 of user core. May 13 00:44:41.773532 systemd[1]: Started session-7.scope. May 13 00:44:41.879240 sshd[2798]: pam_unix(sshd:session): session closed for user core May 13 00:44:41.881984 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:60602.service: Deactivated successfully. May 13 00:44:41.882844 systemd-logind[1287]: Session 7 logged out. Waiting for processes to exit. May 13 00:44:41.882859 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:44:41.883715 systemd-logind[1287]: Removed session 7. May 13 00:44:46.883557 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:55600.service. May 13 00:44:46.923487 sshd[2834]: Accepted publickey for core from 10.0.0.1 port 55600 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:44:46.924795 sshd[2834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:44:46.928336 systemd-logind[1287]: New session 8 of user core. May 13 00:44:46.929331 systemd[1]: Started session-8.scope. May 13 00:44:47.036397 sshd[2834]: pam_unix(sshd:session): session closed for user core May 13 00:44:47.041050 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:55600.service: Deactivated successfully. May 13 00:44:47.042226 systemd-logind[1287]: Session 8 logged out. Waiting for processes to exit. May 13 00:44:47.042353 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:44:47.043513 systemd-logind[1287]: Removed session 8. 
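[Annotation] The CreatePodSandbox failures above occur because the flannel CNI plugin reads /run/flannel/subnet.env, a file the kube-flannel DaemonSet container only writes after it has started and obtained its subnet lease; once "StartContainer for b726d33b…" returns successfully and the flannel.1 VXLAN link gains carrier, later sandbox attempts can succeed. A sketch of the file the plugin expects is shown below. The field names are flannel's documented ones; the values are illustrative, chosen to be consistent with the 192.168.0.0/24 subnet, /17 network route, and MTU 1450 that appear in the delegated CNI configuration later in this log, and the FLANNEL_IPMASQ value is an assumption, not taken from this host:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true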
May 13 00:44:49.471474 kubelet[2168]: E0513 00:44:49.471390 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:49.472256 env[1298]: time="2025-05-13T00:44:49.472209009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-blzjn,Uid:24779452-b5c5-4f44-b321-aa6d2fa8586c,Namespace:kube-system,Attempt:0,}" May 13 00:44:49.868692 systemd-networkd[1080]: cni0: Link UP May 13 00:44:49.868699 systemd-networkd[1080]: cni0: Gained carrier May 13 00:44:49.871215 systemd-networkd[1080]: cni0: Lost carrier May 13 00:44:49.876572 systemd-networkd[1080]: veth37b406f2: Link UP May 13 00:44:49.882355 kernel: cni0: port 1(veth37b406f2) entered blocking state May 13 00:44:49.882435 kernel: cni0: port 1(veth37b406f2) entered disabled state May 13 00:44:49.883426 kernel: device veth37b406f2 entered promiscuous mode May 13 00:44:49.885431 kernel: cni0: port 1(veth37b406f2) entered blocking state May 13 00:44:49.887629 kernel: cni0: port 1(veth37b406f2) entered forwarding state May 13 00:44:49.887651 kernel: cni0: port 1(veth37b406f2) entered disabled state May 13 00:44:49.893365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth37b406f2: link becomes ready May 13 00:44:49.893442 kernel: cni0: port 1(veth37b406f2) entered blocking state May 13 00:44:49.893460 kernel: cni0: port 1(veth37b406f2) entered forwarding state May 13 00:44:49.893513 systemd-networkd[1080]: veth37b406f2: Gained carrier May 13 00:44:49.893725 systemd-networkd[1080]: cni0: Gained carrier May 13 00:44:49.899436 env[1298]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c928), "name":"cbr0", "type":"bridge"} May 13 00:44:49.899436 env[1298]: delegateAdd: netconf sent to delegate plugin: May 13 00:44:49.909740 env[1298]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:44:49.909662231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:44:49.909740 env[1298]: time="2025-05-13T00:44:49.909704860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:44:49.909740 env[1298]: time="2025-05-13T00:44:49.909719009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:44:49.910000 env[1298]: time="2025-05-13T00:44:49.909949750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d302d680c4758850e4672057ed1d3803e14083ac0f908875e30c09f33ea9f432 pid=2919 runtime=io.containerd.runc.v2 May 13 00:44:49.933058 systemd-resolved[1219]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:44:49.953265 env[1298]: time="2025-05-13T00:44:49.953219504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-blzjn,Uid:24779452-b5c5-4f44-b321-aa6d2fa8586c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d302d680c4758850e4672057ed1d3803e14083ac0f908875e30c09f33ea9f432\"" May 13 00:44:49.953904 kubelet[2168]: E0513 00:44:49.953877 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:49.955952 env[1298]: time="2025-05-13T00:44:49.955919605Z" level=info msg="CreateContainer within sandbox \"d302d680c4758850e4672057ed1d3803e14083ac0f908875e30c09f33ea9f432\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:44:49.972196 env[1298]: time="2025-05-13T00:44:49.972150776Z" level=info msg="CreateContainer within sandbox \"d302d680c4758850e4672057ed1d3803e14083ac0f908875e30c09f33ea9f432\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0f7df06913eaa6145c48ec1ff3ac7448bcb134b9f936ac03a26f8bfc92c53b1\"" May 13 00:44:49.972606 env[1298]: time="2025-05-13T00:44:49.972571153Z" level=info msg="StartContainer for \"a0f7df06913eaa6145c48ec1ff3ac7448bcb134b9f936ac03a26f8bfc92c53b1\"" May 13 00:44:50.011509 env[1298]: time="2025-05-13T00:44:50.011325407Z" level=info msg="StartContainer for \"a0f7df06913eaa6145c48ec1ff3ac7448bcb134b9f936ac03a26f8bfc92c53b1\" returns successfully" May 13 00:44:50.551980 kubelet[2168]: E0513 00:44:50.551948 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:50.643787 kubelet[2168]: I0513 00:44:50.643715 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rwv2z" podStartSLOduration=14.118217382 podStartE2EDuration="21.643697299s" podCreationTimestamp="2025-05-13 00:44:29 +0000 UTC" firstStartedPulling="2025-05-13 00:44:30.36576356 +0000 UTC m=+15.014583863" lastFinishedPulling="2025-05-13 00:44:37.891243477 +0000 UTC m=+22.540063780" observedRunningTime="2025-05-13 00:44:39.544101704 +0000 UTC m=+24.192921997" watchObservedRunningTime="2025-05-13 00:44:50.643697299 +0000 UTC m=+35.292517602" May 13 00:44:50.644017 kubelet[2168]: I0513 00:44:50.643824 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-blzjn" podStartSLOduration=21.643820966 podStartE2EDuration="21.643820966s" podCreationTimestamp="2025-05-13 00:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:44:50.643515812 +0000 UTC m=+35.292336135" watchObservedRunningTime="2025-05-13 00:44:50.643820966 +0000 UTC m=+35.292641269" May 13 00:44:50.815121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1610466668.mount: Deactivated successfully. 
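[Annotation] The "delegateAdd: netconf sent to delegate plugin" entry above shows the flannel CNI plugin handing a generated bridge configuration (name "cbr0", host-local IPAM over 192.168.0.0/24, MTU 1450, isDefaultGateway true) to the bridge plugin, which is why the cni0 bridge and the veth37b406f2 port appear in the kernel messages. A minimal /etc/cni/net.d/10-flannel.conflist that would produce a delegation like this is sketched below; the file name and the portmap entry follow the stock kube-flannel manifest and are assumptions, not confirmed by this log:

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }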
May 13 00:44:51.333537 systemd-networkd[1080]: cni0: Gained IPv6LL May 13 00:44:51.553274 kubelet[2168]: E0513 00:44:51.553236 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:51.974552 systemd-networkd[1080]: veth37b406f2: Gained IPv6LL May 13 00:44:52.039816 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:55616.service. May 13 00:44:52.081180 sshd[2995]: Accepted publickey for core from 10.0.0.1 port 55616 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:44:52.082451 sshd[2995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:44:52.086066 systemd-logind[1287]: New session 9 of user core. May 13 00:44:52.086957 systemd[1]: Started session-9.scope. May 13 00:44:52.196214 sshd[2995]: pam_unix(sshd:session): session closed for user core May 13 00:44:52.199439 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:55624.service. May 13 00:44:52.199967 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:55616.service: Deactivated successfully. May 13 00:44:52.200962 systemd-logind[1287]: Session 9 logged out. Waiting for processes to exit. May 13 00:44:52.201019 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:44:52.201926 systemd-logind[1287]: Removed session 9. May 13 00:44:52.239289 sshd[3009]: Accepted publickey for core from 10.0.0.1 port 55624 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:44:52.240655 sshd[3009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:44:52.243685 systemd-logind[1287]: New session 10 of user core. May 13 00:44:52.244375 systemd[1]: Started session-10.scope. May 13 00:44:52.383462 sshd[3009]: pam_unix(sshd:session): session closed for user core May 13 00:44:52.385834 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:55638.service. May 13 00:44:52.388999 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:55624.service: Deactivated successfully. May 13 00:44:52.390983 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:44:52.390984 systemd-logind[1287]: Session 10 logged out. Waiting for processes to exit. May 13 00:44:52.394842 systemd-logind[1287]: Removed session 10. May 13 00:44:52.433693 sshd[3020]: Accepted publickey for core from 10.0.0.1 port 55638 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:44:52.435010 sshd[3020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:44:52.438638 systemd-logind[1287]: New session 11 of user core. May 13 00:44:52.439289 systemd[1]: Started session-11.scope. May 13 00:44:52.542640 sshd[3020]: pam_unix(sshd:session): session closed for user core May 13 00:44:52.544942 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:55638.service: Deactivated successfully. May 13 00:44:52.545770 systemd-logind[1287]: Session 11 logged out. Waiting for processes to exit. May 13 00:44:52.545794 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:44:52.546523 systemd-logind[1287]: Removed session 11. 
May 13 00:44:52.555015 kubelet[2168]: E0513 00:44:52.554988 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:53.470779 kubelet[2168]: E0513 00:44:53.470742 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:53.471215 env[1298]: time="2025-05-13T00:44:53.471110914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b95bz,Uid:06b05cff-db8f-45d2-bef2-adcfa373ad64,Namespace:kube-system,Attempt:0,}" May 13 00:44:53.494198 systemd-networkd[1080]: vethe3aa52f1: Link UP May 13 00:44:53.496271 kernel: cni0: port 2(vethe3aa52f1) entered blocking state May 13 00:44:53.496344 kernel: cni0: port 2(vethe3aa52f1) entered disabled state May 13 00:44:53.496370 kernel: device vethe3aa52f1 entered promiscuous mode May 13 00:44:53.504432 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:44:53.504529 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe3aa52f1: link becomes ready May 13 00:44:53.504548 kernel: cni0: port 2(vethe3aa52f1) entered blocking state May 13 00:44:53.504563 kernel: cni0: port 2(vethe3aa52f1) entered forwarding state May 13 00:44:53.505648 systemd-networkd[1080]: vethe3aa52f1: Gained carrier May 13 00:44:53.507103 env[1298]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000016928), "name":"cbr0", "type":"bridge"} May 13 00:44:53.507103 env[1298]: delegateAdd: netconf sent to delegate plugin: May 13 00:44:53.516040 env[1298]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:44:53.515973431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:44:53.516040 env[1298]: time="2025-05-13T00:44:53.516014296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:44:53.516040 env[1298]: time="2025-05-13T00:44:53.516032875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:44:53.516226 env[1298]: time="2025-05-13T00:44:53.516180439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43bbc70fb9879fb986eb836802c9b5a38ac8b8af9dc39d1206c10d3475c62321 pid=3082 runtime=io.containerd.runc.v2 May 13 00:44:53.534709 systemd-resolved[1219]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:44:53.557124 env[1298]: time="2025-05-13T00:44:53.557089634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b95bz,Uid:06b05cff-db8f-45d2-bef2-adcfa373ad64,Namespace:kube-system,Attempt:0,} returns sandbox id \"43bbc70fb9879fb986eb836802c9b5a38ac8b8af9dc39d1206c10d3475c62321\"" May 13 00:44:53.557642 kubelet[2168]: E0513 00:44:53.557618 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:53.559016 env[1298]: time="2025-05-13T00:44:53.558979356Z" level=info msg="CreateContainer within sandbox \"43bbc70fb9879fb986eb836802c9b5a38ac8b8af9dc39d1206c10d3475c62321\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:44:53.571673 env[1298]: time="2025-05-13T00:44:53.571637421Z" level=info msg="CreateContainer within sandbox \"43bbc70fb9879fb986eb836802c9b5a38ac8b8af9dc39d1206c10d3475c62321\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46e963c02970615b9422013d0e5cb76fe64ef0356c28b9946a49e627b197ce56\"" May 13 00:44:53.572191 env[1298]: time="2025-05-13T00:44:53.572150501Z" level=info msg="StartContainer for \"46e963c02970615b9422013d0e5cb76fe64ef0356c28b9946a49e627b197ce56\"" May 13 00:44:53.613828 env[1298]: time="2025-05-13T00:44:53.613782928Z" level=info msg="StartContainer for \"46e963c02970615b9422013d0e5cb76fe64ef0356c28b9946a49e627b197ce56\" returns successfully" May 13 00:44:54.560204 kubelet[2168]: E0513 00:44:54.559354 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:54.754063 kubelet[2168]: I0513 00:44:54.754000 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b95bz" podStartSLOduration=25.753978323 podStartE2EDuration="25.753978323s" podCreationTimestamp="2025-05-13 00:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:44:54.695560115 +0000 UTC m=+39.344380428" watchObservedRunningTime="2025-05-13 00:44:54.753978323 +0000 UTC m=+39.402798626" May 13 00:44:54.981540 systemd-networkd[1080]: vethe3aa52f1: Gained IPv6LL May 13 00:44:55.561071 kubelet[2168]: E0513 00:44:55.561029 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:56.562498 kubelet[2168]: E0513 00:44:56.562468 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:44:57.546724 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:56074.service. 
May 13 00:44:57.584416 sshd[3180]: Accepted publickey for core from 10.0.0.1 port 56074 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:44:57.585294 sshd[3180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:44:57.589140 systemd-logind[1287]: New session 12 of user core. May 13 00:44:57.590120 systemd[1]: Started session-12.scope. May 13 00:44:57.688396 sshd[3180]: pam_unix(sshd:session): session closed for user core May 13 00:44:57.690466 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:56074.service: Deactivated successfully. May 13 00:44:57.691532 systemd-logind[1287]: Session 12 logged out. Waiting for processes to exit. May 13 00:44:57.691576 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:44:57.692510 systemd-logind[1287]: Removed session 12. May 13 00:45:02.693334 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:56090.service. May 13 00:45:02.732505 sshd[3217]: Accepted publickey for core from 10.0.0.1 port 56090 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:45:02.733934 sshd[3217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:45:02.737689 systemd-logind[1287]: New session 13 of user core. May 13 00:45:02.738641 systemd[1]: Started session-13.scope. May 13 00:45:02.842305 sshd[3217]: pam_unix(sshd:session): session closed for user core May 13 00:45:02.844976 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:56098.service. May 13 00:45:02.845440 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:56090.service: Deactivated successfully. May 13 00:45:02.846813 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:45:02.847012 systemd-logind[1287]: Session 13 logged out. Waiting for processes to exit. May 13 00:45:02.848195 systemd-logind[1287]: Removed session 13. May 13 00:45:02.888001 sshd[3230]: Accepted publickey for core from 10.0.0.1 port 56098 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:45:02.889366 sshd[3230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:45:02.893499 systemd-logind[1287]: New session 14 of user core. May 13 00:45:02.894458 systemd[1]: Started session-14.scope. May 13 00:45:03.070023 sshd[3230]: pam_unix(sshd:session): session closed for user core May 13 00:45:03.073128 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:56114.service. May 13 00:45:03.073898 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:56098.service: Deactivated successfully. May 13 00:45:03.075063 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:45:03.075693 systemd-logind[1287]: Session 14 logged out. Waiting for processes to exit. May 13 00:45:03.076858 systemd-logind[1287]: Removed session 14. May 13 00:45:03.116439 sshd[3242]: Accepted publickey for core from 10.0.0.1 port 56114 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:45:03.118021 sshd[3242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:45:03.123058 systemd-logind[1287]: New session 15 of user core. May 13 00:45:03.124019 systemd[1]: Started session-15.scope. May 13 00:45:04.772831 sshd[3242]: pam_unix(sshd:session): session closed for user core May 13 00:45:04.775771 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:56118.service. May 13 00:45:04.777676 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:56114.service: Deactivated successfully. May 13 00:45:04.778358 systemd[1]: session-15.scope: Deactivated successfully. 
May 13 00:45:04.778848 systemd-logind[1287]: Session 15 logged out. Waiting for processes to exit. May 13 00:45:04.779622 systemd-logind[1287]: Removed session 15. May 13 00:45:04.823810 sshd[3266]: Accepted publickey for core from 10.0.0.1 port 56118 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:45:04.824972 sshd[3266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:45:04.828541 systemd-logind[1287]: New session 16 of user core. May 13 00:45:04.829320 systemd[1]: Started session-16.scope. May 13 00:45:05.057209 sshd[3266]: pam_unix(sshd:session): session closed for user core May 13 00:45:05.058569 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:56126.service. May 13 00:45:05.061338 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:56118.service: Deactivated successfully. May 13 00:45:05.062467 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:45:05.063459 systemd-logind[1287]: Session 16 logged out. Waiting for processes to exit. May 13 00:45:05.064478 systemd-logind[1287]: Removed session 16. May 13 00:45:05.097315 sshd[3294]: Accepted publickey for core from 10.0.0.1 port 56126 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:45:05.098420 sshd[3294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:45:05.102412 systemd-logind[1287]: New session 17 of user core. May 13 00:45:05.103733 systemd[1]: Started session-17.scope. May 13 00:45:05.214028 sshd[3294]: pam_unix(sshd:session): session closed for user core May 13 00:45:05.216849 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:56126.service: Deactivated successfully. May 13 00:45:05.218007 systemd-logind[1287]: Session 17 logged out. Waiting for processes to exit. May 13 00:45:05.218029 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:45:05.219125 systemd-logind[1287]: Removed session 17. May 13 00:45:10.217718 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:60082.service. May 13 00:45:10.256197 sshd[3331]: Accepted publickey for core from 10.0.0.1 port 60082 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:45:10.257376 sshd[3331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:45:10.260548 systemd-logind[1287]: New session 18 of user core. May 13 00:45:10.261469 systemd[1]: Started session-18.scope. May 13 00:45:10.359461 sshd[3331]: pam_unix(sshd:session): session closed for user core May 13 00:45:10.362020 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:60082.service: Deactivated successfully. May 13 00:45:10.363151 systemd-logind[1287]: Session 18 logged out. Waiting for processes to exit. May 13 00:45:10.363211 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:45:10.364139 systemd-logind[1287]: Removed session 18. May 13 00:45:15.363266 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:60092.service. May 13 00:45:15.402422 sshd[3369]: Accepted publickey for core from 10.0.0.1 port 60092 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:45:15.403757 sshd[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:45:15.407475 systemd-logind[1287]: New session 19 of user core. May 13 00:45:15.408533 systemd[1]: Started session-19.scope. May 13 00:45:15.507483 sshd[3369]: pam_unix(sshd:session): session closed for user core May 13 00:45:15.509931 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:60092.service: Deactivated successfully. 
May 13 00:45:15.510817 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:45:15.511824 systemd-logind[1287]: Session 19 logged out. Waiting for processes to exit. May 13 00:45:15.512664 systemd-logind[1287]: Removed session 19. May 13 00:45:20.510352 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:60026.service. May 13 00:45:20.549071 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 60026 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:45:20.550113 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:45:20.553214 systemd-logind[1287]: New session 20 of user core. May 13 00:45:20.553951 systemd[1]: Started session-20.scope. May 13 00:45:20.645378 sshd[3406]: pam_unix(sshd:session): session closed for user core May 13 00:45:20.647304 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:60026.service: Deactivated successfully. May 13 00:45:20.648101 systemd-logind[1287]: Session 20 logged out. Waiting for processes to exit. May 13 00:45:20.648129 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:45:20.648854 systemd-logind[1287]: Removed session 20. May 13 00:45:25.647974 systemd[1]: Started sshd@20-10.0.0.77:22-10.0.0.1:60028.service. May 13 00:45:25.686251 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 60028 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:45:25.687393 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:45:25.690926 systemd-logind[1287]: New session 21 of user core. May 13 00:45:25.691949 systemd[1]: Started session-21.scope. May 13 00:45:25.791771 sshd[3441]: pam_unix(sshd:session): session closed for user core May 13 00:45:25.794526 systemd[1]: sshd@20-10.0.0.77:22-10.0.0.1:60028.service: Deactivated successfully. May 13 00:45:25.795607 systemd-logind[1287]: Session 21 logged out. Waiting for processes to exit. May 13 00:45:25.795676 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:45:25.796631 systemd-logind[1287]: Removed session 21. May 13 00:45:27.471031 kubelet[2168]: E0513 00:45:27.471002 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
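[Annotation] The recurring kubelet dns.go "Nameserver limits exceeded" warning is logged when the resolv.conf kubelet is configured to read contains more nameservers than kubelet will copy into a pod's DNS configuration (at most three); the extra entries are dropped and the applied line is reported, here "1.1.1.1 1.0.0.1 8.8.8.8". An illustrative resolv.conf that would reproduce the warning is sketched below; only the fourth entry is assumed, the first three are taken from the applied line in the log:

    # resolv.conf as seen by kubelet (illustrative)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4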