Feb 12 20:24:44.810981 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 12 20:24:44.811005 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:24:44.811016 kernel: BIOS-provided physical RAM map: Feb 12 20:24:44.811024 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 12 20:24:44.811031 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 12 20:24:44.811039 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 12 20:24:44.811048 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Feb 12 20:24:44.811056 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Feb 12 20:24:44.811066 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 12 20:24:44.811074 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 12 20:24:44.811081 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 12 20:24:44.811089 kernel: NX (Execute Disable) protection: active Feb 12 20:24:44.811097 kernel: SMBIOS 2.8 present. Feb 12 20:24:44.811105 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 12 20:24:44.811116 kernel: Hypervisor detected: KVM Feb 12 20:24:44.811124 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 12 20:24:44.811133 kernel: kvm-clock: cpu 0, msr 4dfaa001, primary cpu clock Feb 12 20:24:44.811141 kernel: kvm-clock: using sched offset of 2154683811 cycles Feb 12 20:24:44.811162 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 12 20:24:44.811170 kernel: tsc: Detected 2794.748 MHz processor Feb 12 20:24:44.811179 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 20:24:44.811188 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 20:24:44.811197 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Feb 12 20:24:44.811208 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 20:24:44.811216 kernel: Using GB pages for direct mapping Feb 12 20:24:44.811225 kernel: ACPI: Early table checksum verification disabled Feb 12 20:24:44.811234 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Feb 12 20:24:44.811243 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:44.811251 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:44.811260 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:44.811269 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 12 20:24:44.811277 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:44.811288 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:44.811297 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:44.811305 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Feb 12 20:24:44.811314 kernel: 
ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78] Feb 12 20:24:44.811323 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 12 20:24:44.811331 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Feb 12 20:24:44.811340 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Feb 12 20:24:44.811349 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Feb 12 20:24:44.811361 kernel: No NUMA configuration found Feb 12 20:24:44.811382 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Feb 12 20:24:44.811390 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Feb 12 20:24:44.811399 kernel: Zone ranges: Feb 12 20:24:44.813207 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 20:24:44.813232 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Feb 12 20:24:44.813245 kernel: Normal empty Feb 12 20:24:44.813252 kernel: Movable zone start for each node Feb 12 20:24:44.813259 kernel: Early memory node ranges Feb 12 20:24:44.813266 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 12 20:24:44.813272 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Feb 12 20:24:44.813279 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Feb 12 20:24:44.813287 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 20:24:44.813293 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 12 20:24:44.813300 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Feb 12 20:24:44.813309 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 12 20:24:44.813315 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 12 20:24:44.813322 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 12 20:24:44.813328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 12 20:24:44.813335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 12 20:24:44.813342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 20:24:44.813349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 12 20:24:44.813355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 12 20:24:44.813362 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 20:24:44.813370 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 12 20:24:44.813377 kernel: TSC deadline timer available Feb 12 20:24:44.813384 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 12 20:24:44.813390 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 12 20:24:44.813397 kernel: kvm-guest: setup PV sched yield Feb 12 20:24:44.813404 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Feb 12 20:24:44.813410 kernel: Booting paravirtualized kernel on KVM Feb 12 20:24:44.813417 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 20:24:44.813424 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 12 20:24:44.813432 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 12 20:24:44.813439 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 12 20:24:44.813445 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 12 20:24:44.813452 kernel: kvm-guest: setup async PF for cpu 0 Feb 12 20:24:44.813459 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Feb 12 20:24:44.813465 kernel: kvm-guest: PV spinlocks enabled Feb 12 20:24:44.813472 kernel: PV 
qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 12 20:24:44.813479 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Feb 12 20:24:44.813485 kernel: Policy zone: DMA32 Feb 12 20:24:44.813494 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:24:44.813504 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 20:24:44.813511 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 12 20:24:44.813518 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 20:24:44.813524 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 20:24:44.813532 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved) Feb 12 20:24:44.813549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 12 20:24:44.813567 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 20:24:44.813585 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 20:24:44.813597 kernel: rcu: Hierarchical RCU implementation. Feb 12 20:24:44.813604 kernel: rcu: RCU event tracing is enabled. Feb 12 20:24:44.813611 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 12 20:24:44.813618 kernel: Rude variant of Tasks RCU enabled. Feb 12 20:24:44.813625 kernel: Tracing variant of Tasks RCU enabled. Feb 12 20:24:44.813631 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 20:24:44.813638 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 12 20:24:44.813645 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 12 20:24:44.813651 kernel: random: crng init done Feb 12 20:24:44.813659 kernel: Console: colour VGA+ 80x25 Feb 12 20:24:44.813666 kernel: printk: console [ttyS0] enabled Feb 12 20:24:44.813682 kernel: ACPI: Core revision 20210730 Feb 12 20:24:44.813689 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 12 20:24:44.813695 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 20:24:44.813702 kernel: x2apic enabled Feb 12 20:24:44.813709 kernel: Switched APIC routing to physical x2apic. Feb 12 20:24:44.813715 kernel: kvm-guest: setup PV IPIs Feb 12 20:24:44.813722 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 12 20:24:44.813730 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 12 20:24:44.813737 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 12 20:24:44.813744 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 12 20:24:44.813750 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 12 20:24:44.813757 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 12 20:24:44.813763 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 20:24:44.813770 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 20:24:44.813777 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 20:24:44.813784 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 20:24:44.813798 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 12 20:24:44.813804 kernel: RETBleed: Mitigation: untrained return thunk Feb 12 20:24:44.813811 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 12 20:24:44.813826 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 12 20:24:44.813834 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 20:24:44.813847 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 20:24:44.813855 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 20:24:44.813861 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 20:24:44.813869 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 12 20:24:44.813877 kernel: Freeing SMP alternatives memory: 32K Feb 12 20:24:44.813884 kernel: pid_max: default: 32768 minimum: 301 Feb 12 20:24:44.813891 kernel: LSM: Security Framework initializing Feb 12 20:24:44.813898 kernel: SELinux: Initializing. Feb 12 20:24:44.813905 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 20:24:44.813912 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 20:24:44.813919 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 12 20:24:44.813928 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 12 20:24:44.813935 kernel: ... version: 0 Feb 12 20:24:44.813941 kernel: ... bit width: 48 Feb 12 20:24:44.813948 kernel: ... generic registers: 6 Feb 12 20:24:44.813955 kernel: ... value mask: 0000ffffffffffff Feb 12 20:24:44.813962 kernel: ... max period: 00007fffffffffff Feb 12 20:24:44.813969 kernel: ... fixed-purpose events: 0 Feb 12 20:24:44.813976 kernel: ... event mask: 000000000000003f Feb 12 20:24:44.813982 kernel: signal: max sigframe size: 1776 Feb 12 20:24:44.813990 kernel: rcu: Hierarchical SRCU implementation. Feb 12 20:24:44.813997 kernel: smp: Bringing up secondary CPUs ... Feb 12 20:24:44.814004 kernel: x86: Booting SMP configuration: Feb 12 20:24:44.814011 kernel: .... 
node #0, CPUs: #1 Feb 12 20:24:44.814018 kernel: kvm-clock: cpu 1, msr 4dfaa041, secondary cpu clock Feb 12 20:24:44.814025 kernel: kvm-guest: setup async PF for cpu 1 Feb 12 20:24:44.814032 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Feb 12 20:24:44.814039 kernel: #2 Feb 12 20:24:44.814046 kernel: kvm-clock: cpu 2, msr 4dfaa081, secondary cpu clock Feb 12 20:24:44.814053 kernel: kvm-guest: setup async PF for cpu 2 Feb 12 20:24:44.814061 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Feb 12 20:24:44.814067 kernel: #3 Feb 12 20:24:44.814074 kernel: kvm-clock: cpu 3, msr 4dfaa0c1, secondary cpu clock Feb 12 20:24:44.814081 kernel: kvm-guest: setup async PF for cpu 3 Feb 12 20:24:44.814087 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Feb 12 20:24:44.814094 kernel: smp: Brought up 1 node, 4 CPUs Feb 12 20:24:44.814101 kernel: smpboot: Max logical packages: 1 Feb 12 20:24:44.814108 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 12 20:24:44.814115 kernel: devtmpfs: initialized Feb 12 20:24:44.814123 kernel: x86/mm: Memory block size: 128MB Feb 12 20:24:44.814130 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 20:24:44.814137 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 12 20:24:44.814156 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 20:24:44.814163 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 20:24:44.814170 kernel: audit: initializing netlink subsys (disabled) Feb 12 20:24:44.814179 kernel: audit: type=2000 audit(1707769485.004:1): state=initialized audit_enabled=0 res=1 Feb 12 20:24:44.814187 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 20:24:44.814195 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 20:24:44.814204 kernel: cpuidle: using governor menu Feb 12 20:24:44.814211 kernel: ACPI: bus type PCI registered Feb 12 20:24:44.814218 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 20:24:44.814225 kernel: dca service started, version 1.12.1 Feb 12 20:24:44.814232 kernel: PCI: Using configuration type 1 for base access Feb 12 20:24:44.814239 kernel: PCI: Using configuration type 1 for extended access Feb 12 20:24:44.814246 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 12 20:24:44.814253 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 20:24:44.814260 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 20:24:44.814268 kernel: ACPI: Added _OSI(Module Device) Feb 12 20:24:44.814274 kernel: ACPI: Added _OSI(Processor Device) Feb 12 20:24:44.814281 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 20:24:44.814288 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 20:24:44.814295 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 20:24:44.814302 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 20:24:44.814309 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 20:24:44.814316 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 20:24:44.814323 kernel: ACPI: Interpreter enabled Feb 12 20:24:44.814331 kernel: ACPI: PM: (supports S0 S3 S5) Feb 12 20:24:44.814338 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 20:24:44.814345 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 20:24:44.814351 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 12 20:24:44.814358 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 20:24:44.814491 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 12 20:24:44.814504 kernel: acpiphp: Slot [3] registered Feb 12 20:24:44.814511 kernel: acpiphp: Slot [4] registered Feb 12 20:24:44.814519 kernel: acpiphp: Slot [5] registered Feb 12 20:24:44.814526 kernel: acpiphp: Slot [6] registered Feb 12 20:24:44.814533 kernel: acpiphp: Slot [7] registered Feb 12 20:24:44.814540 kernel: acpiphp: Slot [8] registered Feb 12 20:24:44.814546 kernel: acpiphp: Slot [9] registered Feb 12 20:24:44.814553 kernel: acpiphp: Slot [10] registered Feb 12 20:24:44.814560 kernel: acpiphp: Slot [11] registered Feb 12 20:24:44.814567 kernel: acpiphp: Slot [12] registered Feb 12 20:24:44.814574 kernel: acpiphp: Slot [13] registered Feb 12 20:24:44.814581 kernel: acpiphp: Slot [14] registered Feb 12 20:24:44.814589 kernel: acpiphp: Slot [15] registered Feb 12 20:24:44.814596 kernel: acpiphp: Slot [16] registered Feb 12 20:24:44.814602 kernel: acpiphp: Slot [17] registered Feb 12 20:24:44.814609 kernel: acpiphp: Slot [18] registered Feb 12 20:24:44.814616 kernel: acpiphp: Slot [19] registered Feb 12 20:24:44.814623 kernel: acpiphp: Slot [20] registered Feb 12 20:24:44.814629 kernel: acpiphp: Slot [21] registered Feb 12 20:24:44.814636 kernel: acpiphp: Slot [22] registered Feb 12 20:24:44.814643 kernel: acpiphp: Slot [23] registered Feb 12 20:24:44.814651 kernel: acpiphp: Slot [24] registered Feb 12 20:24:44.814658 kernel: acpiphp: Slot [25] registered Feb 12 20:24:44.814664 kernel: acpiphp: Slot [26] registered Feb 12 20:24:44.814678 kernel: acpiphp: Slot [27] registered Feb 12 20:24:44.814686 kernel: acpiphp: Slot [28] registered Feb 12 20:24:44.814693 kernel: acpiphp: Slot [29] registered Feb 12 20:24:44.814700 kernel: acpiphp: Slot [30] registered Feb 12 20:24:44.814706 kernel: acpiphp: Slot [31] registered Feb 12 20:24:44.814713 kernel: PCI host bridge to bus 0000:00 Feb 12 20:24:44.814788 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 20:24:44.814851 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 12 20:24:44.814910 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 20:24:44.814969 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff 
window] Feb 12 20:24:44.815026 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 12 20:24:44.815086 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 20:24:44.815184 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 12 20:24:44.815296 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 12 20:24:44.815379 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 12 20:24:44.815448 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 12 20:24:44.815519 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 20:24:44.815585 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 20:24:44.815651 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 20:24:44.815731 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 20:24:44.815803 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 12 20:24:44.815870 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 12 20:24:44.815936 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 12 20:24:44.816007 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 12 20:24:44.816072 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 12 20:24:44.816234 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 12 20:24:44.816337 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 12 20:24:44.816409 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 20:24:44.816488 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 12 20:24:44.816560 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Feb 12 20:24:44.816641 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 12 20:24:44.816720 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 12 20:24:44.816797 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 12 20:24:44.816871 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 12 20:24:44.816938 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 12 20:24:44.817005 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 12 20:24:44.817080 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 12 20:24:44.817171 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 12 20:24:44.817280 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 12 20:24:44.817376 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 12 20:24:44.817470 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 12 20:24:44.817484 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 12 20:24:44.817494 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 12 20:24:44.817505 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 20:24:44.817515 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 12 20:24:44.817525 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 12 20:24:44.817535 kernel: iommu: Default domain type: Translated Feb 12 20:24:44.817545 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 20:24:44.817633 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 12 20:24:44.817740 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 
20:24:44.817853 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 12 20:24:44.817866 kernel: vgaarb: loaded Feb 12 20:24:44.817877 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 20:24:44.817887 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 20:24:44.817897 kernel: PTP clock support registered Feb 12 20:24:44.817906 kernel: PCI: Using ACPI for IRQ routing Feb 12 20:24:44.817916 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 20:24:44.817929 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 12 20:24:44.817939 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Feb 12 20:24:44.817949 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 12 20:24:44.817960 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 12 20:24:44.817969 kernel: clocksource: Switched to clocksource kvm-clock Feb 12 20:24:44.817979 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 20:24:44.817989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 20:24:44.817999 kernel: pnp: PnP ACPI init Feb 12 20:24:44.818098 kernel: pnp 00:02: [dma 2] Feb 12 20:24:44.818115 kernel: pnp: PnP ACPI: found 6 devices Feb 12 20:24:44.818125 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 20:24:44.818135 kernel: NET: Registered PF_INET protocol family Feb 12 20:24:44.818156 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 12 20:24:44.818167 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 12 20:24:44.818177 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 20:24:44.818187 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 20:24:44.818197 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 12 20:24:44.818210 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 12 20:24:44.818219 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 20:24:44.818229 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 20:24:44.818239 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 20:24:44.818249 kernel: NET: Registered PF_XDP protocol family Feb 12 20:24:44.818345 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 12 20:24:44.818429 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 12 20:24:44.818510 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 12 20:24:44.818590 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 12 20:24:44.818683 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 12 20:24:44.818783 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 12 20:24:44.818881 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 20:24:44.818976 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 12 20:24:44.818989 kernel: PCI: CLS 0 bytes, default 64 Feb 12 20:24:44.819000 kernel: Initialise system trusted keyrings Feb 12 20:24:44.819010 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 12 20:24:44.819020 kernel: Key type asymmetric registered Feb 12 20:24:44.819032 kernel: Asymmetric key parser 'x509' registered Feb 12 20:24:44.819042 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 20:24:44.819052 kernel: io scheduler mq-deadline 
registered Feb 12 20:24:44.819062 kernel: io scheduler kyber registered Feb 12 20:24:44.819072 kernel: io scheduler bfq registered Feb 12 20:24:44.819082 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 20:24:44.819092 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 12 20:24:44.819102 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 12 20:24:44.819112 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 12 20:24:44.819124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 20:24:44.819134 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 20:24:44.819157 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 12 20:24:44.819171 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 20:24:44.819181 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 20:24:44.819191 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 20:24:44.819289 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 12 20:24:44.819375 kernel: rtc_cmos 00:05: registered as rtc0 Feb 12 20:24:44.819462 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T20:24:44 UTC (1707769484) Feb 12 20:24:44.819546 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 12 20:24:44.819559 kernel: NET: Registered PF_INET6 protocol family Feb 12 20:24:44.819569 kernel: Segment Routing with IPv6 Feb 12 20:24:44.819579 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 20:24:44.819589 kernel: NET: Registered PF_PACKET protocol family Feb 12 20:24:44.819599 kernel: Key type dns_resolver registered Feb 12 20:24:44.819609 kernel: IPI shorthand broadcast: enabled Feb 12 20:24:44.819619 kernel: sched_clock: Marking stable (377390145, 73638025)->(476006352, -24978182) Feb 12 20:24:44.819631 kernel: registered taskstats version 1 Feb 12 20:24:44.819641 kernel: Loading compiled-in X.509 certificates Feb 12 20:24:44.819651 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 20:24:44.819661 kernel: Key type .fscrypt registered Feb 12 20:24:44.819671 kernel: Key type fscrypt-provisioning registered Feb 12 20:24:44.819689 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 20:24:44.819699 kernel: ima: Allocated hash algorithm: sha1 Feb 12 20:24:44.819709 kernel: ima: No architecture policies found Feb 12 20:24:44.819721 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 20:24:44.819732 kernel: Write protecting the kernel read-only data: 28672k Feb 12 20:24:44.819742 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 20:24:44.819752 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 20:24:44.819761 kernel: Run /init as init process Feb 12 20:24:44.819771 kernel: with arguments: Feb 12 20:24:44.819781 kernel: /init Feb 12 20:24:44.819791 kernel: with environment: Feb 12 20:24:44.819815 kernel: HOME=/ Feb 12 20:24:44.819825 kernel: TERM=linux Feb 12 20:24:44.819837 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 20:24:44.819851 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:24:44.819865 systemd[1]: Detected virtualization kvm. 
Feb 12 20:24:44.819876 systemd[1]: Detected architecture x86-64. Feb 12 20:24:44.819887 systemd[1]: Running in initrd. Feb 12 20:24:44.819898 systemd[1]: No hostname configured, using default hostname. Feb 12 20:24:44.819908 systemd[1]: Hostname set to <localhost>. Feb 12 20:24:44.819921 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:24:44.819932 systemd[1]: Queued start job for default target initrd.target. Feb 12 20:24:44.819942 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:24:44.819953 systemd[1]: Reached target cryptsetup.target. Feb 12 20:24:44.819964 systemd[1]: Reached target paths.target. Feb 12 20:24:44.819974 systemd[1]: Reached target slices.target. Feb 12 20:24:44.819985 systemd[1]: Reached target swap.target. Feb 12 20:24:44.819995 systemd[1]: Reached target timers.target. Feb 12 20:24:44.820008 systemd[1]: Listening on iscsid.socket. Feb 12 20:24:44.820019 systemd[1]: Listening on iscsiuio.socket. Feb 12 20:24:44.820030 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 20:24:44.820041 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 20:24:44.820052 systemd[1]: Listening on systemd-journald.socket. Feb 12 20:24:44.820062 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:24:44.820074 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:24:44.820086 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:24:44.820097 systemd[1]: Reached target sockets.target. Feb 12 20:24:44.820108 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:24:44.820119 systemd[1]: Finished network-cleanup.service. Feb 12 20:24:44.820129 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 20:24:44.820140 systemd[1]: Starting systemd-journald.service... Feb 12 20:24:44.820162 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:24:44.820175 systemd[1]: Starting systemd-resolved.service... Feb 12 20:24:44.820186 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 20:24:44.820197 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:24:44.820208 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 20:24:44.820219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:24:44.820230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:24:44.820244 systemd-journald[198]: Journal started Feb 12 20:24:44.820300 systemd-journald[198]: Runtime Journal (/run/log/journal/25e8a3f29ccf42f7bbb523346fa40b7f) is 6.0M, max 48.5M, 42.5M free. Feb 12 20:24:44.813862 systemd-modules-load[199]: Inserted module 'overlay' Feb 12 20:24:44.849489 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 20:24:44.849513 kernel: Bridge firewalling registered Feb 12 20:24:44.849522 kernel: audit: type=1130 audit(1707769484.844:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.849532 systemd[1]: Started systemd-journald.service. Feb 12 20:24:44.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:44.837729 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 12 20:24:44.853022 kernel: audit: type=1130 audit(1707769484.849:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.838003 systemd-resolved[200]: Positive Trust Anchors: Feb 12 20:24:44.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.838011 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:24:44.859011 kernel: audit: type=1130 audit(1707769484.853:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.859026 kernel: SCSI subsystem initialized Feb 12 20:24:44.838036 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:24:44.840219 systemd-resolved[200]: Defaulting to hostname 'linux'. Feb 12 20:24:44.849801 systemd[1]: Started systemd-resolved.service. Feb 12 20:24:44.853607 systemd[1]: Reached target nss-lookup.target. Feb 12 20:24:44.866941 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 20:24:44.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.869748 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 20:24:44.874459 kernel: audit: type=1130 audit(1707769484.868:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.874479 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 20:24:44.874488 kernel: device-mapper: uevent: version 1.0.3 Feb 12 20:24:44.874497 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 20:24:44.877590 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 12 20:24:44.878285 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:24:44.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.879596 systemd[1]: Starting systemd-sysctl.service... 
Feb 12 20:24:44.882179 kernel: audit: type=1130 audit(1707769484.878:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.889123 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:24:44.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.892162 kernel: audit: type=1130 audit(1707769484.889:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.897472 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 20:24:44.901402 kernel: audit: type=1130 audit(1707769484.897:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.901413 systemd[1]: Starting dracut-cmdline.service... Feb 12 20:24:44.912551 dracut-cmdline[223]: dracut-dracut-053 Feb 12 20:24:44.914378 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:24:44.977172 kernel: Loading iSCSI transport class v2.0-870. Feb 12 20:24:44.987169 kernel: iscsi: registered transport (tcp) Feb 12 20:24:45.005174 kernel: iscsi: registered transport (qla4xxx) Feb 12 20:24:45.005200 kernel: QLogic iSCSI HBA Driver Feb 12 20:24:45.029626 systemd[1]: Finished dracut-cmdline.service. Feb 12 20:24:45.033582 kernel: audit: type=1130 audit(1707769485.029:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:45.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:45.033608 systemd[1]: Starting dracut-pre-udev.service... 
Feb 12 20:24:45.084183 kernel: raid6: avx2x4 gen() 16270 MB/s Feb 12 20:24:45.101173 kernel: raid6: avx2x4 xor() 4994 MB/s Feb 12 20:24:45.118173 kernel: raid6: avx2x2 gen() 16047 MB/s Feb 12 20:24:45.135170 kernel: raid6: avx2x2 xor() 11657 MB/s Feb 12 20:24:45.152170 kernel: raid6: avx2x1 gen() 13442 MB/s Feb 12 20:24:45.169176 kernel: raid6: avx2x1 xor() 9251 MB/s Feb 12 20:24:45.186169 kernel: raid6: sse2x4 gen() 8333 MB/s Feb 12 20:24:45.203171 kernel: raid6: sse2x4 xor() 3956 MB/s Feb 12 20:24:45.220172 kernel: raid6: sse2x2 gen() 8351 MB/s Feb 12 20:24:45.237177 kernel: raid6: sse2x2 xor() 5974 MB/s Feb 12 20:24:45.254175 kernel: raid6: sse2x1 gen() 6622 MB/s Feb 12 20:24:45.271589 kernel: raid6: sse2x1 xor() 4740 MB/s Feb 12 20:24:45.271608 kernel: raid6: using algorithm avx2x4 gen() 16270 MB/s Feb 12 20:24:45.271617 kernel: raid6: .... xor() 4994 MB/s, rmw enabled Feb 12 20:24:45.272303 kernel: raid6: using avx2x2 recovery algorithm Feb 12 20:24:45.289182 kernel: xor: automatically using best checksumming function avx Feb 12 20:24:45.419184 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 20:24:45.427190 systemd[1]: Finished dracut-pre-udev.service. Feb 12 20:24:45.430291 kernel: audit: type=1130 audit(1707769485.427:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:45.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:45.430000 audit: BPF prog-id=7 op=LOAD Feb 12 20:24:45.430000 audit: BPF prog-id=8 op=LOAD Feb 12 20:24:45.430766 systemd[1]: Starting systemd-udevd.service... Feb 12 20:24:45.441898 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 12 20:24:45.446355 systemd[1]: Started systemd-udevd.service. Feb 12 20:24:45.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:45.447499 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 20:24:45.458921 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 12 20:24:45.487185 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:24:45.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:45.488312 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:24:45.529950 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:24:45.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:45.555169 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 12 20:24:45.557192 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 20:24:45.567677 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 20:24:45.567734 kernel: AES CTR mode by8 optimization enabled Feb 12 20:24:45.567743 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 12 20:24:45.570772 kernel: GPT:9289727 != 19775487 Feb 12 20:24:45.570825 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:24:45.570835 kernel: GPT:9289727 != 19775487 Feb 12 20:24:45.570843 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:24:45.570851 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:24:45.592173 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (468) Feb 12 20:24:45.604171 kernel: libata version 3.00 loaded. Feb 12 20:24:45.605413 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:24:45.615162 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 20:24:45.615292 kernel: scsi host0: ata_piix Feb 12 20:24:45.615398 kernel: scsi host1: ata_piix Feb 12 20:24:45.615487 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 12 20:24:45.615498 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 12 20:24:45.619439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:24:45.622037 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 20:24:45.622497 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:24:45.626002 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 20:24:45.629785 systemd[1]: Starting disk-uuid.service... Feb 12 20:24:45.637501 disk-uuid[528]: Primary Header is updated. Feb 12 20:24:45.637501 disk-uuid[528]: Secondary Entries is updated. Feb 12 20:24:45.637501 disk-uuid[528]: Secondary Header is updated. Feb 12 20:24:45.641167 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:24:45.644165 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:24:45.764259 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 12 20:24:45.764333 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 12 20:24:45.794267 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 12 20:24:45.794466 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 20:24:45.812180 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 12 20:24:46.645562 disk-uuid[529]: The operation has completed successfully. Feb 12 20:24:46.646641 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:24:46.667020 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:24:46.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.667129 systemd[1]: Finished disk-uuid.service. Feb 12 20:24:46.675979 systemd[1]: Starting verity-setup.service... Feb 12 20:24:46.687174 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 12 20:24:46.705071 systemd[1]: Found device dev-mapper-usr.device. Feb 12 20:24:46.707114 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:24:46.709355 systemd[1]: Finished verity-setup.service. Feb 12 20:24:46.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.766171 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. Feb 12 20:24:46.766332 systemd[1]: Mounted sysusr-usr.mount. Feb 12 20:24:46.766716 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:24:46.767590 systemd[1]: Starting ignition-setup.service... Feb 12 20:24:46.769892 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 20:24:46.779115 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:24:46.779164 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:24:46.779178 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:24:46.785370 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:24:46.791660 systemd[1]: Finished ignition-setup.service. Feb 12 20:24:46.793099 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:24:46.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.829521 ignition[629]: Ignition 2.14.0 Feb 12 20:24:46.829532 ignition[629]: Stage: fetch-offline Feb 12 20:24:46.829572 ignition[629]: no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:46.829580 ignition[629]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:46.829670 ignition[629]: parsed url from cmdline: "" Feb 12 20:24:46.829673 ignition[629]: no config URL provided Feb 12 20:24:46.829678 ignition[629]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:24:46.829684 ignition[629]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:24:46.829698 ignition[629]: op(1): [started] loading QEMU firmware config module Feb 12 20:24:46.829702 ignition[629]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 20:24:46.832447 ignition[629]: op(1): [finished] loading QEMU firmware config module Feb 12 20:24:46.837498 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:24:46.839296 systemd[1]: Starting systemd-networkd.service... Feb 12 20:24:46.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.838000 audit: BPF prog-id=9 op=LOAD Feb 12 20:24:46.864125 ignition[629]: parsing config with SHA512: b39e2590417af7cb118cb867bc1434cd6576c5eba31b29bcc1ec4a5d518daaa48700343878f1a430ab8b9212d3842c2965b4d0bb83681c00eb4256069b00f85a Feb 12 20:24:46.881066 systemd-networkd[710]: lo: Link UP Feb 12 20:24:46.881075 systemd-networkd[710]: lo: Gained carrier Feb 12 20:24:46.881437 systemd-networkd[710]: Enumeration completed Feb 12 20:24:46.881602 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:24:46.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.885342 systemd[1]: Started systemd-networkd.service. Feb 12 20:24:46.885995 systemd[1]: Reached target network.target. Feb 12 20:24:46.887163 systemd[1]: Starting iscsiuio.service... Feb 12 20:24:46.888708 systemd-networkd[710]: eth0: Link UP Feb 12 20:24:46.888712 systemd-networkd[710]: eth0: Gained carrier Feb 12 20:24:46.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:24:46.890525 unknown[629]: fetched base config from "system" Feb 12 20:24:46.891429 ignition[629]: fetch-offline: fetch-offline passed Feb 12 20:24:46.890530 unknown[629]: fetched user config from "qemu" Feb 12 20:24:46.891505 ignition[629]: Ignition finished successfully Feb 12 20:24:46.891356 systemd[1]: Started iscsiuio.service. Feb 12 20:24:46.895652 systemd[1]: Starting iscsid.service... Feb 12 20:24:46.897026 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:24:46.897931 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:24:46.897931 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 20:24:46.897931 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 20:24:46.897931 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:24:46.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.903204 systemd[1]: Started iscsid.service. Feb 12 20:24:46.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.904398 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:24:46.904398 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:24:46.904398 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:24:46.905696 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 20:24:46.906581 systemd[1]: Starting ignition-kargs.service... Feb 12 20:24:46.913088 systemd[1]: Finished dracut-initqueue.service. Feb 12 20:24:46.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.913781 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:24:46.914881 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:24:46.918020 ignition[717]: Ignition 2.14.0 Feb 12 20:24:46.915241 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 20:24:46.918025 ignition[717]: Stage: kargs Feb 12 20:24:46.915462 systemd[1]: Reached target remote-fs.target. Feb 12 20:24:46.918103 ignition[717]: no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:46.916802 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:24:46.918111 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:46.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.921563 systemd[1]: Finished ignition-kargs.service. 
Feb 12 20:24:46.919160 ignition[717]: kargs: kargs passed Feb 12 20:24:46.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.923593 systemd[1]: Starting ignition-disks.service... Feb 12 20:24:46.919191 ignition[717]: Ignition finished successfully Feb 12 20:24:46.924714 systemd[1]: Finished dracut-pre-mount.service. Feb 12 20:24:46.929737 ignition[735]: Ignition 2.14.0 Feb 12 20:24:46.929747 ignition[735]: Stage: disks Feb 12 20:24:46.929834 ignition[735]: no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:46.929843 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:46.930900 ignition[735]: disks: disks passed Feb 12 20:24:46.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.931433 systemd[1]: Finished ignition-disks.service. Feb 12 20:24:46.930934 ignition[735]: Ignition finished successfully Feb 12 20:24:46.932633 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:24:46.933607 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:24:46.934206 systemd[1]: Reached target local-fs.target. Feb 12 20:24:46.934487 systemd[1]: Reached target sysinit.target. Feb 12 20:24:46.934596 systemd[1]: Reached target basic.target. Feb 12 20:24:46.935329 systemd[1]: Starting systemd-fsck-root.service... Feb 12 20:24:46.944704 systemd-fsck[743]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 12 20:24:46.949796 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:24:46.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:46.952185 systemd[1]: Mounting sysroot.mount... Feb 12 20:24:46.958172 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 20:24:46.958416 systemd[1]: Mounted sysroot.mount. Feb 12 20:24:46.958853 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:24:46.960832 systemd[1]: Mounting sysroot-usr.mount... Feb 12 20:24:46.961660 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:24:46.961702 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:24:46.961728 systemd[1]: Reached target ignition-diskful.target. Feb 12 20:24:46.967209 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:24:46.969057 systemd[1]: Starting initrd-setup-root.service... Feb 12 20:24:46.974059 initrd-setup-root[753]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:24:46.977563 initrd-setup-root[761]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:24:46.980941 initrd-setup-root[769]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:24:46.984103 initrd-setup-root[777]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:24:47.005141 systemd[1]: Finished initrd-setup-root.service. Feb 12 20:24:47.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:47.007216 systemd[1]: Starting ignition-mount.service... Feb 12 20:24:47.008319 systemd[1]: Starting sysroot-boot.service... Feb 12 20:24:47.014926 bash[795]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 20:24:47.021872 systemd[1]: Finished sysroot-boot.service. Feb 12 20:24:47.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:47.023041 ignition[796]: INFO : Ignition 2.14.0 Feb 12 20:24:47.023041 ignition[796]: INFO : Stage: mount Feb 12 20:24:47.023041 ignition[796]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:47.023041 ignition[796]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:47.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:47.026295 ignition[796]: INFO : mount: mount passed Feb 12 20:24:47.026295 ignition[796]: INFO : Ignition finished successfully Feb 12 20:24:47.024714 systemd[1]: Finished ignition-mount.service. Feb 12 20:24:47.717417 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:24:47.723862 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804) Feb 12 20:24:47.723894 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:24:47.723904 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:24:47.724611 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:24:47.728296 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:24:47.729815 systemd[1]: Starting ignition-files.service... 
Feb 12 20:24:47.744894 ignition[824]: INFO : Ignition 2.14.0 Feb 12 20:24:47.744894 ignition[824]: INFO : Stage: files Feb 12 20:24:47.746359 ignition[824]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:47.746359 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:47.748802 ignition[824]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:24:47.750192 ignition[824]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:24:47.750192 ignition[824]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:24:47.753395 ignition[824]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:24:47.754464 ignition[824]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:24:47.754464 ignition[824]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 20:24:47.753989 unknown[824]: wrote ssh authorized keys file for user: core Feb 12 20:24:47.757490 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 20:24:47.757490 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 20:24:48.034284 systemd-networkd[710]: eth0: Gained IPv6LL Feb 12 20:24:48.119712 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 20:24:48.236609 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 20:24:48.238637 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 20:24:48.238637 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 20:24:48.238637 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 20:24:48.508373 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 20:24:48.593353 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 20:24:48.595512 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 20:24:48.595512 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:24:48.595512 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 12 20:24:48.622440 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 20:24:48.678567 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:24:48.679965 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:24:48.681091 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 12 20:24:48.754002 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 20:24:48.965660 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 12 20:24:48.965660 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:24:48.969869 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:24:48.969869 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 12 20:24:49.013606 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 20:24:49.549250 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 12 20:24:49.551365 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:24:49.551365 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 20:24:49.551365 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 12 20:24:49.596219 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 20:24:49.853668 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 12 20:24:49.853668 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(c): 
[started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:24:49.857282 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:24:49.857282 ignition[824]: INFO : files: op(f): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:24:49.857282 ignition[824]: INFO : files: op(f): op(10): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:24:49.857282 ignition[824]: INFO : files: op(f): op(10): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:24:49.857282 ignition[824]: INFO : files: op(f): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:24:49.857282 ignition[824]: INFO : files: op(11): [started] processing unit "prepare-critools.service" Feb 12 20:24:49.885987 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 20:24:49.886019 kernel: audit: type=1130 audit(1707769489.879:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.877756 systemd[1]: Finished ignition-files.service. 
Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(11): [finished] processing unit "prepare-critools.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(13): [started] processing unit "prepare-helm.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(13): op(14): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(13): op(14): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(13): [finished] processing unit "prepare-helm.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(15): [started] processing unit "coreos-metadata.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(15): op(16): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(15): op(16): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(15): [finished] processing unit "coreos-metadata.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 20:24:49.887924 ignition[824]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:24:49.928758 kernel: audit: type=1130 audit(1707769489.887:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.928782 kernel: audit: type=1130 audit(1707769489.892:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.928793 kernel: audit: type=1131 audit(1707769489.892:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:49.928803 kernel: audit: type=1130 audit(1707769489.914:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.928817 kernel: audit: type=1131 audit(1707769489.914:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.880331 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:24:49.929684 ignition[824]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:24:49.929684 ignition[824]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 20:24:49.929684 ignition[824]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:24:49.929684 ignition[824]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:24:49.929684 ignition[824]: INFO : files: files passed Feb 12 20:24:49.929684 ignition[824]: INFO : Ignition finished successfully Feb 12 20:24:49.939442 kernel: audit: type=1130 audit(1707769489.932:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.884221 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:24:49.941320 initrd-setup-root-after-ignition[848]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 20:24:49.885027 systemd[1]: Starting ignition-quench.service... Feb 12 20:24:49.944395 initrd-setup-root-after-ignition[851]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:24:49.886645 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Feb 12 20:24:49.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.888108 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:24:49.950880 kernel: audit: type=1131 audit(1707769489.947:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.888237 systemd[1]: Finished ignition-quench.service. Feb 12 20:24:49.892872 systemd[1]: Reached target ignition-complete.target. Feb 12 20:24:49.900338 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:24:49.912885 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:24:49.965129 kernel: audit: type=1131 audit(1707769489.954:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.965163 kernel: audit: type=1131 audit(1707769489.956:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.912957 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:24:49.914814 systemd[1]: Reached target initrd-fs.target. Feb 12 20:24:49.920556 systemd[1]: Reached target initrd.target. Feb 12 20:24:49.920899 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:24:49.921523 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:24:49.931230 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:24:49.933269 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:24:49.941860 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:24:49.943200 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:24:49.944469 systemd[1]: Stopped target timers.target. Feb 12 20:24:49.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.946154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:24:49.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.946252 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:24:49.947605 systemd[1]: Stopped target initrd.target. Feb 12 20:24:49.950981 systemd[1]: Stopped target basic.target. Feb 12 20:24:49.952091 systemd[1]: Stopped target ignition-complete.target. 
Feb 12 20:24:49.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.953345 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:24:49.954543 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:24:49.954812 systemd[1]: Stopped target remote-fs.target. Feb 12 20:24:49.955023 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:24:49.955166 systemd[1]: Stopped target sysinit.target. Feb 12 20:24:49.983838 ignition[865]: INFO : Ignition 2.14.0 Feb 12 20:24:49.983838 ignition[865]: INFO : Stage: umount Feb 12 20:24:49.983838 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:49.983838 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:49.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.955374 systemd[1]: Stopped target local-fs.target. Feb 12 20:24:49.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.988462 ignition[865]: INFO : umount: umount passed Feb 12 20:24:49.988462 ignition[865]: INFO : Ignition finished successfully Feb 12 20:24:49.955479 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:24:49.955597 systemd[1]: Stopped target swap.target. Feb 12 20:24:49.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.955684 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:24:49.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.955779 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:24:49.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.956099 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:24:49.958432 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:24:49.958513 systemd[1]: Stopped dracut-initqueue.service. 
Feb 12 20:24:49.958857 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:24:49.958976 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:24:49.961552 systemd[1]: Stopped target paths.target. Feb 12 20:24:49.961752 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:24:49.968224 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:24:49.968994 systemd[1]: Stopped target slices.target. Feb 12 20:24:49.969976 systemd[1]: Stopped target sockets.target. Feb 12 20:24:49.971015 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:24:49.971083 systemd[1]: Closed iscsid.socket. Feb 12 20:24:49.972013 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:24:49.972102 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:24:50.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.973242 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:24:49.973318 systemd[1]: Stopped ignition-files.service. Feb 12 20:24:49.974955 systemd[1]: Stopping ignition-mount.service... Feb 12 20:24:49.976009 systemd[1]: Stopping iscsiuio.service... Feb 12 20:24:49.976764 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:24:49.976878 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:24:49.978515 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:24:50.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.979617 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:24:50.007000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:24:49.979782 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:24:49.980915 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:24:49.981020 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:24:49.984045 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:24:50.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.984117 systemd[1]: Stopped iscsiuio.service. Feb 12 20:24:50.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.985317 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:24:50.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.985379 systemd[1]: Stopped ignition-mount.service. Feb 12 20:24:49.986668 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:24:49.986728 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:24:49.988599 systemd[1]: Stopped target network.target. Feb 12 20:24:49.988922 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:24:49.988943 systemd[1]: Closed iscsiuio.socket. 
Feb 12 20:24:49.989127 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:24:50.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.989218 systemd[1]: Stopped ignition-disks.service. Feb 12 20:24:49.989500 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:24:49.989529 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:24:49.989699 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:24:49.989722 systemd[1]: Stopped ignition-setup.service. Feb 12 20:24:49.989984 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:24:49.990172 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:24:50.001908 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:24:50.001980 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:24:50.004222 systemd-networkd[710]: eth0: DHCPv6 lease lost Feb 12 20:24:50.027000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:24:50.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.005486 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:24:50.005569 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:24:50.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.008207 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:24:50.008244 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:24:50.010349 systemd[1]: Stopping network-cleanup.service... Feb 12 20:24:50.010670 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:24:50.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.010709 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:24:50.012105 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:24:50.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.012136 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:24:50.013818 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:24:50.013848 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:24:50.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.014209 systemd[1]: Stopping systemd-udevd.service... 
Feb 12 20:24:50.016698 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:24:50.020445 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:24:50.020599 systemd[1]: Stopped network-cleanup.service. Feb 12 20:24:50.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.026047 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:24:50.028258 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:24:50.028358 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:24:50.029590 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:24:50.029725 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:24:50.031453 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:24:50.031505 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:24:50.032566 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:24:50.032596 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:24:50.033632 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:24:50.033670 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:24:50.034782 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:24:50.034814 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:24:50.035975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:24:50.036004 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:24:50.037139 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:24:50.037183 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:24:50.038898 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:24:50.040077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:24:50.040113 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:24:50.043300 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:24:50.043364 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:24:50.043740 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:24:50.044356 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:24:50.060099 systemd[1]: Switching root. Feb 12 20:24:50.078471 iscsid[715]: iscsid shutting down. Feb 12 20:24:50.079013 systemd-journald[198]: Journal stopped Feb 12 20:24:53.001651 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Feb 12 20:24:53.001713 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:24:53.001731 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 12 20:24:53.001744 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:24:53.001761 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:24:53.001778 kernel: SELinux: policy capability open_perms=1 Feb 12 20:24:53.001795 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:24:53.001808 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:24:53.001821 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:24:53.001836 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:24:53.001849 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:24:53.001862 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:24:53.001876 systemd[1]: Successfully loaded SELinux policy in 37.360ms. Feb 12 20:24:53.001897 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.886ms. Feb 12 20:24:53.001914 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:24:53.001928 systemd[1]: Detected virtualization kvm. Feb 12 20:24:53.001944 systemd[1]: Detected architecture x86-64. Feb 12 20:24:53.001961 systemd[1]: Detected first boot. Feb 12 20:24:53.001976 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:24:53.001991 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:24:53.002005 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:24:53.002020 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:24:53.002040 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:53.002055 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:53.002073 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 20:24:53.002089 systemd[1]: Stopped iscsid.service. Feb 12 20:24:53.002102 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 20:24:53.002115 systemd[1]: Stopped initrd-switch-root.service. Feb 12 20:24:53.002129 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 20:24:53.002166 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:24:53.002183 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:24:53.002196 systemd[1]: Created slice system-getty.slice. Feb 12 20:24:53.002214 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:24:53.002228 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:24:53.002243 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:24:53.002258 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:24:53.002272 systemd[1]: Created slice user.slice. Feb 12 20:24:53.002287 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:24:53.002301 systemd[1]: Started systemd-ask-password-wall.path. 
Feb 12 20:24:53.002315 systemd[1]: Set up automount boot.automount. Feb 12 20:24:53.002328 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:24:53.002344 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 20:24:53.002360 systemd[1]: Stopped target initrd-fs.target. Feb 12 20:24:53.002374 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 20:24:53.002388 systemd[1]: Reached target integritysetup.target. Feb 12 20:24:53.002401 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:24:53.002422 systemd[1]: Reached target remote-fs.target. Feb 12 20:24:53.002438 systemd[1]: Reached target slices.target. Feb 12 20:24:53.002452 systemd[1]: Reached target swap.target. Feb 12 20:24:53.002475 systemd[1]: Reached target torcx.target. Feb 12 20:24:53.002489 systemd[1]: Reached target veritysetup.target. Feb 12 20:24:53.002505 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:24:53.002519 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:24:53.002534 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:24:53.002549 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:24:53.002562 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:24:53.002578 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:24:53.002594 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:24:53.002611 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:24:53.002626 systemd[1]: Mounting media.mount... Feb 12 20:24:53.002644 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:24:53.002658 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:24:53.002672 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:24:53.002685 systemd[1]: Mounting tmp.mount... Feb 12 20:24:53.002699 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:24:53.002713 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:24:53.002727 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:24:53.002742 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:24:53.002755 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:24:53.002771 systemd[1]: Starting modprobe@drm.service... Feb 12 20:24:53.002785 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:24:53.002798 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:24:53.002812 systemd[1]: Starting modprobe@loop.service... Feb 12 20:24:53.002826 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:24:53.002838 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 20:24:53.002851 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 20:24:53.002865 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 20:24:53.002879 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 20:24:53.002895 systemd[1]: Stopped systemd-journald.service. Feb 12 20:24:53.002911 kernel: loop: module loaded Feb 12 20:24:53.002925 systemd[1]: Starting systemd-journald.service... Feb 12 20:24:53.002938 kernel: fuse: init (API version 7.34) Feb 12 20:24:53.002950 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:24:53.002963 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:24:53.002976 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:24:53.002990 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 12 20:24:53.003004 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 20:24:53.003019 systemd[1]: Stopped verity-setup.service. Feb 12 20:24:53.003033 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:24:53.003047 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:24:53.003063 systemd-journald[971]: Journal started Feb 12 20:24:53.003113 systemd-journald[971]: Runtime Journal (/run/log/journal/25e8a3f29ccf42f7bbb523346fa40b7f) is 6.0M, max 48.5M, 42.5M free. Feb 12 20:24:50.135000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 20:24:50.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:24:50.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:24:50.685000 audit: BPF prog-id=10 op=LOAD Feb 12 20:24:50.685000 audit: BPF prog-id=10 op=UNLOAD Feb 12 20:24:50.685000 audit: BPF prog-id=11 op=LOAD Feb 12 20:24:50.685000 audit: BPF prog-id=11 op=UNLOAD Feb 12 20:24:50.717000 audit[898]: AVC avc: denied { associate } for pid=898 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 20:24:50.717000 audit[898]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558ac a1=c0000d8de0 a2=c0000e1ac0 a3=32 items=0 ppid=881 pid=898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:50.717000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:24:50.719000 audit[898]: AVC avc: denied { associate } for pid=898 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 20:24:50.719000 audit[898]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155985 a2=1ed a3=0 items=2 ppid=881 pid=898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:50.719000 audit: CWD cwd="/" Feb 12 20:24:50.719000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:50.719000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:50.719000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:24:52.893000 audit: BPF prog-id=12 op=LOAD Feb 12 20:24:52.893000 audit: BPF prog-id=3 op=UNLOAD Feb 12 20:24:52.893000 audit: BPF prog-id=13 op=LOAD Feb 12 20:24:52.893000 audit: BPF prog-id=14 op=LOAD Feb 12 20:24:52.893000 audit: BPF prog-id=4 op=UNLOAD Feb 12 20:24:52.893000 audit: BPF prog-id=5 op=UNLOAD Feb 12 20:24:52.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:52.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:52.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:52.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:52.906000 audit: BPF prog-id=12 op=UNLOAD Feb 12 20:24:52.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:52.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:52.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:52.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:52.984000 audit: BPF prog-id=15 op=LOAD Feb 12 20:24:52.984000 audit: BPF prog-id=16 op=LOAD Feb 12 20:24:52.984000 audit: BPF prog-id=17 op=LOAD Feb 12 20:24:52.984000 audit: BPF prog-id=13 op=UNLOAD Feb 12 20:24:52.984000 audit: BPF prog-id=14 op=UNLOAD Feb 12 20:24:52.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:52.999000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:24:52.999000 audit[971]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe2176fa30 a2=4000 a3=7ffe2176facc items=0 ppid=1 pid=971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:52.999000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:24:50.716015 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:52.891175 systemd[1]: Queued start job for default target multi-user.target. Feb 12 20:24:50.716280 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:24:52.891190 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 20:24:50.716305 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:24:52.894700 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 20:24:50.716343 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 20:24:50.716357 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 20:24:50.716398 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 20:24:50.716416 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 20:24:50.716674 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 20:24:50.716725 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:24:50.716743 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:24:53.005356 systemd[1]: Started systemd-journald.service. 
Feb 12 20:24:50.717112 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 20:24:50.717170 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 20:24:50.717195 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 20:24:50.717216 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 20:24:53.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.717238 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 20:24:50.717256 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 20:24:52.621299 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:24:52.621558 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:24:52.621643 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:24:52.621789 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:24:52.621835 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 20:24:52.621891 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:24:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" 
TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 20:24:53.006321 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:24:53.007011 systemd[1]: Mounted media.mount. Feb 12 20:24:53.007755 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:24:53.008606 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:24:53.009449 systemd[1]: Mounted tmp.mount. Feb 12 20:24:53.010489 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:24:53.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.011510 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:24:53.011700 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:24:53.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.012739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:24:53.012965 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:24:53.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.013968 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:24:53.014167 systemd[1]: Finished modprobe@drm.service. Feb 12 20:24:53.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.015313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 20:24:53.015496 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:24:53.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.016556 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:24:53.016724 systemd[1]: Finished modprobe@fuse.service. 
Feb 12 20:24:53.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.017697 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 20:24:53.017869 systemd[1]: Finished modprobe@loop.service. Feb 12 20:24:53.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.018943 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:24:53.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.020070 systemd[1]: Finished systemd-network-generator.service. Feb 12 20:24:53.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.021425 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:24:53.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.022749 systemd[1]: Reached target network-pre.target. Feb 12 20:24:53.024839 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:24:53.026741 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:24:53.027510 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:24:53.029164 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:24:53.038908 systemd-journald[971]: Time spent on flushing to /var/log/journal/25e8a3f29ccf42f7bbb523346fa40b7f is 19.722ms for 1110 entries. Feb 12 20:24:53.038908 systemd-journald[971]: System Journal (/var/log/journal/25e8a3f29ccf42f7bbb523346fa40b7f) is 8.0M, max 195.6M, 187.6M free. Feb 12 20:24:53.072682 systemd-journald[971]: Received client request to flush runtime journal. Feb 12 20:24:53.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:53.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.031021 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:24:53.031899 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:24:53.032930 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:24:53.033939 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:24:53.034993 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:24:53.037985 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:24:53.040179 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:24:53.041643 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:24:53.043619 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:24:53.046961 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:24:53.048511 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:24:53.049403 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:24:53.057939 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:24:53.071446 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:24:53.073673 systemd[1]: Starting systemd-udev-settle.service... Feb 12 20:24:53.074777 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:24:53.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.081489 udevadm[1003]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 20:24:53.517604 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:24:53.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.519000 audit: BPF prog-id=18 op=LOAD Feb 12 20:24:53.519000 audit: BPF prog-id=19 op=LOAD Feb 12 20:24:53.519000 audit: BPF prog-id=7 op=UNLOAD Feb 12 20:24:53.519000 audit: BPF prog-id=8 op=UNLOAD Feb 12 20:24:53.519892 systemd[1]: Starting systemd-udevd.service... Feb 12 20:24:53.535790 systemd-udevd[1004]: Using default interface naming scheme 'v252'. Feb 12 20:24:53.546154 systemd[1]: Started systemd-udevd.service. Feb 12 20:24:53.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.548000 audit: BPF prog-id=20 op=LOAD Feb 12 20:24:53.549954 systemd[1]: Starting systemd-networkd.service... 
Feb 12 20:24:53.552000 audit: BPF prog-id=21 op=LOAD Feb 12 20:24:53.552000 audit: BPF prog-id=22 op=LOAD Feb 12 20:24:53.552000 audit: BPF prog-id=23 op=LOAD Feb 12 20:24:53.554184 systemd[1]: Starting systemd-userdbd.service... Feb 12 20:24:53.584958 systemd[1]: Started systemd-userdbd.service. Feb 12 20:24:53.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.599117 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 20:24:53.605551 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:24:53.630844 systemd-networkd[1016]: lo: Link UP Feb 12 20:24:53.630857 systemd-networkd[1016]: lo: Gained carrier Feb 12 20:24:53.631280 systemd-networkd[1016]: Enumeration completed Feb 12 20:24:53.631389 systemd[1]: Started systemd-networkd.service. Feb 12 20:24:53.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.632573 systemd-networkd[1016]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:24:53.633519 systemd-networkd[1016]: eth0: Link UP Feb 12 20:24:53.633526 systemd-networkd[1016]: eth0: Gained carrier Feb 12 20:24:53.643162 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 20:24:53.645602 systemd-networkd[1016]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 20:24:53.653175 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:24:53.646000 audit[1007]: AVC avc: denied { confidentiality } for pid=1007 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:24:53.646000 audit[1007]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556c40b44540 a1=32194 a2=7f65afd8abc5 a3=5 items=108 ppid=1004 pid=1007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:53.646000 audit: CWD cwd="/" Feb 12 20:24:53.646000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=1 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=2 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=3 name=(null) inode=15536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=4 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=5 name=(null) inode=15537 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=6 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=7 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=8 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=9 name=(null) inode=15539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=10 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=11 name=(null) inode=15540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=12 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=13 name=(null) inode=15541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=14 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=15 name=(null) inode=15542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=16 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=17 name=(null) inode=15543 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=18 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=19 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=20 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=21 name=(null) inode=15545 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=22 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=23 name=(null) inode=15546 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=24 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=25 name=(null) inode=15547 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=26 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=27 name=(null) inode=15548 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=28 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=29 name=(null) inode=15549 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=30 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=31 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=32 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=33 name=(null) inode=15551 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=34 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=35 name=(null) inode=15552 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=36 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=37 name=(null) inode=15553 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 
audit: PATH item=38 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=39 name=(null) inode=15554 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=40 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=41 name=(null) inode=15555 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=42 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=43 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=44 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=45 name=(null) inode=15557 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=46 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=47 name=(null) inode=15558 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=48 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=49 name=(null) inode=15559 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=50 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=51 name=(null) inode=15560 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=52 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=53 name=(null) inode=15561 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=55 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=56 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=57 name=(null) inode=15563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=58 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=59 name=(null) inode=15564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=60 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=61 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=62 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=63 name=(null) inode=15566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=64 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=65 name=(null) inode=15567 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=66 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=67 name=(null) inode=15568 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=68 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=69 name=(null) inode=15569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=70 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=71 name=(null) inode=15570 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=72 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=73 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=74 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=75 name=(null) inode=15572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=76 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=77 name=(null) inode=15573 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=78 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=79 name=(null) inode=15574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=80 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=81 name=(null) inode=15575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=82 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=83 name=(null) inode=15576 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=84 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=85 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=86 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 
audit: PATH item=87 name=(null) inode=15578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=88 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=89 name=(null) inode=15579 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=90 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=91 name=(null) inode=15580 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=92 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=93 name=(null) inode=15581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=94 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=95 name=(null) inode=15582 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=96 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=97 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=98 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=99 name=(null) inode=15584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=100 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=101 name=(null) inode=15585 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=102 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=103 name=(null) inode=15586 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=104 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=105 name=(null) inode=15587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=106 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PATH item=107 name=(null) inode=15588 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:53.646000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:24:53.696193 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:24:53.713196 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 20:24:53.724177 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:24:53.733171 kernel: kvm: Nested Virtualization enabled Feb 12 20:24:53.733207 kernel: SVM: kvm: Nested Paging enabled Feb 12 20:24:53.733221 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 20:24:53.734167 kernel: SVM: Virtual GIF supported Feb 12 20:24:53.749183 kernel: EDAC MC: Ver: 3.0.0 Feb 12 20:24:53.762485 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:24:53.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.764172 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:24:53.770846 lvm[1040]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:24:53.794708 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:24:53.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.795463 systemd[1]: Reached target cryptsetup.target. Feb 12 20:24:53.796910 systemd[1]: Starting lvm2-activation.service... Feb 12 20:24:53.799965 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:24:53.826735 systemd[1]: Finished lvm2-activation.service. Feb 12 20:24:53.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.827460 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:24:53.828068 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:24:53.828095 systemd[1]: Reached target local-fs.target. Feb 12 20:24:53.828690 systemd[1]: Reached target machines.target. Feb 12 20:24:53.830087 systemd[1]: Starting ldconfig.service... 
Feb 12 20:24:53.831024 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:24:53.831067 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:24:53.831986 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:24:53.833841 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:24:53.835827 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:24:53.836849 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:24:53.836885 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:24:53.837808 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:24:53.841635 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1043 (bootctl) Feb 12 20:24:53.842681 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:24:53.844235 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:24:53.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.850041 systemd-tmpfiles[1046]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:24:53.851331 systemd-tmpfiles[1046]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:24:53.853933 systemd-tmpfiles[1046]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:24:53.879766 systemd-fsck[1051]: fsck.fat 4.2 (2021-01-31) Feb 12 20:24:53.879766 systemd-fsck[1051]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:24:53.882199 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:24:53.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:53.884964 systemd[1]: Mounting boot.mount... Feb 12 20:24:54.217264 systemd[1]: Mounted boot.mount. Feb 12 20:24:54.229245 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:24:54.229961 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:24:54.231265 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:24:54.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:54.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:54.239479 ldconfig[1042]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:24:54.247403 systemd[1]: Finished ldconfig.service. 
Feb 12 20:24:54.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:54.284294 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:24:54.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:54.286342 systemd[1]: Starting audit-rules.service... Feb 12 20:24:54.287733 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:24:54.289364 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:24:54.290000 audit: BPF prog-id=24 op=LOAD Feb 12 20:24:54.291805 systemd[1]: Starting systemd-resolved.service... Feb 12 20:24:54.293000 audit: BPF prog-id=25 op=LOAD Feb 12 20:24:54.294830 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:24:54.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:54.296860 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:24:54.298218 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:24:54.299534 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:24:54.301000 audit[1066]: SYSTEM_BOOT pid=1066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:24:54.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:54.305955 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:24:54.320000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:24:54.320000 audit[1075]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbe3b7890 a2=420 a3=0 items=0 ppid=1055 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:54.320000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:24:54.320856 augenrules[1075]: No rules Feb 12 20:24:54.321468 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:24:54.322773 systemd[1]: Finished audit-rules.service. Feb 12 20:24:54.325054 systemd[1]: Starting systemd-update-done.service... Feb 12 20:24:54.331520 systemd[1]: Finished systemd-update-done.service. Feb 12 20:24:54.351224 systemd-resolved[1059]: Positive Trust Anchors: Feb 12 20:24:54.351242 systemd-resolved[1059]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:24:54.351281 systemd-resolved[1059]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:24:54.352073 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:24:55.250668 systemd-timesyncd[1061]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 20:24:55.250713 systemd-timesyncd[1061]: Initial clock synchronization to Mon 2024-02-12 20:24:55.250600 UTC. Feb 12 20:24:55.250761 systemd[1]: Reached target time-set.target. Feb 12 20:24:55.255672 systemd-resolved[1059]: Defaulting to hostname 'linux'. Feb 12 20:24:55.257031 systemd[1]: Started systemd-resolved.service. Feb 12 20:24:55.257935 systemd[1]: Reached target network.target. Feb 12 20:24:55.258699 systemd[1]: Reached target nss-lookup.target. Feb 12 20:24:55.259506 systemd[1]: Reached target sysinit.target. Feb 12 20:24:55.260356 systemd[1]: Started motdgen.path. Feb 12 20:24:55.261043 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:24:55.262232 systemd[1]: Started logrotate.timer. Feb 12 20:24:55.263023 systemd[1]: Started mdadm.timer. Feb 12 20:24:55.263657 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:24:55.264431 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:24:55.264467 systemd[1]: Reached target paths.target. Feb 12 20:24:55.265146 systemd[1]: Reached target timers.target. Feb 12 20:24:55.266176 systemd[1]: Listening on dbus.socket. Feb 12 20:24:55.268034 systemd[1]: Starting docker.socket... Feb 12 20:24:55.270928 systemd[1]: Listening on sshd.socket. Feb 12 20:24:55.271744 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:24:55.272162 systemd[1]: Listening on docker.socket. Feb 12 20:24:55.272930 systemd[1]: Reached target sockets.target. Feb 12 20:24:55.273653 systemd[1]: Reached target basic.target. Feb 12 20:24:55.274389 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:24:55.274419 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:24:55.275445 systemd[1]: Starting containerd.service... Feb 12 20:24:55.277449 systemd[1]: Starting dbus.service... Feb 12 20:24:55.279197 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:24:55.281415 systemd[1]: Starting extend-filesystems.service... Feb 12 20:24:55.282292 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:24:55.283841 jq[1086]: false Feb 12 20:24:55.283499 systemd[1]: Starting motdgen.service... Feb 12 20:24:55.285838 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:24:55.290334 dbus-daemon[1085]: [system] SELinux support is enabled Feb 12 20:24:55.290154 systemd[1]: Starting prepare-critools.service... 
Feb 12 20:24:55.292126 systemd[1]: Starting prepare-helm.service... Feb 12 20:24:55.293982 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:24:55.294751 extend-filesystems[1087]: Found sr0 Feb 12 20:24:55.295680 extend-filesystems[1087]: Found vda Feb 12 20:24:55.296374 extend-filesystems[1087]: Found vda1 Feb 12 20:24:55.296374 extend-filesystems[1087]: Found vda2 Feb 12 20:24:55.296374 extend-filesystems[1087]: Found vda3 Feb 12 20:24:55.296374 extend-filesystems[1087]: Found usr Feb 12 20:24:55.299112 extend-filesystems[1087]: Found vda4 Feb 12 20:24:55.299112 extend-filesystems[1087]: Found vda6 Feb 12 20:24:55.299112 extend-filesystems[1087]: Found vda7 Feb 12 20:24:55.299112 extend-filesystems[1087]: Found vda9 Feb 12 20:24:55.299112 extend-filesystems[1087]: Checking size of /dev/vda9 Feb 12 20:24:55.302511 systemd[1]: Starting sshd-keygen.service... Feb 12 20:24:55.306404 systemd[1]: Starting systemd-logind.service... Feb 12 20:24:55.307091 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:24:55.307141 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:24:55.307607 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 20:24:55.308285 systemd[1]: Starting update-engine.service... Feb 12 20:24:55.310147 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:24:55.311694 systemd[1]: Started dbus.service. Feb 12 20:24:55.313201 jq[1108]: true Feb 12 20:24:55.315854 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:24:55.316043 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:24:55.316404 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:24:55.316570 systemd[1]: Finished motdgen.service. Feb 12 20:24:55.321717 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:24:55.321899 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:24:55.331092 jq[1117]: true Feb 12 20:24:55.331444 extend-filesystems[1087]: Resized partition /dev/vda9 Feb 12 20:24:55.335450 tar[1114]: linux-amd64/helm Feb 12 20:24:55.335691 extend-filesystems[1121]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:24:55.338466 tar[1112]: crictl Feb 12 20:24:55.339336 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 20:24:55.342290 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:24:55.342351 systemd[1]: Reached target system-config.target. Feb 12 20:24:55.343288 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:24:55.343326 systemd[1]: Reached target user-config.target. Feb 12 20:24:55.347395 tar[1111]: ./ Feb 12 20:24:55.347395 tar[1111]: ./macvlan Feb 12 20:24:55.358118 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 20:24:55.377798 update_engine[1107]: I0212 20:24:55.376469 1107 main.cc:92] Flatcar Update Engine starting Feb 12 20:24:55.378149 systemd[1]: Started update-engine.service. 
Feb 12 20:24:55.380719 extend-filesystems[1121]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 20:24:55.380719 extend-filesystems[1121]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 20:24:55.380719 extend-filesystems[1121]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 20:24:55.400751 update_engine[1107]: I0212 20:24:55.378844 1107 update_check_scheduler.cc:74] Next update check in 7m12s Feb 12 20:24:55.381004 systemd[1]: Started locksmithd.service. Feb 12 20:24:55.400876 extend-filesystems[1087]: Resized filesystem in /dev/vda9 Feb 12 20:24:55.382004 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:24:55.382167 systemd[1]: Finished extend-filesystems.service. Feb 12 20:24:55.393536 systemd-logind[1106]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:24:55.393555 systemd-logind[1106]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:24:55.398082 systemd-logind[1106]: New seat seat0. Feb 12 20:24:55.404104 systemd[1]: Started systemd-logind.service. Feb 12 20:24:55.407146 bash[1142]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:24:55.407501 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:24:55.408893 env[1118]: time="2024-02-12T20:24:55.408524723Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:24:55.422666 tar[1111]: ./static Feb 12 20:24:55.446720 tar[1111]: ./vlan Feb 12 20:24:55.477710 tar[1111]: ./portmap Feb 12 20:24:55.506503 tar[1111]: ./host-local Feb 12 20:24:55.508471 env[1118]: time="2024-02-12T20:24:55.508433924Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:24:55.508685 env[1118]: time="2024-02-12T20:24:55.508665979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:55.512497 env[1118]: time="2024-02-12T20:24:55.512468852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:24:55.512606 env[1118]: time="2024-02-12T20:24:55.512575182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:55.513104 env[1118]: time="2024-02-12T20:24:55.513080079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:24:55.513195 env[1118]: time="2024-02-12T20:24:55.513174215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:55.513282 env[1118]: time="2024-02-12T20:24:55.513259145Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:24:55.513379 env[1118]: time="2024-02-12T20:24:55.513358271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:55.513540 env[1118]: time="2024-02-12T20:24:55.513519533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 20:24:55.513884 env[1118]: time="2024-02-12T20:24:55.513865452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:55.514086 env[1118]: time="2024-02-12T20:24:55.514063994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:24:55.514171 env[1118]: time="2024-02-12T20:24:55.514151018Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:24:55.514303 env[1118]: time="2024-02-12T20:24:55.514281843Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:24:55.514416 env[1118]: time="2024-02-12T20:24:55.514395767Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.518992749Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519020451Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519035610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519064935Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519080314Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519094781Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519108015Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519123364Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519138172Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519154092Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519170272Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519185942Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:24:55.519339 env[1118]: time="2024-02-12T20:24:55.519274598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.519792549Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520096650Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520124662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520139160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520184475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520208359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520222336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520237143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520251971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520267991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520282969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520296374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520311833Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520433512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521332 env[1118]: time="2024-02-12T20:24:55.520449221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521735 env[1118]: time="2024-02-12T20:24:55.520462646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:24:55.521735 env[1118]: time="2024-02-12T20:24:55.520474990Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:24:55.521735 env[1118]: time="2024-02-12T20:24:55.520491400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:24:55.521735 env[1118]: time="2024-02-12T20:24:55.520503643Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:24:55.521735 env[1118]: time="2024-02-12T20:24:55.520525264Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:24:55.521735 env[1118]: time="2024-02-12T20:24:55.520568375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 20:24:55.521887 env[1118]: time="2024-02-12T20:24:55.520796242Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:24:55.521887 env[1118]: time="2024-02-12T20:24:55.520859611Z" level=info msg="Connect containerd service" Feb 12 20:24:55.521887 env[1118]: time="2024-02-12T20:24:55.520896480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:24:55.523339 env[1118]: time="2024-02-12T20:24:55.522171862Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:24:55.523339 env[1118]: time="2024-02-12T20:24:55.522285005Z" level=info msg="Start subscribing containerd event" Feb 12 20:24:55.523339 env[1118]: time="2024-02-12T20:24:55.522336782Z" level=info msg="Start recovering state" Feb 12 20:24:55.523339 env[1118]: time="2024-02-12T20:24:55.522390302Z" level=info msg="Start event monitor" Feb 12 20:24:55.523339 env[1118]: time="2024-02-12T20:24:55.522402305Z" level=info msg="Start snapshots syncer" Feb 12 20:24:55.523339 env[1118]: time="2024-02-12T20:24:55.522411472Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:24:55.523339 env[1118]: time="2024-02-12T20:24:55.522419256Z" level=info msg="Start streaming server" Feb 12 20:24:55.523339 env[1118]: time="2024-02-12T20:24:55.522986470Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 20:24:55.525694 env[1118]: time="2024-02-12T20:24:55.523127014Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:24:55.523718 systemd[1]: Started containerd.service. Feb 12 20:24:55.533748 tar[1111]: ./vrf Feb 12 20:24:55.557305 env[1118]: time="2024-02-12T20:24:55.557268534Z" level=info msg="containerd successfully booted in 0.180658s" Feb 12 20:24:55.561053 tar[1111]: ./bridge Feb 12 20:24:55.597043 tar[1111]: ./tuning Feb 12 20:24:55.623520 tar[1111]: ./firewall Feb 12 20:24:55.656942 tar[1111]: ./host-device Feb 12 20:24:55.686337 tar[1111]: ./sbr Feb 12 20:24:55.713067 tar[1111]: ./loopback Feb 12 20:24:55.738684 tar[1111]: ./dhcp Feb 12 20:24:55.812358 tar[1111]: ./ptp Feb 12 20:24:55.818097 tar[1114]: linux-amd64/LICENSE Feb 12 20:24:55.818210 tar[1114]: linux-amd64/README.md Feb 12 20:24:55.823194 systemd[1]: Finished prepare-helm.service. Feb 12 20:24:55.843872 tar[1111]: ./ipvlan Feb 12 20:24:55.848884 systemd[1]: Finished prepare-critools.service. Feb 12 20:24:55.853775 systemd[1]: Created slice system-sshd.slice. Feb 12 20:24:55.857266 locksmithd[1145]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:24:55.873895 tar[1111]: ./bandwidth Feb 12 20:24:55.909557 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:24:55.939039 sshd_keygen[1109]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:24:55.955444 systemd[1]: Finished sshd-keygen.service. Feb 12 20:24:55.957671 systemd[1]: Starting issuegen.service... Feb 12 20:24:55.959070 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:50950.service. Feb 12 20:24:55.961797 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:24:55.961918 systemd[1]: Finished issuegen.service. Feb 12 20:24:55.963667 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:24:55.968308 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:24:55.970597 systemd[1]: Started getty@tty1.service. Feb 12 20:24:55.972215 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:24:55.973137 systemd[1]: Reached target getty.target. Feb 12 20:24:55.973902 systemd[1]: Reached target multi-user.target. Feb 12 20:24:55.975584 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:24:55.980965 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:24:55.981090 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:24:55.982055 systemd[1]: Startup finished in 552ms (kernel) + 5.416s (initrd) + 4.988s (userspace) = 10.957s. Feb 12 20:24:55.999414 sshd[1168]: Accepted publickey for core from 10.0.0.1 port 50950 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:56.000545 sshd[1168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:56.008388 systemd-logind[1106]: New session 1 of user core. Feb 12 20:24:56.009388 systemd[1]: Created slice user-500.slice. Feb 12 20:24:56.010376 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:24:56.016692 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:24:56.017919 systemd[1]: Starting user@500.service... Feb 12 20:24:56.019897 (systemd)[1178]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:56.090650 systemd[1178]: Queued start job for default target default.target. Feb 12 20:24:56.091170 systemd[1178]: Reached target paths.target. 
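The containerd startup above reports "failed to load cni during init ... no network config found in /etc/cni/net.d", and its CRI config dump shows the plugin watching /etc/cni/net.d for configs and /opt/cni/bin for binaries, while the tar entries show plugins such as bridge, ptp and ipvlan being unpacked. As a minimal illustrative sketch only — the directory and the bridge plugin name come from the log, but the file name, network name, CNI version, subnet, and the host-local IPAM plugin are assumed example values — a conflist of roughly this shape is what the cni conf syncer would later pick up:

#!/usr/bin/env python3
"""Drop a minimal CNI conflist into /etc/cni/net.d (illustrative sketch only)."""
import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")             # NetworkPluginConfDir from the CRI config dump
CONF_FILE = CNI_CONF_DIR / "10-example.conflist"  # hypothetical file name

conflist = {
    "cniVersion": "0.4.0",              # assumed spec version for the bundled plugins
    "name": "example-pod-network",      # hypothetical network name
    "plugins": [
        {
            "type": "bridge",           # bridge plugin is among those unpacked in the log
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",           # assumed IPAM plugin, not shown in the log
                "subnet": "10.244.0.0/16",      # example pod CIDR, not taken from the log
            },
        },
    ],
}

def main() -> None:
    # Create the conf dir if missing and write the example network definition.
    CNI_CONF_DIR.mkdir(parents=True, exist_ok=True)
    CONF_FILE.write_text(json.dumps(conflist, indent=2))
    print(f"wrote {CONF_FILE}")

if __name__ == "__main__":
    main()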
Feb 12 20:24:56.091190 systemd[1178]: Reached target sockets.target. Feb 12 20:24:56.091202 systemd[1178]: Reached target timers.target. Feb 12 20:24:56.091212 systemd[1178]: Reached target basic.target. Feb 12 20:24:56.091250 systemd[1178]: Reached target default.target. Feb 12 20:24:56.091272 systemd[1178]: Startup finished in 64ms. Feb 12 20:24:56.091331 systemd[1]: Started user@500.service. Feb 12 20:24:56.092234 systemd[1]: Started session-1.scope. Feb 12 20:24:56.142406 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:35544.service. Feb 12 20:24:56.182694 sshd[1187]: Accepted publickey for core from 10.0.0.1 port 35544 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:56.183664 sshd[1187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:56.186816 systemd-logind[1106]: New session 2 of user core. Feb 12 20:24:56.187590 systemd[1]: Started session-2.scope. Feb 12 20:24:56.239344 sshd[1187]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:56.242244 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:35544.service: Deactivated successfully. Feb 12 20:24:56.242840 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:24:56.243312 systemd-logind[1106]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:24:56.244497 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:35550.service. Feb 12 20:24:56.245087 systemd-logind[1106]: Removed session 2. Feb 12 20:24:56.284296 sshd[1193]: Accepted publickey for core from 10.0.0.1 port 35550 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:56.285233 sshd[1193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:56.288013 systemd-logind[1106]: New session 3 of user core. Feb 12 20:24:56.288734 systemd[1]: Started session-3.scope. Feb 12 20:24:56.336762 sshd[1193]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:56.339327 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:35550.service: Deactivated successfully. Feb 12 20:24:56.339883 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:24:56.340378 systemd-logind[1106]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:24:56.341261 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:35554.service. Feb 12 20:24:56.342007 systemd-logind[1106]: Removed session 3. Feb 12 20:24:56.377541 sshd[1199]: Accepted publickey for core from 10.0.0.1 port 35554 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:56.378590 sshd[1199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:56.381650 systemd-logind[1106]: New session 4 of user core. Feb 12 20:24:56.382394 systemd[1]: Started session-4.scope. Feb 12 20:24:56.433988 sshd[1199]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:56.436404 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:35554.service: Deactivated successfully. Feb 12 20:24:56.436893 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:24:56.437335 systemd-logind[1106]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:24:56.438176 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:35558.service. Feb 12 20:24:56.438766 systemd-logind[1106]: Removed session 4. 
Feb 12 20:24:56.475171 sshd[1205]: Accepted publickey for core from 10.0.0.1 port 35558 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:56.476160 sshd[1205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:56.479072 systemd-logind[1106]: New session 5 of user core. Feb 12 20:24:56.479786 systemd[1]: Started session-5.scope. Feb 12 20:24:56.483552 systemd-networkd[1016]: eth0: Gained IPv6LL Feb 12 20:24:56.532545 sudo[1208]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:24:56.532709 sudo[1208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:24:57.072680 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:24:57.078102 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:24:57.078464 systemd[1]: Reached target network-online.target. Feb 12 20:24:57.079880 systemd[1]: Starting docker.service... Feb 12 20:24:57.116616 env[1226]: time="2024-02-12T20:24:57.116533659Z" level=info msg="Starting up" Feb 12 20:24:57.117952 env[1226]: time="2024-02-12T20:24:57.117923116Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:24:57.117952 env[1226]: time="2024-02-12T20:24:57.117943885Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:24:57.118014 env[1226]: time="2024-02-12T20:24:57.117965215Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:24:57.118014 env[1226]: time="2024-02-12T20:24:57.117977458Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:24:57.121502 env[1226]: time="2024-02-12T20:24:57.120110068Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:24:57.121502 env[1226]: time="2024-02-12T20:24:57.120130035Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:24:57.121502 env[1226]: time="2024-02-12T20:24:57.120143050Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:24:57.121502 env[1226]: time="2024-02-12T20:24:57.120153489Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:24:57.127051 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3262954717-merged.mount: Deactivated successfully. Feb 12 20:24:57.747722 env[1226]: time="2024-02-12T20:24:57.747670041Z" level=info msg="Loading containers: start." Feb 12 20:24:57.840346 kernel: Initializing XFRM netlink socket Feb 12 20:24:57.867175 env[1226]: time="2024-02-12T20:24:57.867134833Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 20:24:57.917763 systemd-networkd[1016]: docker0: Link UP Feb 12 20:24:57.927227 env[1226]: time="2024-02-12T20:24:57.927192874Z" level=info msg="Loading containers: done." 
Feb 12 20:24:57.937967 env[1226]: time="2024-02-12T20:24:57.937921037Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 20:24:57.938118 env[1226]: time="2024-02-12T20:24:57.938097478Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 20:24:57.938208 env[1226]: time="2024-02-12T20:24:57.938188709Z" level=info msg="Daemon has completed initialization" Feb 12 20:24:57.953953 systemd[1]: Started docker.service. Feb 12 20:24:57.957361 env[1226]: time="2024-02-12T20:24:57.957293404Z" level=info msg="API listen on /run/docker.sock" Feb 12 20:24:57.971945 systemd[1]: Reloading. Feb 12 20:24:58.034938 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-02-12T20:24:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:58.034970 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-02-12T20:24:58Z" level=info msg="torcx already run" Feb 12 20:24:58.099333 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:24:58.099346 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:58.115695 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:58.186901 systemd[1]: Started kubelet.service. Feb 12 20:24:58.244746 kubelet[1404]: E0212 20:24:58.244674 1404 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:24:58.246836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:24:58.246947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:24:58.542830 env[1118]: time="2024-02-12T20:24:58.542760433Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 20:24:59.110984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206773014.mount: Deactivated successfully. 
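The kubelet start above fails with "failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set", while the earlier CRI config dump shows containerd serving on /run/containerd/containerd.sock. Purely as a rough check, not part of this boot, a small probe like the following confirms that socket is accepting connections before the kubelet is pointed at unix:///run/containerd/containerd.sock; it does not speak the CRI gRPC protocol:

#!/usr/bin/env python3
"""Check that the containerd CRI socket is accepting connections (sketch only)."""
import socket
import sys

SOCKET_PATH = "/run/containerd/containerd.sock"  # ContainerdEndpoint from the CRI config dump

def containerd_is_listening(path: str = SOCKET_PATH, timeout: float = 2.0) -> bool:
    # A bare unix-socket connect is enough to tell whether the daemon is serving.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    if containerd_is_listening():
        print(f"containerd is serving on unix://{SOCKET_PATH}; "
              "point the kubelet at it via --container-runtime-endpoint")
        sys.exit(0)
    print("containerd socket not reachable", file=sys.stderr)
    sys.exit(1)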
Feb 12 20:25:00.845903 env[1118]: time="2024-02-12T20:25:00.845847566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:00.847889 env[1118]: time="2024-02-12T20:25:00.847856334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:00.851664 env[1118]: time="2024-02-12T20:25:00.851561504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:00.853782 env[1118]: time="2024-02-12T20:25:00.853726775Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:00.854301 env[1118]: time="2024-02-12T20:25:00.854272569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 20:25:00.863071 env[1118]: time="2024-02-12T20:25:00.863043190Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 20:25:03.355619 env[1118]: time="2024-02-12T20:25:03.355559338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:03.448802 env[1118]: time="2024-02-12T20:25:03.448754786Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:03.537003 env[1118]: time="2024-02-12T20:25:03.536933805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:03.584681 env[1118]: time="2024-02-12T20:25:03.584622265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:03.585757 env[1118]: time="2024-02-12T20:25:03.585705817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 20:25:03.595914 env[1118]: time="2024-02-12T20:25:03.595863640Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 20:25:05.110565 env[1118]: time="2024-02-12T20:25:05.110500376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:05.112595 env[1118]: time="2024-02-12T20:25:05.112542306Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:05.114381 env[1118]: 
time="2024-02-12T20:25:05.114337473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:05.116064 env[1118]: time="2024-02-12T20:25:05.116027934Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:05.116825 env[1118]: time="2024-02-12T20:25:05.116789713Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 20:25:05.124799 env[1118]: time="2024-02-12T20:25:05.124764341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:25:06.078168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343224923.mount: Deactivated successfully. Feb 12 20:25:07.120614 env[1118]: time="2024-02-12T20:25:07.120543252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:07.122854 env[1118]: time="2024-02-12T20:25:07.122813701Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:07.124579 env[1118]: time="2024-02-12T20:25:07.124548394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:07.126301 env[1118]: time="2024-02-12T20:25:07.126266367Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:07.126761 env[1118]: time="2024-02-12T20:25:07.126742009Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 20:25:07.137116 env[1118]: time="2024-02-12T20:25:07.137065533Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 20:25:07.629354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount474512082.mount: Deactivated successfully. 
Feb 12 20:25:07.634498 env[1118]: time="2024-02-12T20:25:07.634456396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:07.636233 env[1118]: time="2024-02-12T20:25:07.636192623Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:07.637651 env[1118]: time="2024-02-12T20:25:07.637620571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:07.639054 env[1118]: time="2024-02-12T20:25:07.639026779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:07.639450 env[1118]: time="2024-02-12T20:25:07.639424695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 20:25:07.647563 env[1118]: time="2024-02-12T20:25:07.647522745Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 20:25:08.332918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431520651.mount: Deactivated successfully. Feb 12 20:25:08.333958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 20:25:08.334141 systemd[1]: Stopped kubelet.service. Feb 12 20:25:08.335576 systemd[1]: Started kubelet.service. Feb 12 20:25:08.387059 kubelet[1459]: E0212 20:25:08.387000 1459 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:25:08.392782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:25:08.392888 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 20:25:12.875354 env[1118]: time="2024-02-12T20:25:12.875292917Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:12.877541 env[1118]: time="2024-02-12T20:25:12.877498274Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:12.879385 env[1118]: time="2024-02-12T20:25:12.879356329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:12.881125 env[1118]: time="2024-02-12T20:25:12.881069833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:12.881633 env[1118]: time="2024-02-12T20:25:12.881584529Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 20:25:12.890470 env[1118]: time="2024-02-12T20:25:12.890434519Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 20:25:13.609325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3688126451.mount: Deactivated successfully. Feb 12 20:25:14.267019 env[1118]: time="2024-02-12T20:25:14.266963684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:14.268742 env[1118]: time="2024-02-12T20:25:14.268695893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:14.270292 env[1118]: time="2024-02-12T20:25:14.270245951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:14.271613 env[1118]: time="2024-02-12T20:25:14.271590803Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:14.271935 env[1118]: time="2024-02-12T20:25:14.271910914Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 12 20:25:16.575802 systemd[1]: Stopped kubelet.service. Feb 12 20:25:16.589184 systemd[1]: Reloading. 
Feb 12 20:25:16.648841 /usr/lib/systemd/system-generators/torcx-generator[1563]: time="2024-02-12T20:25:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:25:16.649246 /usr/lib/systemd/system-generators/torcx-generator[1563]: time="2024-02-12T20:25:16Z" level=info msg="torcx already run" Feb 12 20:25:16.704136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:25:16.704154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:25:16.720529 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:25:16.800270 systemd[1]: Started kubelet.service. Feb 12 20:25:16.838664 kubelet[1605]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:25:16.838664 kubelet[1605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:25:16.838664 kubelet[1605]: I0212 20:25:16.838610 1605 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:25:16.840157 kubelet[1605]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:25:16.840157 kubelet[1605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:25:17.081708 kubelet[1605]: I0212 20:25:17.081672 1605 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:25:17.081708 kubelet[1605]: I0212 20:25:17.081697 1605 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:25:17.081930 kubelet[1605]: I0212 20:25:17.081914 1605 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:25:17.084174 kubelet[1605]: I0212 20:25:17.084156 1605 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:25:17.085020 kubelet[1605]: E0212 20:25:17.084992 1605 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.088983 kubelet[1605]: I0212 20:25:17.088891 1605 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:25:17.089178 kubelet[1605]: I0212 20:25:17.089156 1605 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:25:17.089251 kubelet[1605]: I0212 20:25:17.089237 1605 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:25:17.089394 kubelet[1605]: I0212 20:25:17.089264 1605 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:25:17.089394 kubelet[1605]: I0212 20:25:17.089277 1605 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:25:17.089495 kubelet[1605]: I0212 20:25:17.089436 1605 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:25:17.092400 kubelet[1605]: I0212 20:25:17.092383 1605 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:25:17.092483 kubelet[1605]: I0212 20:25:17.092406 1605 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:25:17.092483 kubelet[1605]: I0212 20:25:17.092439 1605 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:25:17.092483 kubelet[1605]: I0212 20:25:17.092461 1605 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:25:17.092980 kubelet[1605]: I0212 20:25:17.092950 1605 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:25:17.093082 kubelet[1605]: W0212 20:25:17.093037 1605 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.093123 kubelet[1605]: E0212 20:25:17.093117 1605 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.093228 kubelet[1605]: W0212 20:25:17.093199 1605 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.093264 kubelet[1605]: E0212 20:25:17.093241 1605 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.093380 kubelet[1605]: W0212 20:25:17.093357 1605 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:25:17.093853 kubelet[1605]: I0212 20:25:17.093822 1605 server.go:1186] "Started kubelet" Feb 12 20:25:17.094145 kubelet[1605]: I0212 20:25:17.094126 1605 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:25:17.095056 kubelet[1605]: E0212 20:25:17.094710 1605 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e09dafa3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 93793699, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 93793699, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.83:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.83:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:25:17.095611 kubelet[1605]: I0212 20:25:17.095591 1605 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:25:17.096925 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 20:25:17.097060 kubelet[1605]: I0212 20:25:17.097041 1605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:25:17.097060 kubelet[1605]: E0212 20:25:17.097055 1605 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:25:17.097160 kubelet[1605]: E0212 20:25:17.097080 1605 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:25:17.097583 kubelet[1605]: I0212 20:25:17.097570 1605 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:25:17.098293 kubelet[1605]: E0212 20:25:17.098253 1605 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.098369 kubelet[1605]: I0212 20:25:17.098327 1605 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:25:17.098654 kubelet[1605]: W0212 20:25:17.098621 1605 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.098766 kubelet[1605]: E0212 20:25:17.098748 1605 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.118420 kubelet[1605]: I0212 20:25:17.118397 1605 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:25:17.118420 kubelet[1605]: I0212 20:25:17.118413 1605 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:25:17.118420 kubelet[1605]: I0212 20:25:17.118425 1605 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:25:17.200030 kubelet[1605]: I0212 20:25:17.199991 1605 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:25:17.200393 kubelet[1605]: E0212 20:25:17.200372 1605 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Feb 12 20:25:17.235280 kubelet[1605]: I0212 20:25:17.235235 1605 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:25:17.239836 kubelet[1605]: I0212 20:25:17.239808 1605 policy_none.go:49] "None policy: Start" Feb 12 20:25:17.240628 kubelet[1605]: I0212 20:25:17.240600 1605 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:25:17.240628 kubelet[1605]: I0212 20:25:17.240623 1605 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:25:17.251088 systemd[1]: Created slice kubepods.slice. Feb 12 20:25:17.252978 kubelet[1605]: I0212 20:25:17.252944 1605 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 20:25:17.252978 kubelet[1605]: I0212 20:25:17.252968 1605 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:25:17.253148 kubelet[1605]: I0212 20:25:17.252988 1605 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:25:17.253148 kubelet[1605]: E0212 20:25:17.253052 1605 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 20:25:17.254016 kubelet[1605]: W0212 20:25:17.253984 1605 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.254116 kubelet[1605]: E0212 20:25:17.254025 1605 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.254673 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 20:25:17.256936 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 20:25:17.262912 kubelet[1605]: I0212 20:25:17.262884 1605 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:25:17.263140 kubelet[1605]: I0212 20:25:17.263074 1605 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:25:17.263469 kubelet[1605]: E0212 20:25:17.263434 1605 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 12 20:25:17.298911 kubelet[1605]: E0212 20:25:17.298858 1605 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.354172 kubelet[1605]: I0212 20:25:17.354066 1605 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:17.355191 kubelet[1605]: I0212 20:25:17.355175 1605 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:17.356215 kubelet[1605]: I0212 20:25:17.356185 1605 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:17.356413 kubelet[1605]: I0212 20:25:17.356379 1605 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.83:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.83:6443: connect: connection refused" Feb 12 20:25:17.357196 kubelet[1605]: I0212 20:25:17.357180 1605 status_manager.go:698] "Failed to get status for pod" podUID=f7803bc06b5ffe9930c11cecbe884c72 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.83:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.83:6443: connect: connection refused" Feb 12 20:25:17.358085 kubelet[1605]: I0212 20:25:17.358065 1605 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.83:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.83:6443: connect: connection refused" Feb 12 
20:25:17.360410 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. Feb 12 20:25:17.383483 systemd[1]: Created slice kubepods-burstable-podf7803bc06b5ffe9930c11cecbe884c72.slice. Feb 12 20:25:17.391425 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. Feb 12 20:25:17.399216 kubelet[1605]: I0212 20:25:17.399184 1605 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 12 20:25:17.399301 kubelet[1605]: I0212 20:25:17.399233 1605 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7803bc06b5ffe9930c11cecbe884c72-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f7803bc06b5ffe9930c11cecbe884c72\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:25:17.399301 kubelet[1605]: I0212 20:25:17.399263 1605 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:17.399368 kubelet[1605]: I0212 20:25:17.399309 1605 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:17.399403 kubelet[1605]: I0212 20:25:17.399384 1605 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:17.399436 kubelet[1605]: I0212 20:25:17.399411 1605 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7803bc06b5ffe9930c11cecbe884c72-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7803bc06b5ffe9930c11cecbe884c72\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:25:17.399436 kubelet[1605]: I0212 20:25:17.399432 1605 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7803bc06b5ffe9930c11cecbe884c72-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7803bc06b5ffe9930c11cecbe884c72\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:25:17.399502 kubelet[1605]: I0212 20:25:17.399463 1605 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:17.399502 
kubelet[1605]: I0212 20:25:17.399486 1605 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:17.401590 kubelet[1605]: I0212 20:25:17.401576 1605 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:25:17.401820 kubelet[1605]: E0212 20:25:17.401798 1605 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Feb 12 20:25:17.682270 kubelet[1605]: E0212 20:25:17.682146 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:17.682914 env[1118]: time="2024-02-12T20:25:17.682875127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:17.690054 kubelet[1605]: E0212 20:25:17.690021 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:17.690525 env[1118]: time="2024-02-12T20:25:17.690482827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f7803bc06b5ffe9930c11cecbe884c72,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:17.693685 kubelet[1605]: E0212 20:25:17.693662 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:17.694123 env[1118]: time="2024-02-12T20:25:17.693976851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:17.699576 kubelet[1605]: E0212 20:25:17.699541 1605 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:17.803094 kubelet[1605]: I0212 20:25:17.803071 1605 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:25:17.803482 kubelet[1605]: E0212 20:25:17.803457 1605 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Feb 12 20:25:18.075705 kubelet[1605]: W0212 20:25:18.075627 1605 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:18.075705 kubelet[1605]: E0212 20:25:18.075660 1605 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 
20:25:18.175742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249943806.mount: Deactivated successfully. Feb 12 20:25:18.180527 env[1118]: time="2024-02-12T20:25:18.180494048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.183208 env[1118]: time="2024-02-12T20:25:18.183180628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.185151 env[1118]: time="2024-02-12T20:25:18.185125676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.186465 env[1118]: time="2024-02-12T20:25:18.186439841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.188439 env[1118]: time="2024-02-12T20:25:18.188412691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.189525 env[1118]: time="2024-02-12T20:25:18.189505702Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.190716 env[1118]: time="2024-02-12T20:25:18.190686827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.191873 env[1118]: time="2024-02-12T20:25:18.191839649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.193071 env[1118]: time="2024-02-12T20:25:18.193049459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.197660 env[1118]: time="2024-02-12T20:25:18.197627035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.199854 env[1118]: time="2024-02-12T20:25:18.199827683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.200480 env[1118]: time="2024-02-12T20:25:18.200446744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:18.217653 env[1118]: time="2024-02-12T20:25:18.217568339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:18.217653 env[1118]: time="2024-02-12T20:25:18.217615928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:18.217653 env[1118]: time="2024-02-12T20:25:18.217628442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:18.217934 env[1118]: time="2024-02-12T20:25:18.217884723Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2d62752048df44d389c8362583e5a6629572d484b205c8389f3a1a4a1668eee pid=1683 runtime=io.containerd.runc.v2 Feb 12 20:25:18.228279 env[1118]: time="2024-02-12T20:25:18.228204580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:18.228423 env[1118]: time="2024-02-12T20:25:18.228236970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:18.228423 env[1118]: time="2024-02-12T20:25:18.228278098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:18.228474 env[1118]: time="2024-02-12T20:25:18.228436414Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4899de34d3f71748e0b34ee402886663116e044b5c4bd336328adb20ceacb3f1 pid=1708 runtime=io.containerd.runc.v2 Feb 12 20:25:18.229326 systemd[1]: Started cri-containerd-e2d62752048df44d389c8362583e5a6629572d484b205c8389f3a1a4a1668eee.scope. Feb 12 20:25:18.235736 env[1118]: time="2024-02-12T20:25:18.235597557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:18.235736 env[1118]: time="2024-02-12T20:25:18.235630989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:18.235736 env[1118]: time="2024-02-12T20:25:18.235639716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:18.235998 env[1118]: time="2024-02-12T20:25:18.235935791Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/832cf18a1ffda96db046ff93bfff4c2d4572843c15a8e549cb6d55a72540c66b pid=1726 runtime=io.containerd.runc.v2 Feb 12 20:25:18.243184 systemd[1]: Started cri-containerd-4899de34d3f71748e0b34ee402886663116e044b5c4bd336328adb20ceacb3f1.scope. Feb 12 20:25:18.251758 systemd[1]: Started cri-containerd-832cf18a1ffda96db046ff93bfff4c2d4572843c15a8e549cb6d55a72540c66b.scope. 
Feb 12 20:25:18.264334 kubelet[1605]: W0212 20:25:18.264258 1605 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:18.264334 kubelet[1605]: E0212 20:25:18.264335 1605 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:18.276646 env[1118]: time="2024-02-12T20:25:18.276598683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f7803bc06b5ffe9930c11cecbe884c72,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2d62752048df44d389c8362583e5a6629572d484b205c8389f3a1a4a1668eee\"" Feb 12 20:25:18.277645 kubelet[1605]: E0212 20:25:18.277624 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:18.280352 env[1118]: time="2024-02-12T20:25:18.280300968Z" level=info msg="CreateContainer within sandbox \"e2d62752048df44d389c8362583e5a6629572d484b205c8389f3a1a4a1668eee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 20:25:18.286303 env[1118]: time="2024-02-12T20:25:18.285659579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4899de34d3f71748e0b34ee402886663116e044b5c4bd336328adb20ceacb3f1\"" Feb 12 20:25:18.286376 kubelet[1605]: E0212 20:25:18.286197 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:18.287858 env[1118]: time="2024-02-12T20:25:18.287823438Z" level=info msg="CreateContainer within sandbox \"4899de34d3f71748e0b34ee402886663116e044b5c4bd336328adb20ceacb3f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 20:25:18.297702 env[1118]: time="2024-02-12T20:25:18.297655850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"832cf18a1ffda96db046ff93bfff4c2d4572843c15a8e549cb6d55a72540c66b\"" Feb 12 20:25:18.298375 kubelet[1605]: E0212 20:25:18.298355 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:18.300166 env[1118]: time="2024-02-12T20:25:18.300137185Z" level=info msg="CreateContainer within sandbox \"832cf18a1ffda96db046ff93bfff4c2d4572843c15a8e549cb6d55a72540c66b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 20:25:18.300588 env[1118]: time="2024-02-12T20:25:18.300541964Z" level=info msg="CreateContainer within sandbox \"e2d62752048df44d389c8362583e5a6629572d484b205c8389f3a1a4a1668eee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d1582ace77d7d57e0ef0f40c5030e987c19bfee9637c530a4ac915327b8bb0e\"" Feb 12 20:25:18.301004 env[1118]: time="2024-02-12T20:25:18.300981919Z" level=info msg="StartContainer for 
\"6d1582ace77d7d57e0ef0f40c5030e987c19bfee9637c530a4ac915327b8bb0e\"" Feb 12 20:25:18.308744 kubelet[1605]: W0212 20:25:18.308671 1605 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:18.308838 kubelet[1605]: E0212 20:25:18.308779 1605 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Feb 12 20:25:18.314329 env[1118]: time="2024-02-12T20:25:18.314263992Z" level=info msg="CreateContainer within sandbox \"4899de34d3f71748e0b34ee402886663116e044b5c4bd336328adb20ceacb3f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"220c7588a7f5b325370797127ad220ae03a41b58be200f162ba18ddd3f0278c2\"" Feb 12 20:25:18.314876 env[1118]: time="2024-02-12T20:25:18.314846315Z" level=info msg="StartContainer for \"220c7588a7f5b325370797127ad220ae03a41b58be200f162ba18ddd3f0278c2\"" Feb 12 20:25:18.317332 env[1118]: time="2024-02-12T20:25:18.317282705Z" level=info msg="CreateContainer within sandbox \"832cf18a1ffda96db046ff93bfff4c2d4572843c15a8e549cb6d55a72540c66b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c9f775392d2655fc04f97859efe420a9b8dbe2bbdbad28bafac7694c99e6e24e\"" Feb 12 20:25:18.317693 systemd[1]: Started cri-containerd-6d1582ace77d7d57e0ef0f40c5030e987c19bfee9637c530a4ac915327b8bb0e.scope. Feb 12 20:25:18.319065 env[1118]: time="2024-02-12T20:25:18.319039490Z" level=info msg="StartContainer for \"c9f775392d2655fc04f97859efe420a9b8dbe2bbdbad28bafac7694c99e6e24e\"" Feb 12 20:25:18.332260 systemd[1]: Started cri-containerd-220c7588a7f5b325370797127ad220ae03a41b58be200f162ba18ddd3f0278c2.scope. Feb 12 20:25:18.337676 systemd[1]: Started cri-containerd-c9f775392d2655fc04f97859efe420a9b8dbe2bbdbad28bafac7694c99e6e24e.scope. 
Feb 12 20:25:18.361907 env[1118]: time="2024-02-12T20:25:18.361825144Z" level=info msg="StartContainer for \"6d1582ace77d7d57e0ef0f40c5030e987c19bfee9637c530a4ac915327b8bb0e\" returns successfully" Feb 12 20:25:18.391043 env[1118]: time="2024-02-12T20:25:18.390996851Z" level=info msg="StartContainer for \"c9f775392d2655fc04f97859efe420a9b8dbe2bbdbad28bafac7694c99e6e24e\" returns successfully" Feb 12 20:25:18.392344 env[1118]: time="2024-02-12T20:25:18.392308882Z" level=info msg="StartContainer for \"220c7588a7f5b325370797127ad220ae03a41b58be200f162ba18ddd3f0278c2\" returns successfully" Feb 12 20:25:18.605137 kubelet[1605]: I0212 20:25:18.604991 1605 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:25:19.259284 kubelet[1605]: E0212 20:25:19.258872 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:19.260993 kubelet[1605]: E0212 20:25:19.260935 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:19.262626 kubelet[1605]: E0212 20:25:19.262585 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:19.649181 kubelet[1605]: E0212 20:25:19.649051 1605 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 12 20:25:19.729960 kubelet[1605]: I0212 20:25:19.729905 1605 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 20:25:19.737436 kubelet[1605]: E0212 20:25:19.737410 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:19.830910 kubelet[1605]: E0212 20:25:19.830811 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e09dafa3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 93793699, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 93793699, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:25:19.838031 kubelet[1605]: E0212 20:25:19.837979 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:19.884551 kubelet[1605]: E0212 20:25:19.884425 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e0cfa55e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 97067870, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 97067870, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 20:25:19.938214 kubelet[1605]: E0212 20:25:19.938080 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:19.938370 kubelet[1605]: E0212 20:25:19.938170 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e20c58af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117823151, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117823151, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:25:19.991636 kubelet[1605]: E0212 20:25:19.991546 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e20c8a13", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117835795, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117835795, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 20:25:20.039066 kubelet[1605]: E0212 20:25:20.039031 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:20.045155 kubelet[1605]: E0212 20:25:20.045096 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e20c98a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117839522, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117839522, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:25:20.100061 kubelet[1605]: E0212 20:25:20.099961 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e20c58af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117823151, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 199924400, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 20:25:20.140114 kubelet[1605]: E0212 20:25:20.140087 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:20.154890 kubelet[1605]: E0212 20:25:20.154815 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e20c8a13", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117835795, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 199934509, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:25:20.209535 kubelet[1605]: E0212 20:25:20.209431 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e20c98a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117839522, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 199940350, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 20:25:20.240782 kubelet[1605]: E0212 20:25:20.240751 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:20.261994 kubelet[1605]: E0212 20:25:20.261906 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753eac06705", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 263841029, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 263841029, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:25:20.264105 kubelet[1605]: E0212 20:25:20.264082 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:20.264105 kubelet[1605]: E0212 20:25:20.264087 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:20.264575 kubelet[1605]: E0212 20:25:20.264551 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:20.341582 kubelet[1605]: E0212 20:25:20.341557 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:20.442425 kubelet[1605]: E0212 20:25:20.442382 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:20.542841 kubelet[1605]: E0212 20:25:20.542712 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:20.630493 kubelet[1605]: E0212 20:25:20.630394 1605 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33753e20c58af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 117823151, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 17, 355070940, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:25:20.643500 kubelet[1605]: E0212 20:25:20.643465 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:20.743991 kubelet[1605]: E0212 20:25:20.743944 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:20.844342 kubelet[1605]: E0212 20:25:20.844234 1605 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:21.096352 kubelet[1605]: I0212 20:25:21.096228 1605 apiserver.go:52] "Watching apiserver" Feb 12 20:25:21.098438 kubelet[1605]: I0212 20:25:21.098410 1605 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:25:21.121679 kubelet[1605]: I0212 20:25:21.121644 1605 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:25:21.299256 kubelet[1605]: E0212 20:25:21.299221 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:22.265917 kubelet[1605]: E0212 20:25:22.265892 1605 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:22.366228 systemd[1]: Reloading. Feb 12 20:25:22.427537 /usr/lib/systemd/system-generators/torcx-generator[1937]: time="2024-02-12T20:25:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:25:22.427573 /usr/lib/systemd/system-generators/torcx-generator[1937]: time="2024-02-12T20:25:22Z" level=info msg="torcx already run" Feb 12 20:25:22.489999 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:25:22.490014 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:25:22.506529 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:25:22.594621 kubelet[1605]: I0212 20:25:22.594514 1605 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:25:22.594782 systemd[1]: Stopping kubelet.service... Feb 12 20:25:22.611689 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 20:25:22.611885 systemd[1]: Stopped kubelet.service. Feb 12 20:25:22.613489 systemd[1]: Started kubelet.service. Feb 12 20:25:22.664629 kubelet[1978]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:25:22.664629 kubelet[1978]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 20:25:22.664981 kubelet[1978]: I0212 20:25:22.664652 1978 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:25:22.665794 kubelet[1978]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:25:22.665794 kubelet[1978]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:25:22.668481 kubelet[1978]: I0212 20:25:22.668460 1978 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:25:22.668544 kubelet[1978]: I0212 20:25:22.668480 1978 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:25:22.668725 kubelet[1978]: I0212 20:25:22.668713 1978 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:25:22.670149 kubelet[1978]: I0212 20:25:22.670134 1978 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 20:25:22.670992 kubelet[1978]: I0212 20:25:22.670977 1978 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:25:22.676365 kubelet[1978]: I0212 20:25:22.676339 1978 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 20:25:22.676569 kubelet[1978]: I0212 20:25:22.676546 1978 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:25:22.676648 kubelet[1978]: I0212 20:25:22.676628 1978 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:25:22.676749 kubelet[1978]: I0212 20:25:22.676660 1978 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:25:22.676749 kubelet[1978]: I0212 20:25:22.676673 1978 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:25:22.676749 kubelet[1978]: I0212 20:25:22.676708 1978 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 20:25:22.684559 kubelet[1978]: I0212 20:25:22.684530 1978 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:25:22.684559 kubelet[1978]: I0212 20:25:22.684559 1978 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:25:22.684731 kubelet[1978]: I0212 20:25:22.684584 1978 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:25:22.684731 kubelet[1978]: I0212 20:25:22.684599 1978 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:25:22.685962 kubelet[1978]: I0212 20:25:22.685931 1978 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:25:22.686445 kubelet[1978]: I0212 20:25:22.686432 1978 server.go:1186] "Started kubelet" Feb 12 20:25:22.687904 kubelet[1978]: E0212 20:25:22.687888 1978 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:25:22.687996 kubelet[1978]: E0212 20:25:22.687908 1978 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:25:22.688202 kubelet[1978]: I0212 20:25:22.688180 1978 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:25:22.688710 kubelet[1978]: I0212 20:25:22.688691 1978 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:25:22.689505 kubelet[1978]: I0212 20:25:22.689458 1978 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:25:22.691954 kubelet[1978]: E0212 20:25:22.691433 1978 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:25:22.691954 kubelet[1978]: I0212 20:25:22.691454 1978 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:25:22.691954 kubelet[1978]: I0212 20:25:22.691504 1978 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:25:22.720455 kubelet[1978]: I0212 20:25:22.720421 1978 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:25:22.732197 kubelet[1978]: I0212 20:25:22.732168 1978 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 20:25:22.732197 kubelet[1978]: I0212 20:25:22.732195 1978 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:25:22.732467 kubelet[1978]: I0212 20:25:22.732211 1978 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:25:22.732467 kubelet[1978]: E0212 20:25:22.732265 1978 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 20:25:22.744048 kubelet[1978]: I0212 20:25:22.744022 1978 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:25:22.744268 kubelet[1978]: I0212 20:25:22.744238 1978 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:25:22.744391 kubelet[1978]: I0212 20:25:22.744377 1978 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:25:22.744601 kubelet[1978]: I0212 20:25:22.744589 1978 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 20:25:22.744685 kubelet[1978]: I0212 20:25:22.744670 1978 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 20:25:22.744763 kubelet[1978]: I0212 20:25:22.744750 1978 policy_none.go:49] "None policy: Start" Feb 12 20:25:22.745534 kubelet[1978]: I0212 20:25:22.745524 1978 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:25:22.745635 kubelet[1978]: I0212 20:25:22.745626 1978 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:25:22.745804 kubelet[1978]: I0212 20:25:22.745792 1978 state_mem.go:75] "Updated machine memory state" Feb 12 20:25:22.749669 kubelet[1978]: I0212 20:25:22.749652 1978 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:25:22.749968 kubelet[1978]: I0212 20:25:22.749957 1978 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:25:22.795406 kubelet[1978]: I0212 20:25:22.795377 1978 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:25:22.800290 kubelet[1978]: I0212 20:25:22.800272 1978 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 12 20:25:22.800628 kubelet[1978]: I0212 20:25:22.800592 1978 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 20:25:22.832640 kubelet[1978]: I0212 20:25:22.832609 1978 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:22.832806 kubelet[1978]: I0212 20:25:22.832681 1978 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:22.832806 kubelet[1978]: I0212 20:25:22.832709 1978 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:22.837665 kubelet[1978]: E0212 20:25:22.837638 1978 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 12 20:25:22.993714 kubelet[1978]: I0212 20:25:22.993038 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 12 20:25:22.993714 kubelet[1978]: I0212 20:25:22.993080 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7803bc06b5ffe9930c11cecbe884c72-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"f7803bc06b5ffe9930c11cecbe884c72\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:25:22.993714 kubelet[1978]: I0212 20:25:22.993109 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:22.993714 kubelet[1978]: I0212 20:25:22.993191 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:22.993714 kubelet[1978]: I0212 20:25:22.993227 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:22.994023 kubelet[1978]: I0212 20:25:22.993259 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:22.994023 kubelet[1978]: I0212 20:25:22.993290 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7803bc06b5ffe9930c11cecbe884c72-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7803bc06b5ffe9930c11cecbe884c72\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:25:22.994023 kubelet[1978]: I0212 20:25:22.993330 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7803bc06b5ffe9930c11cecbe884c72-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f7803bc06b5ffe9930c11cecbe884c72\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:25:22.994023 kubelet[1978]: I0212 20:25:22.993385 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:25:23.137916 kubelet[1978]: E0212 20:25:23.137874 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:23.138621 kubelet[1978]: E0212 20:25:23.138597 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:23.190090 kubelet[1978]: E0212 20:25:23.190055 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:23.685872 kubelet[1978]: I0212 20:25:23.685820 1978 apiserver.go:52] "Watching apiserver" Feb 12 20:25:23.692675 kubelet[1978]: I0212 20:25:23.692644 1978 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:25:23.698821 kubelet[1978]: I0212 20:25:23.698796 1978 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:25:23.740178 kubelet[1978]: E0212 20:25:23.740144 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:23.924231 kubelet[1978]: E0212 20:25:23.924182 1978 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 12 20:25:23.924633 kubelet[1978]: E0212 20:25:23.924601 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:24.272230 kubelet[1978]: E0212 20:25:24.272201 1978 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 12 20:25:24.272505 kubelet[1978]: E0212 20:25:24.272479 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:24.311965 sudo[1208]: pam_unix(sudo:session): session closed for user root Feb 12 20:25:24.313396 sshd[1205]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:24.315378 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:35558.service: Deactivated successfully. Feb 12 20:25:24.315997 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:25:24.316136 systemd[1]: session-5.scope: Consumed 2.911s CPU time. Feb 12 20:25:24.316562 systemd-logind[1106]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:25:24.317172 systemd-logind[1106]: Removed session 5. 
Feb 12 20:25:24.501760 kubelet[1978]: I0212 20:25:24.501663 1978 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.501439635 pod.CreationTimestamp="2024-02-12 20:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:24.500392892 +0000 UTC m=+1.883332314" watchObservedRunningTime="2024-02-12 20:25:24.501439635 +0000 UTC m=+1.884379949" Feb 12 20:25:24.741288 kubelet[1978]: E0212 20:25:24.741187 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:24.741626 kubelet[1978]: E0212 20:25:24.741568 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:25.290199 kubelet[1978]: I0212 20:25:25.290157 1978 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.290118692 pod.CreationTimestamp="2024-02-12 20:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:24.891072706 +0000 UTC m=+2.274012118" watchObservedRunningTime="2024-02-12 20:25:25.290118692 +0000 UTC m=+2.673058104" Feb 12 20:25:25.743337 kubelet[1978]: E0212 20:25:25.743217 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:25.843898 kubelet[1978]: E0212 20:25:25.843869 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:26.744516 kubelet[1978]: E0212 20:25:26.744480 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:31.753986 kubelet[1978]: E0212 20:25:31.753955 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:31.906187 kubelet[1978]: I0212 20:25:31.906141 1978 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.906096951 pod.CreationTimestamp="2024-02-12 20:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:25.290088806 +0000 UTC m=+2.673028238" watchObservedRunningTime="2024-02-12 20:25:31.906096951 +0000 UTC m=+9.289036393" Feb 12 20:25:32.751978 kubelet[1978]: E0212 20:25:32.751949 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:35.849519 kubelet[1978]: E0212 20:25:35.849470 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:36.476587 kubelet[1978]: E0212 20:25:36.476550 1978 
dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:36.757362 kubelet[1978]: E0212 20:25:36.757236 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:36.857751 kubelet[1978]: I0212 20:25:36.857709 1978 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 20:25:36.858192 env[1118]: time="2024-02-12T20:25:36.858150836Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 20:25:36.858441 kubelet[1978]: I0212 20:25:36.858426 1978 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 20:25:37.969177 kubelet[1978]: I0212 20:25:37.969125 1978 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:37.969592 kubelet[1978]: I0212 20:25:37.969241 1978 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:37.974057 systemd[1]: Created slice kubepods-burstable-podad3ff782_7876_43d4_8cd1_d1919b2b1e45.slice. Feb 12 20:25:37.979453 systemd[1]: Created slice kubepods-besteffort-pod02aed8b6_c4c2_49a9_a57e_2b655ebe8243.slice. Feb 12 20:25:37.998019 kubelet[1978]: I0212 20:25:37.997984 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02aed8b6-c4c2-49a9-a57e-2b655ebe8243-lib-modules\") pod \"kube-proxy-czk2h\" (UID: \"02aed8b6-c4c2-49a9-a57e-2b655ebe8243\") " pod="kube-system/kube-proxy-czk2h" Feb 12 20:25:37.998019 kubelet[1978]: I0212 20:25:37.998024 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/ad3ff782-7876-43d4-8cd1-d1919b2b1e45-cni-plugin\") pod \"kube-flannel-ds-wgp4x\" (UID: \"ad3ff782-7876-43d4-8cd1-d1919b2b1e45\") " pod="kube-flannel/kube-flannel-ds-wgp4x" Feb 12 20:25:37.998262 kubelet[1978]: I0212 20:25:37.998050 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k95l\" (UniqueName: \"kubernetes.io/projected/ad3ff782-7876-43d4-8cd1-d1919b2b1e45-kube-api-access-2k95l\") pod \"kube-flannel-ds-wgp4x\" (UID: \"ad3ff782-7876-43d4-8cd1-d1919b2b1e45\") " pod="kube-flannel/kube-flannel-ds-wgp4x" Feb 12 20:25:37.998262 kubelet[1978]: I0212 20:25:37.998073 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb7fd\" (UniqueName: \"kubernetes.io/projected/02aed8b6-c4c2-49a9-a57e-2b655ebe8243-kube-api-access-cb7fd\") pod \"kube-proxy-czk2h\" (UID: \"02aed8b6-c4c2-49a9-a57e-2b655ebe8243\") " pod="kube-system/kube-proxy-czk2h" Feb 12 20:25:37.998262 kubelet[1978]: I0212 20:25:37.998138 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ad3ff782-7876-43d4-8cd1-d1919b2b1e45-run\") pod \"kube-flannel-ds-wgp4x\" (UID: \"ad3ff782-7876-43d4-8cd1-d1919b2b1e45\") " pod="kube-flannel/kube-flannel-ds-wgp4x" Feb 12 20:25:37.998262 kubelet[1978]: I0212 20:25:37.998188 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ad3ff782-7876-43d4-8cd1-d1919b2b1e45-xtables-lock\") pod \"kube-flannel-ds-wgp4x\" (UID: \"ad3ff782-7876-43d4-8cd1-d1919b2b1e45\") " pod="kube-flannel/kube-flannel-ds-wgp4x" Feb 12 20:25:37.998262 kubelet[1978]: I0212 20:25:37.998213 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/ad3ff782-7876-43d4-8cd1-d1919b2b1e45-cni\") pod \"kube-flannel-ds-wgp4x\" (UID: \"ad3ff782-7876-43d4-8cd1-d1919b2b1e45\") " pod="kube-flannel/kube-flannel-ds-wgp4x" Feb 12 20:25:37.998477 kubelet[1978]: I0212 20:25:37.998232 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ad3ff782-7876-43d4-8cd1-d1919b2b1e45-flannel-cfg\") pod \"kube-flannel-ds-wgp4x\" (UID: \"ad3ff782-7876-43d4-8cd1-d1919b2b1e45\") " pod="kube-flannel/kube-flannel-ds-wgp4x" Feb 12 20:25:37.998477 kubelet[1978]: I0212 20:25:37.998330 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02aed8b6-c4c2-49a9-a57e-2b655ebe8243-kube-proxy\") pod \"kube-proxy-czk2h\" (UID: \"02aed8b6-c4c2-49a9-a57e-2b655ebe8243\") " pod="kube-system/kube-proxy-czk2h" Feb 12 20:25:37.998477 kubelet[1978]: I0212 20:25:37.998385 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02aed8b6-c4c2-49a9-a57e-2b655ebe8243-xtables-lock\") pod \"kube-proxy-czk2h\" (UID: \"02aed8b6-c4c2-49a9-a57e-2b655ebe8243\") " pod="kube-system/kube-proxy-czk2h" Feb 12 20:25:38.277199 kubelet[1978]: E0212 20:25:38.276503 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:38.277474 env[1118]: time="2024-02-12T20:25:38.277208221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wgp4x,Uid:ad3ff782-7876-43d4-8cd1-d1919b2b1e45,Namespace:kube-flannel,Attempt:0,}" Feb 12 20:25:38.287605 kubelet[1978]: E0212 20:25:38.287585 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:38.287929 env[1118]: time="2024-02-12T20:25:38.287879550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czk2h,Uid:02aed8b6-c4c2-49a9-a57e-2b655ebe8243,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:38.649837 env[1118]: time="2024-02-12T20:25:38.649769490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:38.650016 env[1118]: time="2024-02-12T20:25:38.649812642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:38.650016 env[1118]: time="2024-02-12T20:25:38.649829254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:38.650016 env[1118]: time="2024-02-12T20:25:38.649789969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:38.650016 env[1118]: time="2024-02-12T20:25:38.649863669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:38.650016 env[1118]: time="2024-02-12T20:25:38.649877866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:38.650364 env[1118]: time="2024-02-12T20:25:38.650109226Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e932cd35bcb400156ebd57e4ee06d3f308f1a73f7986f350b7be2021c0345674 pid=2077 runtime=io.containerd.runc.v2 Feb 12 20:25:38.650364 env[1118]: time="2024-02-12T20:25:38.650015327Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5894ba566fa5655701230452bb96934eb6b372ddcb4209079acfe8a72726873 pid=2081 runtime=io.containerd.runc.v2 Feb 12 20:25:38.664048 systemd[1]: Started cri-containerd-b5894ba566fa5655701230452bb96934eb6b372ddcb4209079acfe8a72726873.scope. Feb 12 20:25:38.666898 systemd[1]: Started cri-containerd-e932cd35bcb400156ebd57e4ee06d3f308f1a73f7986f350b7be2021c0345674.scope. Feb 12 20:25:38.687484 env[1118]: time="2024-02-12T20:25:38.687448376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czk2h,Uid:02aed8b6-c4c2-49a9-a57e-2b655ebe8243,Namespace:kube-system,Attempt:0,} returns sandbox id \"e932cd35bcb400156ebd57e4ee06d3f308f1a73f7986f350b7be2021c0345674\"" Feb 12 20:25:38.688236 kubelet[1978]: E0212 20:25:38.688219 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:38.690220 env[1118]: time="2024-02-12T20:25:38.689901740Z" level=info msg="CreateContainer within sandbox \"e932cd35bcb400156ebd57e4ee06d3f308f1a73f7986f350b7be2021c0345674\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:25:38.704378 env[1118]: time="2024-02-12T20:25:38.704329530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wgp4x,Uid:ad3ff782-7876-43d4-8cd1-d1919b2b1e45,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b5894ba566fa5655701230452bb96934eb6b372ddcb4209079acfe8a72726873\"" Feb 12 20:25:38.705029 kubelet[1978]: E0212 20:25:38.705010 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:38.706077 env[1118]: time="2024-02-12T20:25:38.706051384Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\"" Feb 12 20:25:38.721980 env[1118]: time="2024-02-12T20:25:38.721948548Z" level=info msg="CreateContainer within sandbox \"e932cd35bcb400156ebd57e4ee06d3f308f1a73f7986f350b7be2021c0345674\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d43dd90a64e0adfb1ea39d52ccb4a0b3831e7750fd694440a4d8183532c525bf\"" Feb 12 20:25:38.722535 env[1118]: time="2024-02-12T20:25:38.722518001Z" level=info msg="StartContainer for \"d43dd90a64e0adfb1ea39d52ccb4a0b3831e7750fd694440a4d8183532c525bf\"" Feb 12 20:25:38.735339 systemd[1]: Started cri-containerd-d43dd90a64e0adfb1ea39d52ccb4a0b3831e7750fd694440a4d8183532c525bf.scope. 
Feb 12 20:25:38.764983 env[1118]: time="2024-02-12T20:25:38.764936817Z" level=info msg="StartContainer for \"d43dd90a64e0adfb1ea39d52ccb4a0b3831e7750fd694440a4d8183532c525bf\" returns successfully" Feb 12 20:25:39.764556 kubelet[1978]: E0212 20:25:39.764529 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:39.772394 kubelet[1978]: I0212 20:25:39.771783 1978 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-czk2h" podStartSLOduration=2.771724496 pod.CreationTimestamp="2024-02-12 20:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:39.771459322 +0000 UTC m=+17.154398754" watchObservedRunningTime="2024-02-12 20:25:39.771724496 +0000 UTC m=+17.154663908" Feb 12 20:25:40.470032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119151739.mount: Deactivated successfully. Feb 12 20:25:40.517877 env[1118]: time="2024-02-12T20:25:40.517829181Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:40.519307 env[1118]: time="2024-02-12T20:25:40.519250960Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcecffc7ad4af70c8b436d45688771e0562cbd20f55d98581ba22cf13aad360d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:40.520639 env[1118]: time="2024-02-12T20:25:40.520614729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:40.521769 env[1118]: time="2024-02-12T20:25:40.521749302Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:40.522124 env[1118]: time="2024-02-12T20:25:40.522104096Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\" returns image reference \"sha256:fcecffc7ad4af70c8b436d45688771e0562cbd20f55d98581ba22cf13aad360d\"" Feb 12 20:25:40.524855 env[1118]: time="2024-02-12T20:25:40.524805754Z" level=info msg="CreateContainer within sandbox \"b5894ba566fa5655701230452bb96934eb6b372ddcb4209079acfe8a72726873\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 12 20:25:40.536175 env[1118]: time="2024-02-12T20:25:40.536119699Z" level=info msg="CreateContainer within sandbox \"b5894ba566fa5655701230452bb96934eb6b372ddcb4209079acfe8a72726873\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"fa1ee27d48b36c752c1af05271dd04e828da854fc3f45446f3484d8c8f022b86\"" Feb 12 20:25:40.536545 env[1118]: time="2024-02-12T20:25:40.536459233Z" level=info msg="StartContainer for \"fa1ee27d48b36c752c1af05271dd04e828da854fc3f45446f3484d8c8f022b86\"" Feb 12 20:25:40.549975 systemd[1]: Started cri-containerd-fa1ee27d48b36c752c1af05271dd04e828da854fc3f45446f3484d8c8f022b86.scope. 
Feb 12 20:25:40.573229 env[1118]: time="2024-02-12T20:25:40.572481007Z" level=info msg="StartContainer for \"fa1ee27d48b36c752c1af05271dd04e828da854fc3f45446f3484d8c8f022b86\" returns successfully" Feb 12 20:25:40.572655 systemd[1]: cri-containerd-fa1ee27d48b36c752c1af05271dd04e828da854fc3f45446f3484d8c8f022b86.scope: Deactivated successfully. Feb 12 20:25:40.652843 env[1118]: time="2024-02-12T20:25:40.652782930Z" level=info msg="shim disconnected" id=fa1ee27d48b36c752c1af05271dd04e828da854fc3f45446f3484d8c8f022b86 Feb 12 20:25:40.652843 env[1118]: time="2024-02-12T20:25:40.652839177Z" level=warning msg="cleaning up after shim disconnected" id=fa1ee27d48b36c752c1af05271dd04e828da854fc3f45446f3484d8c8f022b86 namespace=k8s.io Feb 12 20:25:40.653068 env[1118]: time="2024-02-12T20:25:40.652851842Z" level=info msg="cleaning up dead shim" Feb 12 20:25:40.659138 env[1118]: time="2024-02-12T20:25:40.659102315Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2331 runtime=io.containerd.runc.v2\n" Feb 12 20:25:40.767102 kubelet[1978]: E0212 20:25:40.766991 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:40.767102 kubelet[1978]: E0212 20:25:40.767006 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:40.767847 env[1118]: time="2024-02-12T20:25:40.767803093Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\"" Feb 12 20:25:41.025370 update_engine[1107]: I0212 20:25:41.025223 1107 update_attempter.cc:509] Updating boot flags... Feb 12 20:25:41.386354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa1ee27d48b36c752c1af05271dd04e828da854fc3f45446f3484d8c8f022b86-rootfs.mount: Deactivated successfully. Feb 12 20:25:42.550233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066617323.mount: Deactivated successfully. 
Feb 12 20:25:44.201442 env[1118]: time="2024-02-12T20:25:44.201380386Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:44.202974 env[1118]: time="2024-02-12T20:25:44.202947302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b5c6c9203f83e9a48e9d0b0fb7a38196c8412f458953ca98a4feac3515c6abb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:44.204625 env[1118]: time="2024-02-12T20:25:44.204580854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:44.205970 env[1118]: time="2024-02-12T20:25:44.205941841Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:44.206648 env[1118]: time="2024-02-12T20:25:44.206601469Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\" returns image reference \"sha256:b5c6c9203f83e9a48e9d0b0fb7a38196c8412f458953ca98a4feac3515c6abb1\"" Feb 12 20:25:44.208202 env[1118]: time="2024-02-12T20:25:44.208166983Z" level=info msg="CreateContainer within sandbox \"b5894ba566fa5655701230452bb96934eb6b372ddcb4209079acfe8a72726873\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 20:25:44.218948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977064378.mount: Deactivated successfully. Feb 12 20:25:44.219567 env[1118]: time="2024-02-12T20:25:44.219534172Z" level=info msg="CreateContainer within sandbox \"b5894ba566fa5655701230452bb96934eb6b372ddcb4209079acfe8a72726873\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"adfd03688e37cebb62b0ce5d28ff0dc3f7945f010ec1d984313dde4a8589b83c\"" Feb 12 20:25:44.220028 env[1118]: time="2024-02-12T20:25:44.220002809Z" level=info msg="StartContainer for \"adfd03688e37cebb62b0ce5d28ff0dc3f7945f010ec1d984313dde4a8589b83c\"" Feb 12 20:25:44.233126 systemd[1]: Started cri-containerd-adfd03688e37cebb62b0ce5d28ff0dc3f7945f010ec1d984313dde4a8589b83c.scope. Feb 12 20:25:44.251958 systemd[1]: cri-containerd-adfd03688e37cebb62b0ce5d28ff0dc3f7945f010ec1d984313dde4a8589b83c.scope: Deactivated successfully. Feb 12 20:25:44.253116 env[1118]: time="2024-02-12T20:25:44.253077179Z" level=info msg="StartContainer for \"adfd03688e37cebb62b0ce5d28ff0dc3f7945f010ec1d984313dde4a8589b83c\" returns successfully" Feb 12 20:25:44.266335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adfd03688e37cebb62b0ce5d28ff0dc3f7945f010ec1d984313dde4a8589b83c-rootfs.mount: Deactivated successfully. Feb 12 20:25:44.310611 kubelet[1978]: I0212 20:25:44.310578 1978 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:25:44.402146 kubelet[1978]: I0212 20:25:44.402096 1978 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:44.402352 kubelet[1978]: I0212 20:25:44.402264 1978 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:44.407992 systemd[1]: Created slice kubepods-burstable-podef5b9631_0628_451c_9c2e_f8f3790f5e5e.slice. 
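For context on the two init containers that just ran: the stock kube-flannel v0.20.2 DaemonSet uses install-cni-plugin to copy the flannel CNI binary into /opt/cni/bin and install-cni to copy the CNI network list into /etc/cni/net.d/10-flannel.conflist. The file actually written on this node is not captured in the log, but the upstream kube-flannel ConfigMap ships the following content, which is consistent with the delegate configuration containerd prints later (hairpinMode and isDefaultGateway true, network name cbr0):

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }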
Feb 12 20:25:44.410806 env[1118]: time="2024-02-12T20:25:44.410744498Z" level=info msg="shim disconnected" id=adfd03688e37cebb62b0ce5d28ff0dc3f7945f010ec1d984313dde4a8589b83c Feb 12 20:25:44.410894 env[1118]: time="2024-02-12T20:25:44.410808359Z" level=warning msg="cleaning up after shim disconnected" id=adfd03688e37cebb62b0ce5d28ff0dc3f7945f010ec1d984313dde4a8589b83c namespace=k8s.io Feb 12 20:25:44.410894 env[1118]: time="2024-02-12T20:25:44.410818679Z" level=info msg="cleaning up dead shim" Feb 12 20:25:44.411656 systemd[1]: Created slice kubepods-burstable-pode6e5d8e6_e3f3_40c4_abb0_ec162773a3cc.slice. Feb 12 20:25:44.417869 env[1118]: time="2024-02-12T20:25:44.417827667Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2399 runtime=io.containerd.runc.v2\n" Feb 12 20:25:44.541772 kubelet[1978]: I0212 20:25:44.541184 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef5b9631-0628-451c-9c2e-f8f3790f5e5e-config-volume\") pod \"coredns-787d4945fb-5qjvn\" (UID: \"ef5b9631-0628-451c-9c2e-f8f3790f5e5e\") " pod="kube-system/coredns-787d4945fb-5qjvn" Feb 12 20:25:44.541772 kubelet[1978]: I0212 20:25:44.541226 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l4sr\" (UniqueName: \"kubernetes.io/projected/ef5b9631-0628-451c-9c2e-f8f3790f5e5e-kube-api-access-6l4sr\") pod \"coredns-787d4945fb-5qjvn\" (UID: \"ef5b9631-0628-451c-9c2e-f8f3790f5e5e\") " pod="kube-system/coredns-787d4945fb-5qjvn" Feb 12 20:25:44.541772 kubelet[1978]: I0212 20:25:44.541249 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qp7r\" (UniqueName: \"kubernetes.io/projected/e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc-kube-api-access-8qp7r\") pod \"coredns-787d4945fb-tdmhr\" (UID: \"e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc\") " pod="kube-system/coredns-787d4945fb-tdmhr" Feb 12 20:25:44.541772 kubelet[1978]: I0212 20:25:44.541277 1978 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc-config-volume\") pod \"coredns-787d4945fb-tdmhr\" (UID: \"e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc\") " pod="kube-system/coredns-787d4945fb-tdmhr" Feb 12 20:25:44.711529 kubelet[1978]: E0212 20:25:44.711495 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:44.712186 env[1118]: time="2024-02-12T20:25:44.712152231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5qjvn,Uid:ef5b9631-0628-451c-9c2e-f8f3790f5e5e,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:44.714972 kubelet[1978]: E0212 20:25:44.714951 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:44.715438 env[1118]: time="2024-02-12T20:25:44.715402454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-tdmhr,Uid:e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:44.741819 env[1118]: time="2024-02-12T20:25:44.741749600Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-787d4945fb-5qjvn,Uid:ef5b9631-0628-451c-9c2e-f8f3790f5e5e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6d70e7b157672a4eef428848c674399290b93ed2ecdc2363147208852414371\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 12 20:25:44.742037 kubelet[1978]: E0212 20:25:44.741999 1978 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d70e7b157672a4eef428848c674399290b93ed2ecdc2363147208852414371\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 12 20:25:44.742085 kubelet[1978]: E0212 20:25:44.742049 1978 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d70e7b157672a4eef428848c674399290b93ed2ecdc2363147208852414371\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-5qjvn" Feb 12 20:25:44.742085 kubelet[1978]: E0212 20:25:44.742069 1978 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d70e7b157672a4eef428848c674399290b93ed2ecdc2363147208852414371\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-5qjvn" Feb 12 20:25:44.742138 kubelet[1978]: E0212 20:25:44.742121 1978 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-5qjvn_kube-system(ef5b9631-0628-451c-9c2e-f8f3790f5e5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-5qjvn_kube-system(ef5b9631-0628-451c-9c2e-f8f3790f5e5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6d70e7b157672a4eef428848c674399290b93ed2ecdc2363147208852414371\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-5qjvn" podUID=ef5b9631-0628-451c-9c2e-f8f3790f5e5e Feb 12 20:25:44.744016 env[1118]: time="2024-02-12T20:25:44.743959794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-tdmhr,Uid:e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a16f59f307c729bd944841e36130ca4d675760be0417396f08919591756af8de\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 12 20:25:44.744208 kubelet[1978]: E0212 20:25:44.744187 1978 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16f59f307c729bd944841e36130ca4d675760be0417396f08919591756af8de\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 12 20:25:44.744261 kubelet[1978]: E0212 20:25:44.744236 1978 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16f59f307c729bd944841e36130ca4d675760be0417396f08919591756af8de\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-tdmhr" Feb 12 20:25:44.744261 kubelet[1978]: E0212 20:25:44.744255 1978 
kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16f59f307c729bd944841e36130ca4d675760be0417396f08919591756af8de\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-tdmhr" Feb 12 20:25:44.744346 kubelet[1978]: E0212 20:25:44.744308 1978 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-tdmhr_kube-system(e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-tdmhr_kube-system(e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a16f59f307c729bd944841e36130ca4d675760be0417396f08919591756af8de\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-tdmhr" podUID=e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc Feb 12 20:25:44.774540 kubelet[1978]: E0212 20:25:44.774491 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:44.776444 env[1118]: time="2024-02-12T20:25:44.776164958Z" level=info msg="CreateContainer within sandbox \"b5894ba566fa5655701230452bb96934eb6b372ddcb4209079acfe8a72726873\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 12 20:25:44.793022 env[1118]: time="2024-02-12T20:25:44.792915768Z" level=info msg="CreateContainer within sandbox \"b5894ba566fa5655701230452bb96934eb6b372ddcb4209079acfe8a72726873\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"92a020925fabdf950409dd5a9b069ded85e09298d8a5868b874d6f683970fe50\"" Feb 12 20:25:44.793467 env[1118]: time="2024-02-12T20:25:44.793360080Z" level=info msg="StartContainer for \"92a020925fabdf950409dd5a9b069ded85e09298d8a5868b874d6f683970fe50\"" Feb 12 20:25:44.806285 systemd[1]: Started cri-containerd-92a020925fabdf950409dd5a9b069ded85e09298d8a5868b874d6f683970fe50.scope. 
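The two RunPodSandbox failures above are expected at this point: the flannel CNI plugin reads /run/flannel/subnet.env, which only exists once the kube-flannel container started just above has obtained a subnet lease and written it. The exact file is not shown in this log; a sketch of what it typically contains, with the network, per-node subnet, and MTU inferred from the bridge configuration containerd logs at 20:25:58, and the ip-masq flag assumed from the stock kube-flannel manifest:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true

Until this file appears, every CoreDNS sandbox attempt fails with "no such file or directory", which is why both coredns pods are retried later in the log.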
Feb 12 20:25:44.829173 env[1118]: time="2024-02-12T20:25:44.829114303Z" level=info msg="StartContainer for \"92a020925fabdf950409dd5a9b069ded85e09298d8a5868b874d6f683970fe50\" returns successfully" Feb 12 20:25:45.777888 kubelet[1978]: E0212 20:25:45.777863 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:46.052304 systemd-networkd[1016]: flannel.1: Link UP Feb 12 20:25:46.052326 systemd-networkd[1016]: flannel.1: Gained carrier Feb 12 20:25:46.779925 kubelet[1978]: E0212 20:25:46.779895 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:47.235427 systemd-networkd[1016]: flannel.1: Gained IPv6LL Feb 12 20:25:58.733182 kubelet[1978]: E0212 20:25:58.733110 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:58.733649 env[1118]: time="2024-02-12T20:25:58.733602341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-tdmhr,Uid:e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:58.865072 systemd-networkd[1016]: cni0: Link UP Feb 12 20:25:58.865080 systemd-networkd[1016]: cni0: Gained carrier Feb 12 20:25:58.867664 systemd-networkd[1016]: cni0: Lost carrier Feb 12 20:25:58.880546 kernel: cni0: port 1(vethdebf5895) entered blocking state Feb 12 20:25:58.880635 kernel: cni0: port 1(vethdebf5895) entered disabled state Feb 12 20:25:58.880653 kernel: device vethdebf5895 entered promiscuous mode Feb 12 20:25:58.881848 kernel: cni0: port 1(vethdebf5895) entered blocking state Feb 12 20:25:58.881872 kernel: cni0: port 1(vethdebf5895) entered forwarding state Feb 12 20:25:58.883352 kernel: cni0: port 1(vethdebf5895) entered disabled state Feb 12 20:25:58.883689 systemd-networkd[1016]: vethdebf5895: Link UP Feb 12 20:25:58.925919 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethdebf5895: link becomes ready Feb 12 20:25:58.926036 kernel: cni0: port 1(vethdebf5895) entered blocking state Feb 12 20:25:58.926075 kernel: cni0: port 1(vethdebf5895) entered forwarding state Feb 12 20:25:58.925990 systemd-networkd[1016]: vethdebf5895: Gained carrier Feb 12 20:25:58.926196 systemd-networkd[1016]: cni0: Gained carrier Feb 12 20:25:58.930620 env[1118]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018928), "name":"cbr0", "type":"bridge"} Feb 12 20:25:58.960773 env[1118]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-12T20:25:58.960705234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:58.960773 env[1118]: time="2024-02-12T20:25:58.960745560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:58.960773 env[1118]: time="2024-02-12T20:25:58.960755059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:58.961035 env[1118]: time="2024-02-12T20:25:58.960875575Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a380f36fed26b130f4a43a70405f02a017bd58b5c59728efe9967da122c9396d pid=2669 runtime=io.containerd.runc.v2 Feb 12 20:25:58.974033 systemd[1]: Started cri-containerd-a380f36fed26b130f4a43a70405f02a017bd58b5c59728efe9967da122c9396d.scope. Feb 12 20:25:58.985942 systemd-resolved[1059]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:25:59.006187 env[1118]: time="2024-02-12T20:25:59.006141847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-tdmhr,Uid:e6e5d8e6-e3f3-40c4-abb0-ec162773a3cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a380f36fed26b130f4a43a70405f02a017bd58b5c59728efe9967da122c9396d\"" Feb 12 20:25:59.006820 kubelet[1978]: E0212 20:25:59.006804 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:59.008830 env[1118]: time="2024-02-12T20:25:59.008746199Z" level=info msg="CreateContainer within sandbox \"a380f36fed26b130f4a43a70405f02a017bd58b5c59728efe9967da122c9396d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:25:59.314969 env[1118]: time="2024-02-12T20:25:59.314826851Z" level=info msg="CreateContainer within sandbox \"a380f36fed26b130f4a43a70405f02a017bd58b5c59728efe9967da122c9396d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"944b6fdab74169db65d18f17111e9cb200f1c333557ed78d7551ab601b988e53\"" Feb 12 20:25:59.315610 env[1118]: time="2024-02-12T20:25:59.315570931Z" level=info msg="StartContainer for \"944b6fdab74169db65d18f17111e9cb200f1c333557ed78d7551ab601b988e53\"" Feb 12 20:25:59.328704 systemd[1]: Started cri-containerd-944b6fdab74169db65d18f17111e9cb200f1c333557ed78d7551ab601b988e53.scope. 
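The configuration blob containerd prints at 20:25:58 (the Go map followed by its JSON form) is the delegate that the flannel CNI plugin hands to the bridge plugin for the cbr0 network. Reformatted for readability, with the field values taken directly from that log entry:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "hairpinMode": true,
      "isGateway": true,
      "isDefaultGateway": true,
      "ipMasq": false,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "ranges": [ [ { "subnet": "192.168.0.0/24" } ] ],
        "routes": [ { "dst": "192.168.0.0/17" } ]
      }
    }

The same delegate is logged again at 20:26:00 when the second CoreDNS sandbox (veth12cef497) is created.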
Feb 12 20:25:59.410784 env[1118]: time="2024-02-12T20:25:59.410701736Z" level=info msg="StartContainer for \"944b6fdab74169db65d18f17111e9cb200f1c333557ed78d7551ab601b988e53\" returns successfully" Feb 12 20:25:59.733577 kubelet[1978]: E0212 20:25:59.733525 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:59.734199 env[1118]: time="2024-02-12T20:25:59.733995798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5qjvn,Uid:ef5b9631-0628-451c-9c2e-f8f3790f5e5e,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:59.804445 kubelet[1978]: E0212 20:25:59.804418 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:59.832006 kubelet[1978]: I0212 20:25:59.831902 1978 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-wgp4x" podStartSLOduration=-9.223372014022913e+09 pod.CreationTimestamp="2024-02-12 20:25:37 +0000 UTC" firstStartedPulling="2024-02-12 20:25:38.705526677 +0000 UTC m=+16.088466099" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:45.785820907 +0000 UTC m=+23.168760329" watchObservedRunningTime="2024-02-12 20:25:59.831862982 +0000 UTC m=+37.214802404" Feb 12 20:25:59.832214 kubelet[1978]: I0212 20:25:59.832049 1978 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-tdmhr" podStartSLOduration=22.832029666 pod.CreationTimestamp="2024-02-12 20:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:59.831496402 +0000 UTC m=+37.214435824" watchObservedRunningTime="2024-02-12 20:25:59.832029666 +0000 UTC m=+37.214969088" Feb 12 20:25:59.836503 systemd[1]: run-containerd-runc-k8s.io-a380f36fed26b130f4a43a70405f02a017bd58b5c59728efe9967da122c9396d-runc.BgoSiF.mount: Deactivated successfully. 
Feb 12 20:26:00.100450 systemd-networkd[1016]: vethdebf5895: Gained IPv6LL Feb 12 20:26:00.144899 systemd-networkd[1016]: veth12cef497: Link UP Feb 12 20:26:00.146374 kernel: cni0: port 2(veth12cef497) entered blocking state Feb 12 20:26:00.146478 kernel: cni0: port 2(veth12cef497) entered disabled state Feb 12 20:26:00.146496 kernel: device veth12cef497 entered promiscuous mode Feb 12 20:26:00.151190 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:26:00.151258 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth12cef497: link becomes ready Feb 12 20:26:00.151285 kernel: cni0: port 2(veth12cef497) entered blocking state Feb 12 20:26:00.151724 kernel: cni0: port 2(veth12cef497) entered forwarding state Feb 12 20:26:00.152787 systemd-networkd[1016]: veth12cef497: Gained carrier Feb 12 20:26:00.154228 env[1118]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000020928), "name":"cbr0", "type":"bridge"} Feb 12 20:26:00.243900 env[1118]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-12T20:26:00.243825045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:00.243900 env[1118]: time="2024-02-12T20:26:00.243872353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:00.243900 env[1118]: time="2024-02-12T20:26:00.243885147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:00.244092 env[1118]: time="2024-02-12T20:26:00.244025151Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9153a01972161e94210e6d071623da26df499fdf941421dc5bfdbef5a4fb258e pid=2835 runtime=io.containerd.runc.v2 Feb 12 20:26:00.260219 systemd[1]: Started cri-containerd-9153a01972161e94210e6d071623da26df499fdf941421dc5bfdbef5a4fb258e.scope. 
Feb 12 20:26:00.273476 systemd-resolved[1059]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:26:00.296069 env[1118]: time="2024-02-12T20:26:00.296024417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5qjvn,Uid:ef5b9631-0628-451c-9c2e-f8f3790f5e5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9153a01972161e94210e6d071623da26df499fdf941421dc5bfdbef5a4fb258e\"" Feb 12 20:26:00.296930 kubelet[1978]: E0212 20:26:00.296759 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:00.298586 env[1118]: time="2024-02-12T20:26:00.298522808Z" level=info msg="CreateContainer within sandbox \"9153a01972161e94210e6d071623da26df499fdf941421dc5bfdbef5a4fb258e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:26:00.440793 env[1118]: time="2024-02-12T20:26:00.440688929Z" level=info msg="CreateContainer within sandbox \"9153a01972161e94210e6d071623da26df499fdf941421dc5bfdbef5a4fb258e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a78731a7e96ee205c598ff9415dae63e23930e90ca581d045585a3967f52d4ca\"" Feb 12 20:26:00.441296 env[1118]: time="2024-02-12T20:26:00.441244434Z" level=info msg="StartContainer for \"a78731a7e96ee205c598ff9415dae63e23930e90ca581d045585a3967f52d4ca\"" Feb 12 20:26:00.456148 systemd[1]: Started cri-containerd-a78731a7e96ee205c598ff9415dae63e23930e90ca581d045585a3967f52d4ca.scope. Feb 12 20:26:00.526604 env[1118]: time="2024-02-12T20:26:00.526539689Z" level=info msg="StartContainer for \"a78731a7e96ee205c598ff9415dae63e23930e90ca581d045585a3967f52d4ca\" returns successfully" Feb 12 20:26:00.547456 systemd-networkd[1016]: cni0: Gained IPv6LL Feb 12 20:26:00.807587 kubelet[1978]: E0212 20:26:00.807448 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:00.807587 kubelet[1978]: E0212 20:26:00.807548 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:00.827535 kubelet[1978]: I0212 20:26:00.827481 1978 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-5qjvn" podStartSLOduration=23.827444335 pod.CreationTimestamp="2024-02-12 20:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:00.82740976 +0000 UTC m=+38.210349212" watchObservedRunningTime="2024-02-12 20:26:00.827444335 +0000 UTC m=+38.210383757" Feb 12 20:26:00.836004 systemd[1]: run-containerd-runc-k8s.io-9153a01972161e94210e6d071623da26df499fdf941421dc5bfdbef5a4fb258e-runc.PuJQMR.mount: Deactivated successfully. 
Feb 12 20:26:01.808640 kubelet[1978]: E0212 20:26:01.808610 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:01.808977 kubelet[1978]: E0212 20:26:01.808744 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:01.955483 systemd-networkd[1016]: veth12cef497: Gained IPv6LL Feb 12 20:26:02.810205 kubelet[1978]: E0212 20:26:02.810167 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:03.171641 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:48168.service. Feb 12 20:26:03.215606 sshd[2978]: Accepted publickey for core from 10.0.0.1 port 48168 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:03.216813 sshd[2978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:03.220169 systemd-logind[1106]: New session 6 of user core. Feb 12 20:26:03.221155 systemd[1]: Started session-6.scope. Feb 12 20:26:03.340819 sshd[2978]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:03.342910 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:48168.service: Deactivated successfully. Feb 12 20:26:03.343572 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 20:26:03.344047 systemd-logind[1106]: Session 6 logged out. Waiting for processes to exit. Feb 12 20:26:03.344589 systemd-logind[1106]: Removed session 6. Feb 12 20:26:03.811611 kubelet[1978]: E0212 20:26:03.811576 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:08.345659 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:33904.service. Feb 12 20:26:08.382566 sshd[3011]: Accepted publickey for core from 10.0.0.1 port 33904 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:08.383438 sshd[3011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:08.386544 systemd-logind[1106]: New session 7 of user core. Feb 12 20:26:08.387279 systemd[1]: Started session-7.scope. Feb 12 20:26:08.495427 sshd[3011]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:08.497596 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:33904.service: Deactivated successfully. Feb 12 20:26:08.498427 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 20:26:08.499306 systemd-logind[1106]: Session 7 logged out. Waiting for processes to exit. Feb 12 20:26:08.500005 systemd-logind[1106]: Removed session 7. Feb 12 20:26:13.499925 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:33916.service. Feb 12 20:26:13.538856 sshd[3048]: Accepted publickey for core from 10.0.0.1 port 33916 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:13.539931 sshd[3048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:13.543065 systemd-logind[1106]: New session 8 of user core. Feb 12 20:26:13.543995 systemd[1]: Started session-8.scope. Feb 12 20:26:13.642079 sshd[3048]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:13.643940 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:33916.service: Deactivated successfully. 
Feb 12 20:26:13.644697 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 20:26:13.645268 systemd-logind[1106]: Session 8 logged out. Waiting for processes to exit. Feb 12 20:26:13.645907 systemd-logind[1106]: Removed session 8. Feb 12 20:26:18.646565 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:40040.service. Feb 12 20:26:18.683127 sshd[3080]: Accepted publickey for core from 10.0.0.1 port 40040 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:18.684104 sshd[3080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:18.687052 systemd-logind[1106]: New session 9 of user core. Feb 12 20:26:18.687869 systemd[1]: Started session-9.scope. Feb 12 20:26:18.788851 sshd[3080]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:18.790754 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:40040.service: Deactivated successfully. Feb 12 20:26:18.791736 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 20:26:18.792266 systemd-logind[1106]: Session 9 logged out. Waiting for processes to exit. Feb 12 20:26:18.792809 systemd-logind[1106]: Removed session 9. Feb 12 20:26:23.793837 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:40042.service. Feb 12 20:26:23.833466 sshd[3115]: Accepted publickey for core from 10.0.0.1 port 40042 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:23.834550 sshd[3115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:23.838049 systemd-logind[1106]: New session 10 of user core. Feb 12 20:26:23.838870 systemd[1]: Started session-10.scope. Feb 12 20:26:23.948504 sshd[3115]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:23.950874 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:40042.service: Deactivated successfully. Feb 12 20:26:23.951370 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 20:26:23.951846 systemd-logind[1106]: Session 10 logged out. Waiting for processes to exit. Feb 12 20:26:23.952613 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:40054.service. Feb 12 20:26:23.953422 systemd-logind[1106]: Removed session 10. Feb 12 20:26:23.991527 sshd[3129]: Accepted publickey for core from 10.0.0.1 port 40054 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:23.992617 sshd[3129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:23.995875 systemd-logind[1106]: New session 11 of user core. Feb 12 20:26:23.996770 systemd[1]: Started session-11.scope. Feb 12 20:26:24.231034 sshd[3129]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:24.234867 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:40058.service. Feb 12 20:26:24.250667 systemd-logind[1106]: Session 11 logged out. Waiting for processes to exit. Feb 12 20:26:24.253641 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:40054.service: Deactivated successfully. Feb 12 20:26:24.254331 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 20:26:24.255969 systemd-logind[1106]: Removed session 11. Feb 12 20:26:24.279637 sshd[3139]: Accepted publickey for core from 10.0.0.1 port 40058 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:24.280620 sshd[3139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:24.283529 systemd-logind[1106]: New session 12 of user core. Feb 12 20:26:24.284180 systemd[1]: Started session-12.scope. 
Feb 12 20:26:24.383588 sshd[3139]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:24.386495 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:40058.service: Deactivated successfully. Feb 12 20:26:24.387109 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 20:26:24.387794 systemd-logind[1106]: Session 12 logged out. Waiting for processes to exit. Feb 12 20:26:24.388520 systemd-logind[1106]: Removed session 12. Feb 12 20:26:29.388981 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:56842.service. Feb 12 20:26:29.427798 sshd[3171]: Accepted publickey for core from 10.0.0.1 port 56842 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:29.429134 sshd[3171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:29.432761 systemd-logind[1106]: New session 13 of user core. Feb 12 20:26:29.433790 systemd[1]: Started session-13.scope. Feb 12 20:26:29.543949 sshd[3171]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:29.546286 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:56842.service: Deactivated successfully. Feb 12 20:26:29.547207 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 20:26:29.547803 systemd-logind[1106]: Session 13 logged out. Waiting for processes to exit. Feb 12 20:26:29.548464 systemd-logind[1106]: Removed session 13. Feb 12 20:26:33.733254 kubelet[1978]: E0212 20:26:33.733209 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:34.548671 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:56650.service. Feb 12 20:26:34.588044 sshd[3203]: Accepted publickey for core from 10.0.0.1 port 56650 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:34.589242 sshd[3203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:34.592501 systemd-logind[1106]: New session 14 of user core. Feb 12 20:26:34.593504 systemd[1]: Started session-14.scope. Feb 12 20:26:34.692811 sshd[3203]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:34.695543 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:56650.service: Deactivated successfully. Feb 12 20:26:34.696076 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 20:26:34.696605 systemd-logind[1106]: Session 14 logged out. Waiting for processes to exit. Feb 12 20:26:34.697577 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:56664.service. Feb 12 20:26:34.698349 systemd-logind[1106]: Removed session 14. Feb 12 20:26:34.735800 sshd[3216]: Accepted publickey for core from 10.0.0.1 port 56664 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:34.736860 sshd[3216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:34.739806 systemd-logind[1106]: New session 15 of user core. Feb 12 20:26:34.740700 systemd[1]: Started session-15.scope. Feb 12 20:26:34.901967 sshd[3216]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:34.904704 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:56664.service: Deactivated successfully. Feb 12 20:26:34.905293 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 20:26:34.905942 systemd-logind[1106]: Session 15 logged out. Waiting for processes to exit. Feb 12 20:26:34.907253 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:56672.service. Feb 12 20:26:34.909242 systemd-logind[1106]: Removed session 15. 
Feb 12 20:26:34.947050 sshd[3227]: Accepted publickey for core from 10.0.0.1 port 56672 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:34.948021 sshd[3227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:34.951248 systemd-logind[1106]: New session 16 of user core. Feb 12 20:26:34.952053 systemd[1]: Started session-16.scope. Feb 12 20:26:36.264836 sshd[3227]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:36.268451 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:56672.service: Deactivated successfully. Feb 12 20:26:36.269187 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 20:26:36.269957 systemd-logind[1106]: Session 16 logged out. Waiting for processes to exit. Feb 12 20:26:36.271570 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:56680.service. Feb 12 20:26:36.272605 systemd-logind[1106]: Removed session 16. Feb 12 20:26:36.310608 sshd[3290]: Accepted publickey for core from 10.0.0.1 port 56680 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:36.311701 sshd[3290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:36.315244 systemd-logind[1106]: New session 17 of user core. Feb 12 20:26:36.316058 systemd[1]: Started session-17.scope. Feb 12 20:26:36.681167 sshd[3290]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:36.684111 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:56680.service: Deactivated successfully. Feb 12 20:26:36.684702 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 20:26:36.685281 systemd-logind[1106]: Session 17 logged out. Waiting for processes to exit. Feb 12 20:26:36.686362 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:56694.service. Feb 12 20:26:36.687244 systemd-logind[1106]: Removed session 17. Feb 12 20:26:36.724101 sshd[3310]: Accepted publickey for core from 10.0.0.1 port 56694 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:36.725224 sshd[3310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:36.728533 systemd-logind[1106]: New session 18 of user core. Feb 12 20:26:36.729416 systemd[1]: Started session-18.scope. Feb 12 20:26:36.830227 sshd[3310]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:36.832548 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:56694.service: Deactivated successfully. Feb 12 20:26:36.833247 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 20:26:36.834140 systemd-logind[1106]: Session 18 logged out. Waiting for processes to exit. Feb 12 20:26:36.834849 systemd-logind[1106]: Removed session 18. Feb 12 20:26:37.732888 kubelet[1978]: E0212 20:26:37.732835 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:40.733888 kubelet[1978]: E0212 20:26:40.733843 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:41.834553 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:56700.service. 
Feb 12 20:26:41.872097 sshd[3349]: Accepted publickey for core from 10.0.0.1 port 56700 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:41.873351 sshd[3349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:41.877309 systemd-logind[1106]: New session 19 of user core. Feb 12 20:26:41.878449 systemd[1]: Started session-19.scope. Feb 12 20:26:41.990068 sshd[3349]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:41.992929 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:56700.service: Deactivated successfully. Feb 12 20:26:41.993734 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 20:26:41.994677 systemd-logind[1106]: Session 19 logged out. Waiting for processes to exit. Feb 12 20:26:41.995571 systemd-logind[1106]: Removed session 19. Feb 12 20:26:46.994176 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:44752.service. Feb 12 20:26:47.031859 sshd[3407]: Accepted publickey for core from 10.0.0.1 port 44752 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:47.032946 sshd[3407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:47.036000 systemd-logind[1106]: New session 20 of user core. Feb 12 20:26:47.036835 systemd[1]: Started session-20.scope. Feb 12 20:26:47.133211 sshd[3407]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:47.135479 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:44752.service: Deactivated successfully. Feb 12 20:26:47.136145 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 20:26:47.136742 systemd-logind[1106]: Session 20 logged out. Waiting for processes to exit. Feb 12 20:26:47.137353 systemd-logind[1106]: Removed session 20. Feb 12 20:26:52.137971 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:44754.service. Feb 12 20:26:52.175077 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 44754 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:52.176159 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:52.179357 systemd-logind[1106]: New session 21 of user core. Feb 12 20:26:52.180125 systemd[1]: Started session-21.scope. Feb 12 20:26:52.277414 sshd[3438]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:52.279749 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:44754.service: Deactivated successfully. Feb 12 20:26:52.280420 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 20:26:52.281122 systemd-logind[1106]: Session 21 logged out. Waiting for processes to exit. Feb 12 20:26:52.281751 systemd-logind[1106]: Removed session 21. Feb 12 20:26:55.733181 kubelet[1978]: E0212 20:26:55.733132 1978 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:57.281534 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:54850.service. Feb 12 20:26:57.318654 sshd[3469]: Accepted publickey for core from 10.0.0.1 port 54850 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:57.319837 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:57.323051 systemd-logind[1106]: New session 22 of user core. Feb 12 20:26:57.323823 systemd[1]: Started session-22.scope. 
Feb 12 20:26:57.427773 sshd[3469]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:57.430159 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:54850.service: Deactivated successfully. Feb 12 20:26:57.430899 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 20:26:57.431389 systemd-logind[1106]: Session 22 logged out. Waiting for processes to exit. Feb 12 20:26:57.432057 systemd-logind[1106]: Removed session 22.