Feb 9 18:50:05.810434 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 18:50:05.810463 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:50:05.810471 kernel: BIOS-provided physical RAM map: Feb 9 18:50:05.810477 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 9 18:50:05.810482 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 9 18:50:05.810488 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 9 18:50:05.810494 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Feb 9 18:50:05.810500 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Feb 9 18:50:05.810506 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 9 18:50:05.810512 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 9 18:50:05.810517 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 9 18:50:05.810523 kernel: NX (Execute Disable) protection: active Feb 9 18:50:05.810528 kernel: SMBIOS 2.8 present. Feb 9 18:50:05.810534 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 9 18:50:05.810541 kernel: Hypervisor detected: KVM Feb 9 18:50:05.810547 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 18:50:05.810553 kernel: kvm-clock: cpu 0, msr 89faa001, primary cpu clock Feb 9 18:50:05.810559 kernel: kvm-clock: using sched offset of 2173563532 cycles Feb 9 18:50:05.810565 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 18:50:05.810571 kernel: tsc: Detected 2794.750 MHz processor Feb 9 18:50:05.810578 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 18:50:05.810584 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 18:50:05.810590 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Feb 9 18:50:05.810597 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 18:50:05.810603 kernel: Using GB pages for direct mapping Feb 9 18:50:05.810609 kernel: ACPI: Early table checksum verification disabled Feb 9 18:50:05.810615 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Feb 9 18:50:05.810621 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:50:05.810627 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:50:05.810633 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:50:05.810639 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 9 18:50:05.810645 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:50:05.810652 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:50:05.810658 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:50:05.810664 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Feb 9 18:50:05.810670 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cfe0040-0x9cfe1a78] Feb 9 18:50:05.810676 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 9 18:50:05.810682 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Feb 9 18:50:05.810688 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Feb 9 18:50:05.810694 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Feb 9 18:50:05.810703 kernel: No NUMA configuration found Feb 9 18:50:05.810710 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Feb 9 18:50:05.810716 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Feb 9 18:50:05.810723 kernel: Zone ranges: Feb 9 18:50:05.810729 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 18:50:05.810735 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Feb 9 18:50:05.810743 kernel: Normal empty Feb 9 18:50:05.810749 kernel: Movable zone start for each node Feb 9 18:50:05.810756 kernel: Early memory node ranges Feb 9 18:50:05.810762 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 9 18:50:05.810768 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Feb 9 18:50:05.810775 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Feb 9 18:50:05.810781 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 18:50:05.810787 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 9 18:50:05.810794 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Feb 9 18:50:05.810802 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 9 18:50:05.810808 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 18:50:05.810814 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 9 18:50:05.810821 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 9 18:50:05.810827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 18:50:05.810834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 18:50:05.810840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 18:50:05.810846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 18:50:05.810853 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 18:50:05.810860 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 18:50:05.810866 kernel: TSC deadline timer available Feb 9 18:50:05.810873 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 9 18:50:05.810880 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 9 18:50:05.810886 kernel: kvm-guest: setup PV sched yield Feb 9 18:50:05.810892 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Feb 9 18:50:05.810899 kernel: Booting paravirtualized kernel on KVM Feb 9 18:50:05.810905 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 18:50:05.810912 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 9 18:50:05.810920 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 9 18:50:05.810926 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 9 18:50:05.810932 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 9 18:50:05.810944 kernel: kvm-guest: setup async PF for cpu 0 Feb 9 18:50:05.810950 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Feb 9 18:50:05.810957 kernel: kvm-guest: PV spinlocks enabled Feb 9 18:50:05.810963 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 
18:50:05.810969 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Feb 9 18:50:05.810977 kernel: Policy zone: DMA32 Feb 9 18:50:05.810984 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:50:05.810992 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 18:50:05.810999 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 18:50:05.811005 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 18:50:05.811012 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 18:50:05.811019 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved) Feb 9 18:50:05.811025 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 18:50:05.811032 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 18:50:05.811038 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 18:50:05.811046 kernel: rcu: Hierarchical RCU implementation. Feb 9 18:50:05.811053 kernel: rcu: RCU event tracing is enabled. Feb 9 18:50:05.811059 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 18:50:05.811066 kernel: Rude variant of Tasks RCU enabled. Feb 9 18:50:05.811072 kernel: Tracing variant of Tasks RCU enabled. Feb 9 18:50:05.811079 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 18:50:05.811085 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 18:50:05.811092 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 9 18:50:05.811098 kernel: random: crng init done Feb 9 18:50:05.811105 kernel: Console: colour VGA+ 80x25 Feb 9 18:50:05.811112 kernel: printk: console [ttyS0] enabled Feb 9 18:50:05.811118 kernel: ACPI: Core revision 20210730 Feb 9 18:50:05.811125 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 9 18:50:05.811131 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 18:50:05.811137 kernel: x2apic enabled Feb 9 18:50:05.811144 kernel: Switched APIC routing to physical x2apic. Feb 9 18:50:05.811150 kernel: kvm-guest: setup PV IPIs Feb 9 18:50:05.811157 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 18:50:05.811164 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 9 18:50:05.811171 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 9 18:50:05.811177 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 9 18:50:05.811183 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 9 18:50:05.811190 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 9 18:50:05.811196 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 18:50:05.811203 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 18:50:05.811209 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 18:50:05.811216 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 18:50:05.811228 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 9 18:50:05.811235 kernel: RETBleed: Mitigation: untrained return thunk Feb 9 18:50:05.811241 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 18:50:05.811249 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 18:50:05.811256 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 18:50:05.811263 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 18:50:05.811270 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 18:50:05.811277 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 18:50:05.811284 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 9 18:50:05.811291 kernel: Freeing SMP alternatives memory: 32K Feb 9 18:50:05.811298 kernel: pid_max: default: 32768 minimum: 301 Feb 9 18:50:05.811305 kernel: LSM: Security Framework initializing Feb 9 18:50:05.811312 kernel: SELinux: Initializing. Feb 9 18:50:05.811318 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:50:05.811325 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:50:05.811332 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 9 18:50:05.811340 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 9 18:50:05.811347 kernel: ... version: 0 Feb 9 18:50:05.811354 kernel: ... bit width: 48 Feb 9 18:50:05.811361 kernel: ... generic registers: 6 Feb 9 18:50:05.811368 kernel: ... value mask: 0000ffffffffffff Feb 9 18:50:05.811374 kernel: ... max period: 00007fffffffffff Feb 9 18:50:05.811381 kernel: ... fixed-purpose events: 0 Feb 9 18:50:05.811388 kernel: ... event mask: 000000000000003f Feb 9 18:50:05.811394 kernel: signal: max sigframe size: 1776 Feb 9 18:50:05.811402 kernel: rcu: Hierarchical SRCU implementation. Feb 9 18:50:05.811409 kernel: smp: Bringing up secondary CPUs ... Feb 9 18:50:05.811415 kernel: x86: Booting SMP configuration: Feb 9 18:50:05.811422 kernel: .... 
node #0, CPUs: #1 Feb 9 18:50:05.811429 kernel: kvm-clock: cpu 1, msr 89faa041, secondary cpu clock Feb 9 18:50:05.811444 kernel: kvm-guest: setup async PF for cpu 1 Feb 9 18:50:05.811451 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Feb 9 18:50:05.811458 kernel: #2 Feb 9 18:50:05.811476 kernel: kvm-clock: cpu 2, msr 89faa081, secondary cpu clock Feb 9 18:50:05.811483 kernel: kvm-guest: setup async PF for cpu 2 Feb 9 18:50:05.811492 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Feb 9 18:50:05.811499 kernel: #3 Feb 9 18:50:05.811506 kernel: kvm-clock: cpu 3, msr 89faa0c1, secondary cpu clock Feb 9 18:50:05.811512 kernel: kvm-guest: setup async PF for cpu 3 Feb 9 18:50:05.811519 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Feb 9 18:50:05.811526 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 18:50:05.811532 kernel: smpboot: Max logical packages: 1 Feb 9 18:50:05.811539 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 9 18:50:05.811546 kernel: devtmpfs: initialized Feb 9 18:50:05.811554 kernel: x86/mm: Memory block size: 128MB Feb 9 18:50:05.811561 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 18:50:05.811568 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 18:50:05.811574 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 18:50:05.811581 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 18:50:05.811588 kernel: audit: initializing netlink subsys (disabled) Feb 9 18:50:05.811595 kernel: audit: type=2000 audit(1707504605.449:1): state=initialized audit_enabled=0 res=1 Feb 9 18:50:05.811601 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 18:50:05.811608 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 18:50:05.811616 kernel: cpuidle: using governor menu Feb 9 18:50:05.811623 kernel: ACPI: bus type PCI registered Feb 9 18:50:05.811629 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 18:50:05.811636 kernel: dca service started, version 1.12.1 Feb 9 18:50:05.811643 kernel: PCI: Using configuration type 1 for base access Feb 9 18:50:05.811650 kernel: PCI: Using configuration type 1 for extended access Feb 9 18:50:05.811657 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 18:50:05.811664 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 18:50:05.811671 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 18:50:05.811679 kernel: ACPI: Added _OSI(Module Device) Feb 9 18:50:05.811686 kernel: ACPI: Added _OSI(Processor Device) Feb 9 18:50:05.811693 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 18:50:05.811699 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 18:50:05.811706 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 18:50:05.811713 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 18:50:05.811719 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 18:50:05.811726 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 18:50:05.811733 kernel: ACPI: Interpreter enabled Feb 9 18:50:05.811741 kernel: ACPI: PM: (supports S0 S3 S5) Feb 9 18:50:05.811747 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 18:50:05.811754 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 18:50:05.811761 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 9 18:50:05.811768 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 18:50:05.811890 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 18:50:05.811906 kernel: acpiphp: Slot [3] registered Feb 9 18:50:05.811913 kernel: acpiphp: Slot [4] registered Feb 9 18:50:05.811922 kernel: acpiphp: Slot [5] registered Feb 9 18:50:05.811929 kernel: acpiphp: Slot [6] registered Feb 9 18:50:05.811935 kernel: acpiphp: Slot [7] registered Feb 9 18:50:05.811950 kernel: acpiphp: Slot [8] registered Feb 9 18:50:05.811956 kernel: acpiphp: Slot [9] registered Feb 9 18:50:05.811963 kernel: acpiphp: Slot [10] registered Feb 9 18:50:05.811970 kernel: acpiphp: Slot [11] registered Feb 9 18:50:05.811977 kernel: acpiphp: Slot [12] registered Feb 9 18:50:05.811983 kernel: acpiphp: Slot [13] registered Feb 9 18:50:05.811990 kernel: acpiphp: Slot [14] registered Feb 9 18:50:05.811998 kernel: acpiphp: Slot [15] registered Feb 9 18:50:05.812004 kernel: acpiphp: Slot [16] registered Feb 9 18:50:05.812011 kernel: acpiphp: Slot [17] registered Feb 9 18:50:05.812017 kernel: acpiphp: Slot [18] registered Feb 9 18:50:05.812024 kernel: acpiphp: Slot [19] registered Feb 9 18:50:05.812031 kernel: acpiphp: Slot [20] registered Feb 9 18:50:05.812037 kernel: acpiphp: Slot [21] registered Feb 9 18:50:05.812044 kernel: acpiphp: Slot [22] registered Feb 9 18:50:05.812051 kernel: acpiphp: Slot [23] registered Feb 9 18:50:05.812058 kernel: acpiphp: Slot [24] registered Feb 9 18:50:05.812065 kernel: acpiphp: Slot [25] registered Feb 9 18:50:05.812072 kernel: acpiphp: Slot [26] registered Feb 9 18:50:05.812078 kernel: acpiphp: Slot [27] registered Feb 9 18:50:05.812085 kernel: acpiphp: Slot [28] registered Feb 9 18:50:05.812092 kernel: acpiphp: Slot [29] registered Feb 9 18:50:05.812098 kernel: acpiphp: Slot [30] registered Feb 9 18:50:05.812105 kernel: acpiphp: Slot [31] registered Feb 9 18:50:05.812112 kernel: PCI host bridge to bus 0000:00 Feb 9 18:50:05.812194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 18:50:05.812262 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 18:50:05.812324 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 18:50:05.812384 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Feb 9 18:50:05.812457 kernel: pci_bus 0000:00: 
root bus resource [mem 0x100000000-0x17fffffff window] Feb 9 18:50:05.812519 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 18:50:05.812627 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 18:50:05.812708 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 18:50:05.812788 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 9 18:50:05.812860 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 9 18:50:05.812929 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 18:50:05.813008 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 18:50:05.813077 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 18:50:05.813145 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 18:50:05.813224 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 9 18:50:05.813294 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 9 18:50:05.813364 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 9 18:50:05.813458 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 9 18:50:05.813530 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 9 18:50:05.813598 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 9 18:50:05.813670 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 9 18:50:05.813739 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 18:50:05.813815 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 18:50:05.813885 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Feb 9 18:50:05.813967 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 9 18:50:05.814037 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 9 18:50:05.814112 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 9 18:50:05.814183 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 9 18:50:05.814254 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 9 18:50:05.814321 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 9 18:50:05.814397 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 9 18:50:05.814480 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 9 18:50:05.814550 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 9 18:50:05.814619 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 9 18:50:05.814689 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 9 18:50:05.814698 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 18:50:05.814705 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 18:50:05.814712 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 18:50:05.814719 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 18:50:05.814726 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 18:50:05.814733 kernel: iommu: Default domain type: Translated Feb 9 18:50:05.814739 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 18:50:05.814807 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 9 18:50:05.814880 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 18:50:05.814960 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 9 18:50:05.814969 kernel: 
vgaarb: loaded Feb 9 18:50:05.814976 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 18:50:05.814984 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 18:50:05.814990 kernel: PTP clock support registered Feb 9 18:50:05.814997 kernel: PCI: Using ACPI for IRQ routing Feb 9 18:50:05.815004 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 18:50:05.815012 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 9 18:50:05.815020 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Feb 9 18:50:05.815026 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 9 18:50:05.815033 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 9 18:50:05.815040 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 18:50:05.815046 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 18:50:05.815053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 18:50:05.815060 kernel: pnp: PnP ACPI init Feb 9 18:50:05.815137 kernel: pnp 00:02: [dma 2] Feb 9 18:50:05.815149 kernel: pnp: PnP ACPI: found 6 devices Feb 9 18:50:05.815156 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 18:50:05.815163 kernel: NET: Registered PF_INET protocol family Feb 9 18:50:05.815170 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 18:50:05.815177 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 18:50:05.815184 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 18:50:05.815191 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 18:50:05.815198 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 18:50:05.815206 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 18:50:05.815213 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 18:50:05.815220 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 18:50:05.815227 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 18:50:05.815233 kernel: NET: Registered PF_XDP protocol family Feb 9 18:50:05.815295 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 18:50:05.815357 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 18:50:05.815417 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 18:50:05.815491 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 9 18:50:05.815556 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 9 18:50:05.815627 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 9 18:50:05.815699 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 18:50:05.815768 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 18:50:05.815777 kernel: PCI: CLS 0 bytes, default 64 Feb 9 18:50:05.815784 kernel: Initialise system trusted keyrings Feb 9 18:50:05.815791 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 18:50:05.815798 kernel: Key type asymmetric registered Feb 9 18:50:05.815806 kernel: Asymmetric key parser 'x509' registered Feb 9 18:50:05.815813 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 18:50:05.815820 kernel: io scheduler mq-deadline registered Feb 9 18:50:05.815827 kernel: io scheduler kyber registered Feb 9 18:50:05.815834 kernel: io scheduler bfq registered Feb 9 18:50:05.815841 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 18:50:05.815848 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 9 18:50:05.815857 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 9 18:50:05.815880 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 9 18:50:05.815896 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 18:50:05.815903 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 18:50:05.815910 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 18:50:05.815917 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 18:50:05.815923 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 18:50:05.816010 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 9 18:50:05.816021 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 18:50:05.816104 kernel: rtc_cmos 00:05: registered as rtc0 Feb 9 18:50:05.816189 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T18:50:05 UTC (1707504605) Feb 9 18:50:05.816265 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 9 18:50:05.816275 kernel: NET: Registered PF_INET6 protocol family Feb 9 18:50:05.816289 kernel: Segment Routing with IPv6 Feb 9 18:50:05.816299 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 18:50:05.816306 kernel: NET: Registered PF_PACKET protocol family Feb 9 18:50:05.816313 kernel: Key type dns_resolver registered Feb 9 18:50:05.816320 kernel: IPI shorthand broadcast: enabled Feb 9 18:50:05.816327 kernel: sched_clock: Marking stable (365151740, 71928416)->(460502943, -23422787) Feb 9 18:50:05.816347 kernel: registered taskstats version 1 Feb 9 18:50:05.816354 kernel: Loading compiled-in X.509 certificates Feb 9 18:50:05.816361 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 18:50:05.816368 kernel: Key type .fscrypt registered Feb 9 18:50:05.816374 kernel: Key type fscrypt-provisioning registered Feb 9 18:50:05.816381 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 18:50:05.816388 kernel: ima: Allocated hash algorithm: sha1 Feb 9 18:50:05.816404 kernel: ima: No architecture policies found Feb 9 18:50:05.816412 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 18:50:05.816419 kernel: Write protecting the kernel read-only data: 28672k Feb 9 18:50:05.816426 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 18:50:05.816433 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 18:50:05.816458 kernel: Run /init as init process Feb 9 18:50:05.816466 kernel: with arguments: Feb 9 18:50:05.816472 kernel: /init Feb 9 18:50:05.816479 kernel: with environment: Feb 9 18:50:05.816495 kernel: HOME=/ Feb 9 18:50:05.816502 kernel: TERM=linux Feb 9 18:50:05.816511 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 18:50:05.816520 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:50:05.816539 systemd[1]: Detected virtualization kvm. Feb 9 18:50:05.816547 systemd[1]: Detected architecture x86-64. Feb 9 18:50:05.816555 systemd[1]: Running in initrd. Feb 9 18:50:05.816562 systemd[1]: No hostname configured, using default hostname. 
Feb 9 18:50:05.816571 systemd[1]: Hostname set to . Feb 9 18:50:05.816579 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:50:05.816586 systemd[1]: Queued start job for default target initrd.target. Feb 9 18:50:05.816594 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:50:05.816601 systemd[1]: Reached target cryptsetup.target. Feb 9 18:50:05.816608 systemd[1]: Reached target paths.target. Feb 9 18:50:05.816616 systemd[1]: Reached target slices.target. Feb 9 18:50:05.816623 systemd[1]: Reached target swap.target. Feb 9 18:50:05.816640 systemd[1]: Reached target timers.target. Feb 9 18:50:05.816650 systemd[1]: Listening on iscsid.socket. Feb 9 18:50:05.816658 systemd[1]: Listening on iscsiuio.socket. Feb 9 18:50:05.816665 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:50:05.816673 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:50:05.816680 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:50:05.816688 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:50:05.816695 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:50:05.816704 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:50:05.816712 systemd[1]: Reached target sockets.target. Feb 9 18:50:05.816719 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:50:05.816727 systemd[1]: Finished network-cleanup.service. Feb 9 18:50:05.816744 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 18:50:05.816752 systemd[1]: Starting systemd-journald.service... Feb 9 18:50:05.816759 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:50:05.816768 systemd[1]: Starting systemd-resolved.service... Feb 9 18:50:05.816776 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 18:50:05.816784 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:50:05.816792 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 18:50:05.816800 kernel: audit: type=1130 audit(1707504605.811:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.816807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:50:05.816821 systemd-journald[197]: Journal started Feb 9 18:50:05.816870 systemd-journald[197]: Runtime Journal (/run/log/journal/2106fd2efaaf477eb470bfadc4ae44a1) is 6.0M, max 48.5M, 42.5M free. Feb 9 18:50:05.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.818623 systemd-modules-load[198]: Inserted module 'overlay' Feb 9 18:50:05.825090 systemd-resolved[199]: Positive Trust Anchors: Feb 9 18:50:05.842086 systemd[1]: Started systemd-journald.service. Feb 9 18:50:05.842114 kernel: audit: type=1130 audit(1707504605.838:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.847002 kernel: audit: type=1130 audit(1707504605.841:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:50:05.847049 kernel: audit: type=1130 audit(1707504605.844:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.825100 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:50:05.825126 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:50:05.854818 kernel: audit: type=1130 audit(1707504605.846:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.827188 systemd-resolved[199]: Defaulting to hostname 'linux'. Feb 9 18:50:05.840828 systemd[1]: Started systemd-resolved.service. Feb 9 18:50:05.842176 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 18:50:05.845121 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:50:05.847491 systemd[1]: Reached target nss-lookup.target. Feb 9 18:50:05.848340 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 18:50:05.857603 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 18:50:05.862413 systemd-modules-load[198]: Inserted module 'br_netfilter' Feb 9 18:50:05.862758 kernel: Bridge firewalling registered Feb 9 18:50:05.867780 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 18:50:05.870916 kernel: audit: type=1130 audit(1707504605.867:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.870957 systemd[1]: Starting dracut-cmdline.service... 
Feb 9 18:50:05.878467 kernel: SCSI subsystem initialized Feb 9 18:50:05.880851 dracut-cmdline[216]: dracut-dracut-053 Feb 9 18:50:05.882969 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:50:05.889459 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 18:50:05.889482 kernel: device-mapper: uevent: version 1.0.3 Feb 9 18:50:05.889492 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 18:50:05.893288 systemd-modules-load[198]: Inserted module 'dm_multipath' Feb 9 18:50:05.894146 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:50:05.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.896199 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:50:05.897651 kernel: audit: type=1130 audit(1707504605.894:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.905196 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:50:05.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.908450 kernel: audit: type=1130 audit(1707504605.905:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:05.945466 kernel: Loading iSCSI transport class v2.0-870. Feb 9 18:50:05.955470 kernel: iscsi: registered transport (tcp) Feb 9 18:50:05.975640 kernel: iscsi: registered transport (qla4xxx) Feb 9 18:50:05.975694 kernel: QLogic iSCSI HBA Driver Feb 9 18:50:05.999114 systemd[1]: Finished dracut-cmdline.service. Feb 9 18:50:05.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:06.000263 systemd[1]: Starting dracut-pre-udev.service... Feb 9 18:50:06.003845 kernel: audit: type=1130 audit(1707504605.999:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:50:06.048487 kernel: raid6: avx2x4 gen() 29664 MB/s Feb 9 18:50:06.065463 kernel: raid6: avx2x4 xor() 7386 MB/s Feb 9 18:50:06.082462 kernel: raid6: avx2x2 gen() 32302 MB/s Feb 9 18:50:06.099462 kernel: raid6: avx2x2 xor() 19262 MB/s Feb 9 18:50:06.116466 kernel: raid6: avx2x1 gen() 26428 MB/s Feb 9 18:50:06.133466 kernel: raid6: avx2x1 xor() 15295 MB/s Feb 9 18:50:06.150457 kernel: raid6: sse2x4 gen() 14622 MB/s Feb 9 18:50:06.167463 kernel: raid6: sse2x4 xor() 7102 MB/s Feb 9 18:50:06.184463 kernel: raid6: sse2x2 gen() 16256 MB/s Feb 9 18:50:06.201465 kernel: raid6: sse2x2 xor() 9818 MB/s Feb 9 18:50:06.218464 kernel: raid6: sse2x1 gen() 12279 MB/s Feb 9 18:50:06.235897 kernel: raid6: sse2x1 xor() 7787 MB/s Feb 9 18:50:06.235919 kernel: raid6: using algorithm avx2x2 gen() 32302 MB/s Feb 9 18:50:06.235944 kernel: raid6: .... xor() 19262 MB/s, rmw enabled Feb 9 18:50:06.235956 kernel: raid6: using avx2x2 recovery algorithm Feb 9 18:50:06.263469 kernel: xor: automatically using best checksumming function avx Feb 9 18:50:06.350478 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 18:50:06.357603 systemd[1]: Finished dracut-pre-udev.service. Feb 9 18:50:06.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:06.358000 audit: BPF prog-id=7 op=LOAD Feb 9 18:50:06.358000 audit: BPF prog-id=8 op=LOAD Feb 9 18:50:06.359296 systemd[1]: Starting systemd-udevd.service... Feb 9 18:50:06.370290 systemd-udevd[402]: Using default interface naming scheme 'v252'. Feb 9 18:50:06.374007 systemd[1]: Started systemd-udevd.service. Feb 9 18:50:06.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:06.375679 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 18:50:06.384873 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Feb 9 18:50:06.410002 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 18:50:06.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:06.410956 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:50:06.442209 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:50:06.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:06.476460 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 18:50:06.484461 kernel: libata version 3.00 loaded. Feb 9 18:50:06.484486 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 18:50:06.501613 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 18:50:06.501729 kernel: scsi host0: ata_piix Feb 9 18:50:06.501849 kernel: scsi host1: ata_piix Feb 9 18:50:06.501981 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 18:50:06.501997 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 18:50:06.502010 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 18:50:06.502026 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 9 18:50:06.502037 kernel: GPT:9289727 != 19775487 Feb 9 18:50:06.502048 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 18:50:06.502060 kernel: GPT:9289727 != 19775487 Feb 9 18:50:06.502071 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 18:50:06.502082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:50:06.502094 kernel: AES CTR mode by8 optimization enabled Feb 9 18:50:06.649455 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 18:50:06.649511 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 18:50:06.666542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:50:06.670446 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (443) Feb 9 18:50:06.667709 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:50:06.675631 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:50:06.681461 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 18:50:06.681592 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 18:50:06.681575 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:50:06.687487 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:50:06.688790 systemd[1]: Starting disk-uuid.service... Feb 9 18:50:06.700477 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 18:50:06.843227 disk-uuid[528]: Primary Header is updated. Feb 9 18:50:06.843227 disk-uuid[528]: Secondary Entries is updated. Feb 9 18:50:06.843227 disk-uuid[528]: Secondary Header is updated. Feb 9 18:50:06.846019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:50:07.851466 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:50:07.851540 disk-uuid[531]: The operation has completed successfully. Feb 9 18:50:07.875327 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:50:07.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:07.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:07.875416 systemd[1]: Finished disk-uuid.service. Feb 9 18:50:07.882484 systemd[1]: Starting verity-setup.service... Feb 9 18:50:07.894463 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 18:50:07.909397 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:50:07.911588 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:50:07.913038 systemd[1]: Finished verity-setup.service. Feb 9 18:50:07.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:07.969457 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:50:07.969784 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:50:07.970199 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:50:07.970769 systemd[1]: Starting ignition-setup.service... Feb 9 18:50:07.973018 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 18:50:07.981767 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:50:07.981808 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:50:07.981821 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:50:07.988129 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:50:07.994924 systemd[1]: Finished ignition-setup.service. Feb 9 18:50:07.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:07.996085 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:50:08.028415 ignition[630]: Ignition 2.14.0 Feb 9 18:50:08.028424 ignition[630]: Stage: fetch-offline Feb 9 18:50:08.028487 ignition[630]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:50:08.028495 ignition[630]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:50:08.028582 ignition[630]: parsed url from cmdline: "" Feb 9 18:50:08.028584 ignition[630]: no config URL provided Feb 9 18:50:08.028588 ignition[630]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:50:08.031936 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:50:08.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.028594 ignition[630]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:50:08.028608 ignition[630]: op(1): [started] loading QEMU firmware config module Feb 9 18:50:08.034000 audit: BPF prog-id=9 op=LOAD Feb 9 18:50:08.035300 systemd[1]: Starting systemd-networkd.service... Feb 9 18:50:08.028612 ignition[630]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 18:50:08.033648 ignition[630]: op(1): [finished] loading QEMU firmware config module Feb 9 18:50:08.069783 ignition[630]: parsing config with SHA512: f740771ae0d7e2c03126999bb42403cbdaca105edd26719b149422248d4dbeb311f4e6265e892fd3a86bfc049e5856bdaea892161bf9272f2c1b66f932088b9a Feb 9 18:50:08.085829 systemd-networkd[709]: lo: Link UP Feb 9 18:50:08.085838 systemd-networkd[709]: lo: Gained carrier Feb 9 18:50:08.086208 systemd-networkd[709]: Enumeration completed Feb 9 18:50:08.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.086298 systemd[1]: Started systemd-networkd.service. Feb 9 18:50:08.086388 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:50:08.087259 systemd-networkd[709]: eth0: Link UP Feb 9 18:50:08.087261 systemd-networkd[709]: eth0: Gained carrier Feb 9 18:50:08.088419 systemd[1]: Reached target network.target. Feb 9 18:50:08.092290 systemd[1]: Starting iscsiuio.service... Feb 9 18:50:08.096491 systemd[1]: Started iscsiuio.service. Feb 9 18:50:08.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.098292 systemd[1]: Starting iscsid.service... 
Feb 9 18:50:08.100493 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:50:08.101748 iscsid[716]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:50:08.101748 iscsid[716]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 18:50:08.101748 iscsid[716]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:50:08.101748 iscsid[716]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:50:08.101748 iscsid[716]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:50:08.101748 iscsid[716]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:50:08.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.105303 unknown[630]: fetched base config from "system" Feb 9 18:50:08.107800 ignition[630]: fetch-offline: fetch-offline passed Feb 9 18:50:08.105310 unknown[630]: fetched user config from "qemu" Feb 9 18:50:08.107935 ignition[630]: Ignition finished successfully Feb 9 18:50:08.105530 systemd[1]: Started iscsid.service. Feb 9 18:50:08.107658 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:50:08.109536 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:50:08.110992 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 18:50:08.111660 systemd[1]: Starting ignition-kargs.service... Feb 9 18:50:08.117869 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:50:08.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.121763 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:50:08.125165 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:50:08.125369 systemd[1]: Reached target remote-fs.target. Feb 9 18:50:08.128395 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:50:08.131321 ignition[718]: Ignition 2.14.0 Feb 9 18:50:08.131333 ignition[718]: Stage: kargs Feb 9 18:50:08.131453 ignition[718]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:50:08.131464 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:50:08.134385 ignition[718]: kargs: kargs passed Feb 9 18:50:08.134422 ignition[718]: Ignition finished successfully Feb 9 18:50:08.135369 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:50:08.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:50:08.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.136535 systemd[1]: Finished ignition-kargs.service. Feb 9 18:50:08.138492 systemd[1]: Starting ignition-disks.service... Feb 9 18:50:08.144696 ignition[736]: Ignition 2.14.0 Feb 9 18:50:08.144706 ignition[736]: Stage: disks Feb 9 18:50:08.144797 ignition[736]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:50:08.146694 systemd[1]: Finished ignition-disks.service. Feb 9 18:50:08.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.144804 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:50:08.148082 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:50:08.145857 ignition[736]: disks: disks passed Feb 9 18:50:08.149503 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:50:08.145901 ignition[736]: Ignition finished successfully Feb 9 18:50:08.150205 systemd[1]: Reached target local-fs.target. Feb 9 18:50:08.150843 systemd[1]: Reached target sysinit.target. Feb 9 18:50:08.151150 systemd[1]: Reached target basic.target. Feb 9 18:50:08.152005 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:50:08.161209 systemd-fsck[744]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 18:50:08.165669 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:50:08.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.167833 systemd[1]: Mounting sysroot.mount... Feb 9 18:50:08.174461 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:50:08.174582 systemd[1]: Mounted sysroot.mount. Feb 9 18:50:08.174953 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:50:08.176523 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:50:08.177126 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 18:50:08.177164 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:50:08.177188 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:50:08.183548 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:50:08.184694 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:50:08.189813 initrd-setup-root[754]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:50:08.192373 initrd-setup-root[762]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:50:08.195990 initrd-setup-root[770]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:50:08.199074 initrd-setup-root[778]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:50:08.222753 systemd[1]: Finished initrd-setup-root.service. Feb 9 18:50:08.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.224191 systemd[1]: Starting ignition-mount.service... Feb 9 18:50:08.225359 systemd[1]: Starting sysroot-boot.service... 
Feb 9 18:50:08.228463 bash[795]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 18:50:08.234972 ignition[796]: INFO : Ignition 2.14.0 Feb 9 18:50:08.234972 ignition[796]: INFO : Stage: mount Feb 9 18:50:08.236699 ignition[796]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:50:08.236699 ignition[796]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:50:08.236699 ignition[796]: INFO : mount: mount passed Feb 9 18:50:08.236699 ignition[796]: INFO : Ignition finished successfully Feb 9 18:50:08.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.236824 systemd[1]: Finished ignition-mount.service. Feb 9 18:50:08.240822 systemd[1]: Finished sysroot-boot.service. Feb 9 18:50:08.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:08.919413 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:50:08.924471 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805) Feb 9 18:50:08.926621 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:50:08.926646 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:50:08.926658 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:50:08.929333 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:50:08.930278 systemd[1]: Starting ignition-files.service... Feb 9 18:50:08.943011 ignition[825]: INFO : Ignition 2.14.0 Feb 9 18:50:08.943011 ignition[825]: INFO : Stage: files Feb 9 18:50:08.944255 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:50:08.944255 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:50:08.945959 ignition[825]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:50:08.945959 ignition[825]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:50:08.945959 ignition[825]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:50:08.948598 ignition[825]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:50:08.948598 ignition[825]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:50:08.948598 ignition[825]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:50:08.948598 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 18:50:08.948598 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 18:50:08.948071 unknown[825]: wrote ssh authorized keys file for user: core Feb 9 18:50:09.299736 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:50:09.473302 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 18:50:09.475766 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 18:50:09.475766 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 18:50:09.475766 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 18:50:09.763647 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:50:09.844988 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 18:50:09.847084 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 18:50:09.847084 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 18:50:09.847084 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 18:50:09.865613 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 18:50:09.937807 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 18:50:09.941703 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:50:09.941703 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 18:50:10.005697 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 18:50:10.071865 systemd-networkd[709]: eth0: Gained IPv6LL Feb 9 18:50:10.192499 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 18:50:10.195059 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:50:10.195059 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:50:10.195059 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 18:50:10.240176 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 18:50:10.704339 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 18:50:10.706519 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:50:10.706519 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:50:10.706519 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 18:50:10.757396 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 18:50:10.924486 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 18:50:10.924486 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:50:10.924486 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:50:10.928825 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:50:10.928825 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:50:10.928825 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:50:10.932310 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:50:10.932310 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:50:10.932310 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:50:10.935788 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:50:10.935788 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:50:10.938136 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:50:10.939383 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:50:10.940743 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(f): [started] processing unit "prepare-critools.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(f): [finished] processing unit "prepare-critools.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(11): [started] processing unit "prepare-helm.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(11): op(12): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(11): op(12): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(11): [finished] processing unit "prepare-helm.service" Feb 9 
18:50:10.941937 ignition[825]: INFO : files: op(13): [started] processing unit "coreos-metadata.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(13): op(14): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(13): op(14): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(13): [finished] processing unit "coreos-metadata.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(15): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(15): op(16): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(15): op(16): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(15): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(17): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:50:10.941937 ignition[825]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:50:10.964152 ignition[825]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:50:10.964152 ignition[825]: INFO : files: op(19): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 18:50:10.964152 ignition[825]: INFO : files: op(19): op(1a): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:50:10.976756 ignition[825]: INFO : files: op(19): op(1a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:50:10.977862 ignition[825]: INFO : files: op(19): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 18:50:10.977862 ignition[825]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:50:10.977862 ignition[825]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:50:10.977862 ignition[825]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:50:10.977862 ignition[825]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:50:10.977862 ignition[825]: INFO : files: files passed Feb 9 18:50:10.977862 ignition[825]: INFO : Ignition finished successfully Feb 9 18:50:10.984908 systemd[1]: Finished ignition-files.service. Feb 9 18:50:10.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:10.986059 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:50:10.991504 kernel: kauditd_printk_skb: 24 callbacks suppressed Feb 9 18:50:10.991527 kernel: audit: type=1130 audit(1707504610.984:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:50:10.989179 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:50:10.989859 systemd[1]: Starting ignition-quench.service... Feb 9 18:50:10.996509 kernel: audit: type=1130 audit(1707504610.992:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:10.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:10.992784 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:50:10.997716 initrd-setup-root-after-ignition[849]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 18:50:10.993235 systemd[1]: Reached target ignition-complete.target. Feb 9 18:50:10.999645 initrd-setup-root-after-ignition[852]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:50:10.997482 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:50:11.002765 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:50:11.002877 systemd[1]: Finished ignition-quench.service. Feb 9 18:50:11.009384 kernel: audit: type=1130 audit(1707504611.003:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.009410 kernel: audit: type=1131 audit(1707504611.003:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.011191 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:50:11.011277 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:50:11.017864 kernel: audit: type=1130 audit(1707504611.012:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.017878 kernel: audit: type=1131 audit(1707504611.012:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.012541 systemd[1]: Reached target initrd-fs.target. Feb 9 18:50:11.017860 systemd[1]: Reached target initrd.target. 
Feb 9 18:50:11.018873 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:50:11.019561 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:50:11.028802 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:50:11.032425 kernel: audit: type=1130 audit(1707504611.029:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.030182 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:50:11.037682 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:50:11.038822 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:50:11.039517 systemd[1]: Stopped target timers.target. Feb 9 18:50:11.040557 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:50:11.044514 kernel: audit: type=1131 audit(1707504611.041:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.040642 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:50:11.041652 systemd[1]: Stopped target initrd.target. Feb 9 18:50:11.044600 systemd[1]: Stopped target basic.target. Feb 9 18:50:11.045650 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:50:11.046692 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:50:11.047731 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:50:11.048874 systemd[1]: Stopped target remote-fs.target. Feb 9 18:50:11.049951 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:50:11.051076 systemd[1]: Stopped target sysinit.target. Feb 9 18:50:11.052097 systemd[1]: Stopped target local-fs.target. Feb 9 18:50:11.053135 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:50:11.054164 systemd[1]: Stopped target swap.target. Feb 9 18:50:11.059001 kernel: audit: type=1131 audit(1707504611.055:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.055112 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:50:11.055195 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:50:11.063305 kernel: audit: type=1131 audit(1707504611.059:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.056247 systemd[1]: Stopped target cryptsetup.target. 
Feb 9 18:50:11.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.059044 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:50:11.059123 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:50:11.060290 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:50:11.060371 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:50:11.063415 systemd[1]: Stopped target paths.target. Feb 9 18:50:11.064389 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:50:11.068482 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:50:11.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.074285 iscsid[716]: iscsid shutting down. Feb 9 18:50:11.068731 systemd[1]: Stopped target slices.target. Feb 9 18:50:11.068951 systemd[1]: Stopped target sockets.target. Feb 9 18:50:11.069075 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:50:11.069157 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:50:11.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.078767 ignition[866]: INFO : Ignition 2.14.0 Feb 9 18:50:11.078767 ignition[866]: INFO : Stage: umount Feb 9 18:50:11.078767 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:50:11.078767 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:50:11.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.069307 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:50:11.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:50:11.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.084152 ignition[866]: INFO : umount: umount passed Feb 9 18:50:11.084152 ignition[866]: INFO : Ignition finished successfully Feb 9 18:50:11.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.069381 systemd[1]: Stopped ignition-files.service. Feb 9 18:50:11.070218 systemd[1]: Stopping ignition-mount.service... Feb 9 18:50:11.070607 systemd[1]: Stopping iscsid.service... Feb 9 18:50:11.070744 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:50:11.070859 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:50:11.071768 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:50:11.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.071962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:50:11.072087 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:50:11.072407 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:50:11.072526 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:50:11.075931 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:50:11.076196 systemd[1]: Stopped iscsid.service. Feb 9 18:50:11.077924 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:50:11.077990 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:50:11.079534 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:50:11.079561 systemd[1]: Closed iscsid.socket. Feb 9 18:50:11.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.080428 systemd[1]: Stopping iscsiuio.service... Feb 9 18:50:11.081608 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:50:11.081670 systemd[1]: Stopped ignition-mount.service. Feb 9 18:50:11.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.082925 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:50:11.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:50:11.082966 systemd[1]: Stopped ignition-disks.service. Feb 9 18:50:11.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.084116 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:50:11.084146 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:50:11.084606 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:50:11.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.084641 systemd[1]: Stopped ignition-setup.service. Feb 9 18:50:11.085781 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:50:11.086095 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:50:11.086298 systemd[1]: Stopped iscsiuio.service. Feb 9 18:50:11.087044 systemd[1]: Stopped target network.target. Feb 9 18:50:11.088125 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:50:11.088152 systemd[1]: Closed iscsiuio.socket. Feb 9 18:50:11.089247 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:50:11.090258 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:50:11.113000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:50:11.091463 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:50:11.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.091525 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:50:11.092354 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:50:11.092383 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:50:11.097474 systemd-networkd[709]: eth0: DHCPv6 lease lost Feb 9 18:50:11.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.115000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:50:11.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.098351 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:50:11.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.098421 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:50:11.100024 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:50:11.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.100048 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:50:11.101514 systemd[1]: Stopping network-cleanup.service... Feb 9 18:50:11.102037 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:50:11.102072 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:50:11.103211 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 9 18:50:11.103243 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:50:11.104465 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:50:11.104495 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:50:11.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.105674 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:50:11.107961 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:50:11.108307 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:50:11.108378 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:50:11.113984 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:50:11.114101 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:50:11.115499 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:50:11.115556 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:50:11.115886 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:50:11.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:11.115909 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:50:11.116078 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:50:11.116104 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:50:11.116312 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:50:11.116337 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:50:11.116635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:50:11.116661 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:50:11.120942 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:50:11.121425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:50:11.121472 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:50:11.126815 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:50:11.126878 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:50:11.132549 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:50:11.132621 systemd[1]: Stopped network-cleanup.service. Feb 9 18:50:11.133241 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:50:11.134751 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:50:11.149775 systemd[1]: Switching root. Feb 9 18:50:11.167605 systemd-journald[197]: Journal stopped Feb 9 18:50:14.225284 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Feb 9 18:50:14.225350 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:50:14.225368 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 18:50:14.225387 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:50:14.225400 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:50:14.225413 kernel: SELinux: policy capability open_perms=1 Feb 9 18:50:14.225431 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:50:14.225459 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:50:14.225472 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:50:14.225488 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:50:14.225501 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:50:14.225514 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:50:14.225528 systemd[1]: Successfully loaded SELinux policy in 34.201ms. Feb 9 18:50:14.225555 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.559ms. Feb 9 18:50:14.225571 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:50:14.225586 systemd[1]: Detected virtualization kvm. Feb 9 18:50:14.225600 systemd[1]: Detected architecture x86-64. Feb 9 18:50:14.225613 systemd[1]: Detected first boot. Feb 9 18:50:14.225634 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:50:14.225648 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:50:14.225661 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:50:14.225676 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:50:14.225691 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:50:14.225706 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:50:14.225722 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 18:50:14.225744 systemd[1]: Stopped initrd-switch-root.service. Feb 9 18:50:14.225758 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 18:50:14.225772 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:50:14.225786 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:50:14.225799 systemd[1]: Created slice system-getty.slice. Feb 9 18:50:14.225813 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:50:14.225827 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:50:14.225841 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:50:14.225857 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:50:14.225870 systemd[1]: Created slice user.slice. Feb 9 18:50:14.225884 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:50:14.225897 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:50:14.225911 systemd[1]: Set up automount boot.automount. Feb 9 18:50:14.225925 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:50:14.225938 systemd[1]: Stopped target initrd-switch-root.target. 
Feb 9 18:50:14.225957 systemd[1]: Stopped target initrd-fs.target. Feb 9 18:50:14.225972 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 18:50:14.225986 systemd[1]: Reached target integritysetup.target. Feb 9 18:50:14.225999 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:50:14.226013 systemd[1]: Reached target remote-fs.target. Feb 9 18:50:14.226027 systemd[1]: Reached target slices.target. Feb 9 18:50:14.226040 systemd[1]: Reached target swap.target. Feb 9 18:50:14.226054 systemd[1]: Reached target torcx.target. Feb 9 18:50:14.226068 systemd[1]: Reached target veritysetup.target. Feb 9 18:50:14.226082 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:50:14.226097 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:50:14.226111 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:50:14.226125 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:50:14.226140 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:50:14.226154 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:50:14.226168 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:50:14.226182 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:50:14.226196 systemd[1]: Mounting media.mount... Feb 9 18:50:14.226210 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 18:50:14.226224 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:50:14.226239 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:50:14.226253 systemd[1]: Mounting tmp.mount... Feb 9 18:50:14.226267 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:50:14.226281 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:50:14.226295 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:50:14.226309 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:50:14.226325 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:50:14.226339 systemd[1]: Starting modprobe@drm.service... Feb 9 18:50:14.226354 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:50:14.226370 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:50:14.226383 systemd[1]: Starting modprobe@loop.service... Feb 9 18:50:14.226398 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:50:14.226412 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 18:50:14.226426 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:50:14.226463 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:50:14.226477 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:50:14.226491 systemd[1]: Stopped systemd-journald.service. Feb 9 18:50:14.226506 systemd[1]: Starting systemd-journald.service... Feb 9 18:50:14.226520 kernel: fuse: init (API version 7.34) Feb 9 18:50:14.226533 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:50:14.226548 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:50:14.226561 kernel: loop: module loaded Feb 9 18:50:14.226574 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:50:14.226588 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:50:14.226602 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:50:14.226616 systemd[1]: Stopped verity-setup.service. Feb 9 18:50:14.226630 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 9 18:50:14.226646 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:50:14.226661 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:50:14.226679 systemd[1]: Mounted media.mount. Feb 9 18:50:14.226692 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:50:14.226706 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:50:14.226719 systemd[1]: Mounted tmp.mount. Feb 9 18:50:14.226743 systemd-journald[972]: Journal started Feb 9 18:50:14.226793 systemd-journald[972]: Runtime Journal (/run/log/journal/2106fd2efaaf477eb470bfadc4ae44a1) is 6.0M, max 48.5M, 42.5M free. Feb 9 18:50:11.220000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 18:50:12.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:50:12.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:50:12.059000 audit: BPF prog-id=10 op=LOAD Feb 9 18:50:12.059000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:50:12.059000 audit: BPF prog-id=11 op=LOAD Feb 9 18:50:12.059000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:50:12.085000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:50:12.085000 audit[899]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:50:12.085000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:50:12.086000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:50:12.086000 audit[899]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:50:12.086000 audit: CWD cwd="/" Feb 9 18:50:12.086000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:12.086000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:12.086000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:50:14.120000 audit: BPF prog-id=12 op=LOAD Feb 9 18:50:14.120000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:50:14.120000 audit: BPF prog-id=13 op=LOAD Feb 9 18:50:14.120000 audit: BPF prog-id=14 op=LOAD Feb 9 18:50:14.120000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:50:14.120000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:50:14.121000 audit: BPF prog-id=15 op=LOAD Feb 9 18:50:14.121000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:50:14.121000 audit: BPF prog-id=16 op=LOAD Feb 9 18:50:14.121000 audit: BPF prog-id=17 op=LOAD Feb 9 18:50:14.121000 audit: BPF prog-id=13 op=UNLOAD Feb 9 18:50:14.227655 systemd[1]: Started systemd-journald.service. Feb 9 18:50:14.121000 audit: BPF prog-id=14 op=UNLOAD Feb 9 18:50:14.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.130000 audit: BPF prog-id=15 op=UNLOAD Feb 9 18:50:14.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.201000 audit: BPF prog-id=18 op=LOAD Feb 9 18:50:14.201000 audit: BPF prog-id=19 op=LOAD Feb 9 18:50:14.201000 audit: BPF prog-id=20 op=LOAD Feb 9 18:50:14.201000 audit: BPF prog-id=16 op=UNLOAD Feb 9 18:50:14.201000 audit: BPF prog-id=17 op=UNLOAD Feb 9 18:50:14.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:50:14.223000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:50:14.223000 audit[972]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc4c893550 a2=4000 a3=7ffc4c8935ec items=0 ppid=1 pid=972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:50:14.223000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:50:14.119191 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:50:14.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:12.084944 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:50:14.119201 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 18:50:12.085097 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:50:14.122277 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 18:50:12.085112 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:50:14.228454 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:50:12.085136 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 18:50:12.085144 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 18:50:14.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:12.085167 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 18:50:12.085177 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 18:50:12.085348 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 18:50:12.085379 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:50:14.229301 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Feb 9 18:50:14.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:12.085390 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:50:14.229402 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:50:12.085628 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 18:50:14.230242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:50:12.085658 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 18:50:12.085673 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 18:50:12.085686 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 18:50:12.085700 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 18:50:12.085712 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 18:50:13.870966 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:13Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:50:13.871194 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:13Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:50:13.871274 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:13Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:50:13.871410 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:13Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants 
/lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:50:13.871464 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:13Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 18:50:13.871515 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:50:13Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 18:50:14.231676 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:50:14.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.232642 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:50:14.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.232813 systemd[1]: Finished modprobe@drm.service. Feb 9 18:50:14.233648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:50:14.233820 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:50:14.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.234762 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:50:14.234911 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:50:14.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.235692 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:50:14.235872 systemd[1]: Finished modprobe@loop.service. Feb 9 18:50:14.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:50:14.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.236738 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:50:14.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.237602 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:50:14.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.238543 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:50:14.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.239376 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:50:14.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.240410 systemd[1]: Reached target network-pre.target. Feb 9 18:50:14.242034 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:50:14.243502 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:50:14.244033 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:50:14.245207 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:50:14.246657 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:50:14.247815 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:50:14.248671 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:50:14.249403 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:50:14.250321 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:50:14.252064 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:50:14.254416 systemd-journald[972]: Time spent on flushing to /var/log/journal/2106fd2efaaf477eb470bfadc4ae44a1 is 18.082ms for 1125 entries. Feb 9 18:50:14.254416 systemd-journald[972]: System Journal (/var/log/journal/2106fd2efaaf477eb470bfadc4ae44a1) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:50:14.279495 systemd-journald[972]: Received client request to flush runtime journal. Feb 9 18:50:14.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:50:14.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.256135 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:50:14.256902 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:50:14.259852 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:50:14.260595 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:50:14.263572 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:50:14.268850 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:50:14.277572 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:50:14.279268 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:50:14.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.280209 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:50:14.285584 udevadm[1004]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 18:50:14.699222 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:50:14.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.700000 audit: BPF prog-id=21 op=LOAD Feb 9 18:50:14.700000 audit: BPF prog-id=22 op=LOAD Feb 9 18:50:14.700000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:50:14.700000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:50:14.701168 systemd[1]: Starting systemd-udevd.service... Feb 9 18:50:14.715856 systemd-udevd[1006]: Using default interface naming scheme 'v252'. Feb 9 18:50:14.726643 systemd[1]: Started systemd-udevd.service. Feb 9 18:50:14.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.727000 audit: BPF prog-id=23 op=LOAD Feb 9 18:50:14.730155 systemd[1]: Starting systemd-networkd.service... Feb 9 18:50:14.733000 audit: BPF prog-id=24 op=LOAD Feb 9 18:50:14.733000 audit: BPF prog-id=25 op=LOAD Feb 9 18:50:14.733000 audit: BPF prog-id=26 op=LOAD Feb 9 18:50:14.735170 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:50:14.761507 systemd[1]: Started systemd-userdbd.service. Feb 9 18:50:14.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.773459 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 18:50:14.780862 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 18:50:14.800459 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 18:50:14.810465 kernel: ACPI: button: Power Button [PWRF] Feb 9 18:50:14.812153 systemd-networkd[1012]: lo: Link UP Feb 9 18:50:14.812338 systemd-networkd[1012]: lo: Gained carrier Feb 9 18:50:14.812909 systemd-networkd[1012]: Enumeration completed Feb 9 18:50:14.813061 systemd[1]: Started systemd-networkd.service. Feb 9 18:50:14.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.814481 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:50:14.815273 systemd-networkd[1012]: eth0: Link UP Feb 9 18:50:14.815344 systemd-networkd[1012]: eth0: Gained carrier Feb 9 18:50:14.829537 systemd-networkd[1012]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:50:14.812000 audit[1019]: AVC avc: denied { confidentiality } for pid=1019 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 18:50:14.812000 audit[1019]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5624118872b0 a1=32194 a2=7f0187e06bc5 a3=5 items=108 ppid=1006 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:50:14.812000 audit: CWD cwd="/" Feb 9 18:50:14.812000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=1 name=(null) inode=14335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=2 name=(null) inode=14335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=3 name=(null) inode=14336 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=4 name=(null) inode=14335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=5 name=(null) inode=16385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=6 name=(null) inode=14335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=7 name=(null) inode=16386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=8 name=(null) inode=16386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=9 name=(null) inode=16387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=10 name=(null) inode=16386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=11 name=(null) inode=16388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=12 name=(null) inode=16386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=13 name=(null) inode=16389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=14 name=(null) inode=16386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=15 name=(null) inode=16390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=16 name=(null) inode=16386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=17 name=(null) inode=16391 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=18 name=(null) inode=14335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=19 name=(null) inode=16392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=20 name=(null) inode=16392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=21 name=(null) inode=16393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=22 name=(null) inode=16392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=23 name=(null) inode=16394 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=24 name=(null) inode=16392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=25 
name=(null) inode=16395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=26 name=(null) inode=16392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=27 name=(null) inode=16396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=28 name=(null) inode=16392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=29 name=(null) inode=16397 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=30 name=(null) inode=14335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=31 name=(null) inode=16398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=32 name=(null) inode=16398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=33 name=(null) inode=16399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=34 name=(null) inode=16398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=35 name=(null) inode=16400 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=36 name=(null) inode=16398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=37 name=(null) inode=16401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=38 name=(null) inode=16398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=39 name=(null) inode=16402 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=40 name=(null) inode=16398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=41 name=(null) inode=16403 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=42 name=(null) inode=14335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=43 name=(null) inode=16404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=44 name=(null) inode=16404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=45 name=(null) inode=16405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=46 name=(null) inode=16404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=47 name=(null) inode=16406 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=48 name=(null) inode=16404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=49 name=(null) inode=16407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=50 name=(null) inode=16404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=51 name=(null) inode=16408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=52 name=(null) inode=16404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=53 name=(null) inode=16409 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=55 name=(null) inode=16410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=56 name=(null) inode=16410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=57 name=(null) inode=16411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=58 name=(null) inode=16410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=59 name=(null) inode=16412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=60 name=(null) inode=16410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=61 name=(null) inode=16413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=62 name=(null) inode=16413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=63 name=(null) inode=16414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=64 name=(null) inode=16413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=65 name=(null) inode=16415 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=66 name=(null) inode=16413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=67 name=(null) inode=16416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=68 name=(null) inode=16413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=69 name=(null) inode=16417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=70 name=(null) inode=16413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=71 name=(null) inode=16418 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=72 name=(null) inode=16410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=73 name=(null) inode=16419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=74 name=(null) inode=16419 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=75 name=(null) inode=16420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=76 name=(null) inode=16419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=77 name=(null) inode=16421 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=78 name=(null) inode=16419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=79 name=(null) inode=16422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=80 name=(null) inode=16419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=81 name=(null) inode=16423 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=82 name=(null) inode=16419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=83 name=(null) inode=16424 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=84 name=(null) inode=16410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=85 name=(null) inode=16425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=86 name=(null) inode=16425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=87 name=(null) inode=16426 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=88 name=(null) inode=16425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=89 name=(null) inode=16427 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=90 name=(null) inode=16425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=91 name=(null) inode=16428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=92 name=(null) inode=16425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=93 name=(null) inode=16429 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=94 name=(null) inode=16425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=95 name=(null) inode=16430 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=96 name=(null) inode=16410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=97 name=(null) inode=16431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=98 name=(null) inode=16431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=99 name=(null) inode=16432 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=100 name=(null) inode=16431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=101 name=(null) inode=16433 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=102 name=(null) inode=16431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=103 name=(null) inode=16434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=104 name=(null) inode=16431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=105 name=(null) inode=16435 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PATH item=106 name=(null) inode=16431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: 
PATH item=107 name=(null) inode=16436 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:50:14.812000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 18:50:14.849457 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 18:50:14.852461 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 9 18:50:14.852631 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 18:50:14.891460 kernel: kvm: Nested Virtualization enabled Feb 9 18:50:14.891503 kernel: SVM: kvm: Nested Paging enabled Feb 9 18:50:14.891518 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 18:50:14.891530 kernel: SVM: Virtual GIF supported Feb 9 18:50:14.906452 kernel: EDAC MC: Ver: 3.0.0 Feb 9 18:50:14.929810 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:50:14.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.931548 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:50:14.938408 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:50:14.963435 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:50:14.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.964250 systemd[1]: Reached target cryptsetup.target. Feb 9 18:50:14.965833 systemd[1]: Starting lvm2-activation.service... Feb 9 18:50:14.969118 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:50:14.993318 systemd[1]: Finished lvm2-activation.service. Feb 9 18:50:14.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:14.994102 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:50:14.994718 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:50:14.994742 systemd[1]: Reached target local-fs.target. Feb 9 18:50:14.995298 systemd[1]: Reached target machines.target. Feb 9 18:50:14.996932 systemd[1]: Starting ldconfig.service... Feb 9 18:50:14.997729 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:50:14.997771 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:50:14.998805 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:50:15.000662 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:50:15.002511 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:50:15.003548 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:50:15.003600 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:50:15.004739 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 9 18:50:15.006315 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1044 (bootctl) Feb 9 18:50:15.009129 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:50:15.012493 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:50:15.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.017431 systemd-tmpfiles[1047]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:50:15.019285 systemd-tmpfiles[1047]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:50:15.021784 systemd-tmpfiles[1047]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:50:15.047508 systemd-fsck[1053]: fsck.fat 4.2 (2021-01-31) Feb 9 18:50:15.047508 systemd-fsck[1053]: /dev/vda1: 789 files, 115339/258078 clusters Feb 9 18:50:15.049068 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:50:15.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.051752 systemd[1]: Mounting boot.mount... Feb 9 18:50:15.079055 systemd[1]: Mounted boot.mount. Feb 9 18:50:15.463724 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:50:15.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.521285 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:50:15.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.523262 systemd[1]: Starting audit-rules.service... Feb 9 18:50:15.524631 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:50:15.524845 ldconfig[1043]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:50:15.526075 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:50:15.527000 audit: BPF prog-id=27 op=LOAD Feb 9 18:50:15.528000 audit: BPF prog-id=28 op=LOAD Feb 9 18:50:15.528148 systemd[1]: Starting systemd-resolved.service... Feb 9 18:50:15.532614 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:50:15.534066 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:50:15.535584 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:50:15.536172 systemd[1]: Finished ldconfig.service. Feb 9 18:50:15.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.537017 systemd[1]: Finished systemd-machine-id-commit.service. 
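Note on the "Duplicate line" warnings from systemd-tmpfiles above: they are emitted when two tmpfiles.d fragments declare the same path; the entry parsed first wins and the later one is skipped. A hypothetical pair of entries that would trigger the /run/lock warning (the actual fragment contents on this image may differ):

    # fragment parsed first (wins) - illustrative only
    d /run/lock 0755 root root -
    # /usr/lib/tmpfiles.d/legacy.conf:13 - same path declared again, logged as
    # "Duplicate line for path /run/lock" and ignored
    d /run/lock 1777 root root -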
Feb 9 18:50:15.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.537931 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:50:15.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.538880 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:50:15.540000 audit[1067]: SYSTEM_BOOT pid=1067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.542931 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:50:15.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.548015 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:50:15.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:50:15.548930 augenrules[1076]: No rules Feb 9 18:50:15.548000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:50:15.548000 audit[1076]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe5d2a7410 a2=420 a3=0 items=0 ppid=1056 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:50:15.548000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:50:15.549714 systemd[1]: Starting systemd-update-done.service... Feb 9 18:50:15.550524 systemd[1]: Finished audit-rules.service. Feb 9 18:50:15.554187 systemd[1]: Finished systemd-update-done.service. Feb 9 18:50:15.577772 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:50:15.578796 systemd-timesyncd[1066]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 18:50:15.578837 systemd[1]: Reached target time-set.target. Feb 9 18:50:15.578843 systemd-timesyncd[1066]: Initial clock synchronization to Fri 2024-02-09 18:50:15.905276 UTC. Feb 9 18:50:15.582347 systemd-resolved[1060]: Positive Trust Anchors: Feb 9 18:50:15.582366 systemd-resolved[1060]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:50:15.582401 systemd-resolved[1060]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:50:15.589321 systemd-resolved[1060]: Defaulting to hostname 'linux'. Feb 9 18:50:15.591058 systemd[1]: Started systemd-resolved.service. Feb 9 18:50:15.591808 systemd[1]: Reached target network.target. Feb 9 18:50:15.592403 systemd[1]: Reached target nss-lookup.target. Feb 9 18:50:15.593029 systemd[1]: Reached target sysinit.target. Feb 9 18:50:15.593674 systemd[1]: Started motdgen.path. Feb 9 18:50:15.594323 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:50:15.595344 systemd[1]: Started logrotate.timer. Feb 9 18:50:15.595933 systemd[1]: Started mdadm.timer. Feb 9 18:50:15.596424 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:50:15.597036 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:50:15.597065 systemd[1]: Reached target paths.target. Feb 9 18:50:15.597593 systemd[1]: Reached target timers.target. Feb 9 18:50:15.598423 systemd[1]: Listening on dbus.socket. Feb 9 18:50:15.599937 systemd[1]: Starting docker.socket... Feb 9 18:50:15.602460 systemd[1]: Listening on sshd.socket. Feb 9 18:50:15.603111 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:50:15.603458 systemd[1]: Listening on docker.socket. Feb 9 18:50:15.604056 systemd[1]: Reached target sockets.target. Feb 9 18:50:15.604627 systemd[1]: Reached target basic.target. Feb 9 18:50:15.605194 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:50:15.605216 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:50:15.606047 systemd[1]: Starting containerd.service... Feb 9 18:50:15.607396 systemd[1]: Starting dbus.service... Feb 9 18:50:15.608655 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:50:15.610124 systemd[1]: Starting extend-filesystems.service... Feb 9 18:50:15.610858 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:50:15.611659 systemd[1]: Starting motdgen.service... Feb 9 18:50:15.611987 jq[1087]: false Feb 9 18:50:15.613540 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:50:15.615499 systemd[1]: Starting prepare-critools.service... Feb 9 18:50:15.617065 systemd[1]: Starting prepare-helm.service... Feb 9 18:50:15.618788 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:50:15.620195 systemd[1]: Starting sshd-keygen.service... Feb 9 18:50:15.621751 dbus-daemon[1086]: [system] SELinux support is enabled Feb 9 18:50:15.622785 systemd[1]: Starting systemd-logind.service... 
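Note on the eth0 lease recorded earlier (10.0.0.35/16 from 10.0.0.1): Flatcar's stock /usr/lib/systemd/network/zz-default.network is a catch-all unit that enables DHCP on any interface not claimed by an earlier .network file. A minimal sketch of such a unit, assuming the shipped file's exact contents may differ:

    [Match]
    Name=*

    [Network]
    DHCP=yes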
Feb 9 18:50:15.625151 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:50:15.625202 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:50:15.625727 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 18:50:15.626106 extend-filesystems[1088]: Found sr0 Feb 9 18:50:15.627574 extend-filesystems[1088]: Found vda Feb 9 18:50:15.627574 extend-filesystems[1088]: Found vda1 Feb 9 18:50:15.627574 extend-filesystems[1088]: Found vda2 Feb 9 18:50:15.627574 extend-filesystems[1088]: Found vda3 Feb 9 18:50:15.627574 extend-filesystems[1088]: Found usr Feb 9 18:50:15.627574 extend-filesystems[1088]: Found vda4 Feb 9 18:50:15.627574 extend-filesystems[1088]: Found vda6 Feb 9 18:50:15.627574 extend-filesystems[1088]: Found vda7 Feb 9 18:50:15.627574 extend-filesystems[1088]: Found vda9 Feb 9 18:50:15.627574 extend-filesystems[1088]: Checking size of /dev/vda9 Feb 9 18:50:15.626265 systemd[1]: Starting update-engine.service... Feb 9 18:50:15.633630 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:50:15.635269 systemd[1]: Started dbus.service. Feb 9 18:50:15.637635 jq[1110]: true Feb 9 18:50:15.638359 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:50:15.638517 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:50:15.638742 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:50:15.638858 systemd[1]: Finished motdgen.service. Feb 9 18:50:15.641215 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:50:15.641336 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:50:15.644918 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:50:15.644944 systemd[1]: Reached target system-config.target. Feb 9 18:50:15.645835 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:50:15.645854 systemd[1]: Reached target user-config.target. Feb 9 18:50:15.647529 tar[1112]: ./ Feb 9 18:50:15.647529 tar[1112]: ./macvlan Feb 9 18:50:15.648194 tar[1113]: crictl Feb 9 18:50:15.657529 tar[1114]: linux-amd64/helm Feb 9 18:50:15.657791 jq[1118]: true Feb 9 18:50:15.669241 extend-filesystems[1088]: Resized partition /dev/vda9 Feb 9 18:50:15.673036 update_engine[1106]: I0209 18:50:15.672729 1106 main.cc:92] Flatcar Update Engine starting Feb 9 18:50:15.674464 systemd[1]: Started update-engine.service. Feb 9 18:50:15.674655 update_engine[1106]: I0209 18:50:15.674492 1106 update_check_scheduler.cc:74] Next update check in 9m22s Feb 9 18:50:15.675036 extend-filesystems[1133]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:50:15.679720 systemd[1]: Started locksmithd.service. Feb 9 18:50:15.681808 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 18:50:15.701929 systemd-logind[1102]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 18:50:15.701948 systemd-logind[1102]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 18:50:15.702390 systemd-logind[1102]: New seat seat0. 
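Note on the resize figures above: they are 4 KiB blocks, so resize2fs is growing the root filesystem on /dev/vda9 online, while it remains mounted on /, from 553472 blocks (roughly 2.1 GiB) to 1864699 blocks (1864699 x 4096 bytes, roughly 7.1 GiB).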
Feb 9 18:50:15.703470 env[1119]: time="2024-02-09T18:50:15.703422604Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:50:15.707674 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 18:50:15.707764 systemd[1]: Started systemd-logind.service. Feb 9 18:50:15.726890 tar[1112]: ./static Feb 9 18:50:15.726203 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:50:15.727087 extend-filesystems[1133]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 18:50:15.727087 extend-filesystems[1133]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:50:15.727087 extend-filesystems[1133]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 18:50:15.726339 systemd[1]: Finished extend-filesystems.service. Feb 9 18:50:15.731479 bash[1144]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:50:15.731554 extend-filesystems[1088]: Resized filesystem in /dev/vda9 Feb 9 18:50:15.730888 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:50:15.747647 env[1119]: time="2024-02-09T18:50:15.747611007Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:50:15.758816 tar[1112]: ./vlan Feb 9 18:50:15.769642 env[1119]: time="2024-02-09T18:50:15.769619664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:50:15.772085 env[1119]: time="2024-02-09T18:50:15.772061913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:50:15.772158 env[1119]: time="2024-02-09T18:50:15.772139568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:50:15.772411 env[1119]: time="2024-02-09T18:50:15.772392192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:50:15.772507 env[1119]: time="2024-02-09T18:50:15.772488332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:50:15.772585 env[1119]: time="2024-02-09T18:50:15.772565597Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:50:15.772653 env[1119]: time="2024-02-09T18:50:15.772635849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:50:15.772786 env[1119]: time="2024-02-09T18:50:15.772769329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:50:15.773041 env[1119]: time="2024-02-09T18:50:15.773024017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:50:15.773209 env[1119]: time="2024-02-09T18:50:15.773190629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:50:15.773286 env[1119]: time="2024-02-09T18:50:15.773267483Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:50:15.773392 env[1119]: time="2024-02-09T18:50:15.773374324Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:50:15.773479 env[1119]: time="2024-02-09T18:50:15.773461016Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:50:15.790543 env[1119]: time="2024-02-09T18:50:15.790501848Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:50:15.790587 env[1119]: time="2024-02-09T18:50:15.790571248Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:50:15.790610 env[1119]: time="2024-02-09T18:50:15.790587078Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:50:15.790699 env[1119]: time="2024-02-09T18:50:15.790621332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:50:15.790699 env[1119]: time="2024-02-09T18:50:15.790689369Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:50:15.790757 env[1119]: time="2024-02-09T18:50:15.790720207Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:50:15.790757 env[1119]: time="2024-02-09T18:50:15.790732651Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:50:15.790757 env[1119]: time="2024-02-09T18:50:15.790745455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:50:15.790821 env[1119]: time="2024-02-09T18:50:15.790759060Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:50:15.790821 env[1119]: time="2024-02-09T18:50:15.790771724Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:50:15.790821 env[1119]: time="2024-02-09T18:50:15.790794506Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:50:15.790821 env[1119]: time="2024-02-09T18:50:15.790805898Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:50:15.790961 env[1119]: time="2024-02-09T18:50:15.790926314Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:50:15.791047 env[1119]: time="2024-02-09T18:50:15.791024518Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:50:15.791337 env[1119]: time="2024-02-09T18:50:15.791314191Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:50:15.791378 env[1119]: time="2024-02-09T18:50:15.791343476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 9 18:50:15.791378 env[1119]: time="2024-02-09T18:50:15.791355879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:50:15.791419 env[1119]: time="2024-02-09T18:50:15.791407786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791453 env[1119]: time="2024-02-09T18:50:15.791420470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791482 env[1119]: time="2024-02-09T18:50:15.791433515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791482 env[1119]: time="2024-02-09T18:50:15.791465525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791482 env[1119]: time="2024-02-09T18:50:15.791477387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791536 env[1119]: time="2024-02-09T18:50:15.791489550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791575 env[1119]: time="2024-02-09T18:50:15.791500951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791575 env[1119]: time="2024-02-09T18:50:15.791574218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791644 env[1119]: time="2024-02-09T18:50:15.791599726Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:50:15.791747 env[1119]: time="2024-02-09T18:50:15.791722066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791787 env[1119]: time="2024-02-09T18:50:15.791756230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791787 env[1119]: time="2024-02-09T18:50:15.791768703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:50:15.791787 env[1119]: time="2024-02-09T18:50:15.791779984Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:50:15.791853 env[1119]: time="2024-02-09T18:50:15.791794281Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:50:15.791853 env[1119]: time="2024-02-09T18:50:15.791805372Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:50:15.791853 env[1119]: time="2024-02-09T18:50:15.791834717Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:50:15.791913 env[1119]: time="2024-02-09T18:50:15.791872548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 18:50:15.792138 env[1119]: time="2024-02-09T18:50:15.792083443Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:50:15.793043 env[1119]: time="2024-02-09T18:50:15.792146892Z" level=info msg="Connect containerd service" Feb 9 18:50:15.793043 env[1119]: time="2024-02-09T18:50:15.792174133Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:50:15.793332 env[1119]: time="2024-02-09T18:50:15.793314421Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:50:15.793555 env[1119]: time="2024-02-09T18:50:15.793528573Z" level=info msg="Start subscribing containerd event" Feb 9 18:50:15.793662 env[1119]: time="2024-02-09T18:50:15.793645502Z" level=info msg="Start recovering state" Feb 9 18:50:15.793803 env[1119]: time="2024-02-09T18:50:15.793777600Z" level=info msg="Start event monitor" Feb 9 18:50:15.793885 env[1119]: time="2024-02-09T18:50:15.793869021Z" level=info msg="Start snapshots syncer" Feb 9 18:50:15.793972 env[1119]: time="2024-02-09T18:50:15.793955744Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:50:15.794059 env[1119]: time="2024-02-09T18:50:15.794042296Z" level=info msg="Start streaming server" Feb 9 18:50:15.794448 env[1119]: time="2024-02-09T18:50:15.794425495Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 18:50:15.794567 env[1119]: time="2024-02-09T18:50:15.794552062Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:50:15.794688 env[1119]: time="2024-02-09T18:50:15.794675594Z" level=info msg="containerd successfully booted in 0.099016s" Feb 9 18:50:15.794778 systemd[1]: Started containerd.service. Feb 9 18:50:15.796618 tar[1112]: ./portmap Feb 9 18:50:15.828704 tar[1112]: ./host-local Feb 9 18:50:15.854982 tar[1112]: ./vrf Feb 9 18:50:15.882644 tar[1112]: ./bridge Feb 9 18:50:15.916025 tar[1112]: ./tuning Feb 9 18:50:15.942986 tar[1112]: ./firewall Feb 9 18:50:15.953819 systemd[1]: Created slice system-sshd.slice. Feb 9 18:50:15.977731 tar[1112]: ./host-device Feb 9 18:50:16.008229 tar[1112]: ./sbr Feb 9 18:50:16.036657 tar[1112]: ./loopback Feb 9 18:50:16.066433 tar[1112]: ./dhcp Feb 9 18:50:16.069717 tar[1114]: linux-amd64/LICENSE Feb 9 18:50:16.069810 tar[1114]: linux-amd64/README.md Feb 9 18:50:16.073769 systemd[1]: Finished prepare-helm.service. Feb 9 18:50:16.080993 locksmithd[1143]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:50:16.098944 systemd[1]: Finished prepare-critools.service. Feb 9 18:50:16.137373 sshd_keygen[1108]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:50:16.145240 tar[1112]: ./ptp Feb 9 18:50:16.155266 systemd[1]: Finished sshd-keygen.service. Feb 9 18:50:16.157306 systemd[1]: Starting issuegen.service... Feb 9 18:50:16.158683 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:35668.service. Feb 9 18:50:16.163742 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:50:16.163864 systemd[1]: Finished issuegen.service. Feb 9 18:50:16.165665 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:50:16.170305 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:50:16.172149 systemd[1]: Started getty@tty1.service. Feb 9 18:50:16.173614 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 18:50:16.174455 systemd[1]: Reached target getty.target. Feb 9 18:50:16.179061 tar[1112]: ./ipvlan Feb 9 18:50:16.200792 sshd[1167]: Accepted publickey for core from 10.0.0.1 port 35668 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:50:16.202041 sshd[1167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:50:16.207466 tar[1112]: ./bandwidth Feb 9 18:50:16.210960 systemd-logind[1102]: New session 1 of user core. Feb 9 18:50:16.211290 systemd[1]: Created slice user-500.slice. Feb 9 18:50:16.213026 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:50:16.216589 systemd-networkd[1012]: eth0: Gained IPv6LL Feb 9 18:50:16.221196 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:50:16.222987 systemd[1]: Starting user@500.service... Feb 9 18:50:16.225985 (systemd)[1175]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:50:16.244631 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:50:16.245578 systemd[1]: Reached target multi-user.target. Feb 9 18:50:16.247232 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:50:16.253684 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:50:16.253802 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:50:16.297054 systemd[1175]: Queued start job for default target default.target. Feb 9 18:50:16.297510 systemd[1175]: Reached target paths.target. 
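The CRI plugin's startup above ends with "failed to load cni during init ... no network config found in /etc/cni/net.d", which is expected this early in boot: prepare-cni-plugins.service is still unpacking the plugin binaries (portmap, bridge, loopback, dhcp, ...) into /opt/cni/bin, and nothing has written a network configuration yet, so the "cni network conf syncer" started just afterwards keeps waiting. As an illustrative aid only (nothing the boot itself runs), a small script can report what that syncer is waiting for, using the two directories from the CRI config dump above; the file-extension filter is an assumption rather than containerd's exact matching logic.

#!/usr/bin/env python3
"""Report whether a CNI network config is present for containerd's CRI plugin.

Paths come from the CRI config dump in the log
(NetworkPluginConfDir:/etc/cni/net.d, NetworkPluginBinDir:/opt/cni/bin).
"""
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")
CNI_BIN_DIR = Path("/opt/cni/bin")
CONF_SUFFIXES = {".conf", ".conflist", ".json"}   # assumed filter, per common CNI convention

def cni_ready() -> bool:
    confs = sorted(p.name for p in CNI_CONF_DIR.iterdir()
                   if p.suffix in CONF_SUFFIXES) if CNI_CONF_DIR.is_dir() else []
    plugins = sorted(p.name for p in CNI_BIN_DIR.iterdir()) if CNI_BIN_DIR.is_dir() else []
    print("plugin binaries:", plugins or "none yet")
    print("network configs:", confs or "none yet (matches the log message)")
    return bool(confs)

if __name__ == "__main__":
    raise SystemExit(0 if cni_ready() else 1)

Once a network provider later drops a config file into /etc/cni/net.d, the conf syncer picks it up and pod networking can be set up; until then the error above is harmless.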
Feb 9 18:50:16.297530 systemd[1175]: Reached target sockets.target. Feb 9 18:50:16.297542 systemd[1175]: Reached target timers.target. Feb 9 18:50:16.297552 systemd[1175]: Reached target basic.target. Feb 9 18:50:16.297585 systemd[1175]: Reached target default.target. Feb 9 18:50:16.297609 systemd[1175]: Startup finished in 66ms. Feb 9 18:50:16.297635 systemd[1]: Started user@500.service. Feb 9 18:50:16.298915 systemd[1]: Started session-1.scope. Feb 9 18:50:16.299544 systemd[1]: Startup finished in 543ms (kernel) + 5.501s (initrd) + 5.115s (userspace) = 11.160s. Feb 9 18:50:16.352896 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:52270.service. Feb 9 18:50:16.391473 sshd[1187]: Accepted publickey for core from 10.0.0.1 port 52270 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:50:16.392490 sshd[1187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:50:16.395851 systemd-logind[1102]: New session 2 of user core. Feb 9 18:50:16.396610 systemd[1]: Started session-2.scope. Feb 9 18:50:16.450770 sshd[1187]: pam_unix(sshd:session): session closed for user core Feb 9 18:50:16.453394 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:52270.service: Deactivated successfully. Feb 9 18:50:16.453908 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:50:16.454348 systemd-logind[1102]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:50:16.455543 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:52274.service. Feb 9 18:50:16.456290 systemd-logind[1102]: Removed session 2. Feb 9 18:50:16.494326 sshd[1193]: Accepted publickey for core from 10.0.0.1 port 52274 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:50:16.495381 sshd[1193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:50:16.498935 systemd-logind[1102]: New session 3 of user core. Feb 9 18:50:16.499824 systemd[1]: Started session-3.scope. Feb 9 18:50:16.550406 sshd[1193]: pam_unix(sshd:session): session closed for user core Feb 9 18:50:16.553044 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:52274.service: Deactivated successfully. Feb 9 18:50:16.553556 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:50:16.554050 systemd-logind[1102]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:50:16.554938 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:52290.service. Feb 9 18:50:16.555660 systemd-logind[1102]: Removed session 3. Feb 9 18:50:16.593217 sshd[1199]: Accepted publickey for core from 10.0.0.1 port 52290 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:50:16.594207 sshd[1199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:50:16.597325 systemd-logind[1102]: New session 4 of user core. Feb 9 18:50:16.597981 systemd[1]: Started session-4.scope. Feb 9 18:50:16.651772 sshd[1199]: pam_unix(sshd:session): session closed for user core Feb 9 18:50:16.654198 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:52290.service: Deactivated successfully. Feb 9 18:50:16.654772 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:50:16.655276 systemd-logind[1102]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:50:16.656343 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:52302.service. Feb 9 18:50:16.656978 systemd-logind[1102]: Removed session 4. 
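Sessions 2, 3 and 4 above are each opened and torn down in well under a second, which is the signature of scripted SSH probes rather than interactive logins. Purely as a log-analysis aid (not part of the original record), the pam_unix "session opened"/"session closed" pairs can be matched by sshd PID to measure session lifetimes; journal short timestamps carry no year, so 2024 is assumed below.

import re
from datetime import datetime

TS = r"(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)"
OPENED = re.compile(TS + r" sshd\[(?P<pid>\d+)\]: pam_unix\(sshd:session\): session opened")
CLOSED = re.compile(TS + r" sshd\[(?P<pid>\d+)\]: pam_unix\(sshd:session\): session closed")

def parse_ts(ts: str, year: int = 2024) -> datetime:
    # e.g. "Feb 9 18:50:16.392490" -> datetime(2024, 2, 9, 18, 50, 16, 392490)
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    """Yield (sshd pid, seconds open) for every opened/closed pair found."""
    opened = {}
    for line in lines:
        if m := OPENED.search(line):
            opened[m["pid"]] = parse_ts(m["ts"])
        elif (m := CLOSED.search(line)) and m["pid"] in opened:
            yield m["pid"], (parse_ts(m["ts"]) - opened.pop(m["pid"])).total_seconds()

Fed the sshd[1187] lines above, this reports roughly 0.06 s for session 2, with similar figures for sessions 3 and 4.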
Feb 9 18:50:16.694612 sshd[1205]: Accepted publickey for core from 10.0.0.1 port 52302 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:50:16.695603 sshd[1205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:50:16.699004 systemd-logind[1102]: New session 5 of user core. Feb 9 18:50:16.700168 systemd[1]: Started session-5.scope. Feb 9 18:50:16.755085 sudo[1208]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:50:16.755250 sudo[1208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:50:17.271153 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:50:18.906031 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:50:18.906316 systemd[1]: Reached target network-online.target. Feb 9 18:50:18.907463 systemd[1]: Starting docker.service... Feb 9 18:50:18.946378 env[1226]: time="2024-02-09T18:50:18.946309940Z" level=info msg="Starting up" Feb 9 18:50:18.947998 env[1226]: time="2024-02-09T18:50:18.947963401Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:50:18.947998 env[1226]: time="2024-02-09T18:50:18.947987000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:50:18.948092 env[1226]: time="2024-02-09T18:50:18.948057804Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:50:18.948092 env[1226]: time="2024-02-09T18:50:18.948078106Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:50:18.949802 env[1226]: time="2024-02-09T18:50:18.949768846Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:50:18.949802 env[1226]: time="2024-02-09T18:50:18.949790935Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:50:18.949883 env[1226]: time="2024-02-09T18:50:18.949806815Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:50:18.949883 env[1226]: time="2024-02-09T18:50:18.949819813Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:50:19.689147 env[1226]: time="2024-02-09T18:50:19.689103172Z" level=info msg="Loading containers: start." Feb 9 18:50:19.780489 kernel: Initializing XFRM netlink socket Feb 9 18:50:19.807033 env[1226]: time="2024-02-09T18:50:19.806993467Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:50:19.847432 systemd-networkd[1012]: docker0: Link UP Feb 9 18:50:19.856019 env[1226]: time="2024-02-09T18:50:19.855993327Z" level=info msg="Loading containers: done." Feb 9 18:50:19.863648 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1436953611-merged.mount: Deactivated successfully. 
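Before loading containers, dockerd above resolves its gRPC target unix:///var/run/docker/libcontainerd/docker-containerd.sock and switches the balancer to pick_first, while containerd itself is already serving on /run/containerd/containerd.sock and its ttrpc twin. When that handshake fails on a real host, the quickest sanity check is whether anything is listening on those UNIX sockets at all. The probe below is only an illustrative sketch (it usually needs root to reach these sockets) and is not part of the boot sequence:

import socket

# Socket paths taken from the log above; adjust for other hosts.
ENDPOINTS = [
    "/run/containerd/containerd.sock",
    "/run/containerd/containerd.sock.ttrpc",
    "/var/run/docker/libcontainerd/docker-containerd.sock",
]

def listening(path: str, timeout: float = 1.0) -> bool:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)            # succeeds only if a daemon is accepting connections
        return True
    except OSError:
        return False
    finally:
        s.close()

for ep in ENDPOINTS:
    print(f"{ep}: {'listening' if listening(ep) else 'not reachable'}")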
Feb 9 18:50:19.864903 env[1226]: time="2024-02-09T18:50:19.864870383Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:50:19.865061 env[1226]: time="2024-02-09T18:50:19.865039657Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:50:19.865129 env[1226]: time="2024-02-09T18:50:19.865116750Z" level=info msg="Daemon has completed initialization" Feb 9 18:50:19.879089 systemd[1]: Started docker.service. Feb 9 18:50:19.882685 env[1226]: time="2024-02-09T18:50:19.882654273Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:50:19.897930 systemd[1]: Reloading. Feb 9 18:50:19.960230 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2024-02-09T18:50:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:50:19.960255 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2024-02-09T18:50:19Z" level=info msg="torcx already run" Feb 9 18:50:20.019664 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:50:20.019685 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:50:20.037129 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:50:20.110590 systemd[1]: Started kubelet.service. Feb 9 18:50:20.156052 kubelet[1409]: E0209 18:50:20.155987 1409 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:50:20.158119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:50:20.158229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:50:20.597531 env[1119]: time="2024-02-09T18:50:20.597487810Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 18:50:21.272204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212643236.mount: Deactivated successfully. 
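The first kubelet start fails flag validation with "the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set": since the dockershim was removed, the kubelet has to be told explicitly which CRI socket to use, and this one refuses to start without it, as the error shows. The containerd config dump earlier in the log reports ContainerdEndpoint:/run/containerd/containerd.sock, so that is the natural endpoint here, and the kubelet instance that eventually comes up further down does report containerd 1.6.16 as its runtime; the exact flag or config change applied in between is not captured in this log. A hypothetical pre-flight check along these lines could guard the unit:

import os
import stat
import sys

# Endpoint taken from containerd's config dump in this log; an assumption for other hosts.
CRI_SOCKET = "/run/containerd/containerd.sock"

def cri_socket_present(path: str = CRI_SOCKET) -> bool:
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False
    return stat.S_ISSOCK(mode)

if __name__ == "__main__":
    if cri_socket_present():
        print(f"CRI socket found; start kubelet with "
              f"--container-runtime-endpoint=unix://{CRI_SOCKET}")
        sys.exit(0)
    print("no CRI socket; kubelet would fail flag validation as in the log", file=sys.stderr)
    sys.exit(1)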
Feb 9 18:50:22.996620 env[1119]: time="2024-02-09T18:50:22.996552889Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:22.999802 env[1119]: time="2024-02-09T18:50:22.999765332Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:23.001594 env[1119]: time="2024-02-09T18:50:23.001544732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:23.003177 env[1119]: time="2024-02-09T18:50:23.003137682Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:23.003817 env[1119]: time="2024-02-09T18:50:23.003789324Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 18:50:23.012246 env[1119]: time="2024-02-09T18:50:23.012202902Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 18:50:26.172612 env[1119]: time="2024-02-09T18:50:26.172549975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:26.174341 env[1119]: time="2024-02-09T18:50:26.174320946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:26.176058 env[1119]: time="2024-02-09T18:50:26.176037182Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:26.177529 env[1119]: time="2024-02-09T18:50:26.177497547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:26.178174 env[1119]: time="2024-02-09T18:50:26.178142971Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 18:50:26.187112 env[1119]: time="2024-02-09T18:50:26.187077839Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 18:50:27.491262 env[1119]: time="2024-02-09T18:50:27.491208684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:27.493038 env[1119]: time="2024-02-09T18:50:27.492995768Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:27.494701 env[1119]: 
time="2024-02-09T18:50:27.494674735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:27.496661 env[1119]: time="2024-02-09T18:50:27.496628510Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:27.499200 env[1119]: time="2024-02-09T18:50:27.499156588Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 18:50:27.508608 env[1119]: time="2024-02-09T18:50:27.508576076Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:50:28.971887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764087156.mount: Deactivated successfully. Feb 9 18:50:29.427854 env[1119]: time="2024-02-09T18:50:29.427779394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:29.429366 env[1119]: time="2024-02-09T18:50:29.429325838Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:29.430713 env[1119]: time="2024-02-09T18:50:29.430687881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:29.432014 env[1119]: time="2024-02-09T18:50:29.431984305Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:29.432435 env[1119]: time="2024-02-09T18:50:29.432402117Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 18:50:29.442436 env[1119]: time="2024-02-09T18:50:29.442399491Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:50:30.049635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799176961.mount: Deactivated successfully. 
Feb 9 18:50:30.054672 env[1119]: time="2024-02-09T18:50:30.054630579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:30.056335 env[1119]: time="2024-02-09T18:50:30.056310567Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:30.057709 env[1119]: time="2024-02-09T18:50:30.057677928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:30.058851 env[1119]: time="2024-02-09T18:50:30.058824426Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:30.059185 env[1119]: time="2024-02-09T18:50:30.059158992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 18:50:30.068047 env[1119]: time="2024-02-09T18:50:30.068021027Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 18:50:30.235326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:50:30.235530 systemd[1]: Stopped kubelet.service. Feb 9 18:50:30.236775 systemd[1]: Started kubelet.service. Feb 9 18:50:30.276719 kubelet[1467]: E0209 18:50:30.276662 1467 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:50:30.279818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:50:30.279937 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:50:30.699835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575041344.mount: Deactivated successfully. 
Feb 9 18:50:35.397677 env[1119]: time="2024-02-09T18:50:35.397603341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:35.399942 env[1119]: time="2024-02-09T18:50:35.399885932Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:35.401684 env[1119]: time="2024-02-09T18:50:35.401644338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:35.403282 env[1119]: time="2024-02-09T18:50:35.403257222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:35.403809 env[1119]: time="2024-02-09T18:50:35.403768011Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 18:50:35.412928 env[1119]: time="2024-02-09T18:50:35.412887793Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 18:50:36.049073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491216980.mount: Deactivated successfully. Feb 9 18:50:37.290067 env[1119]: time="2024-02-09T18:50:37.290007419Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:37.291926 env[1119]: time="2024-02-09T18:50:37.291879105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:37.294028 env[1119]: time="2024-02-09T18:50:37.293970283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:37.295366 env[1119]: time="2024-02-09T18:50:37.295339348Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:37.295806 env[1119]: time="2024-02-09T18:50:37.295782601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 18:50:39.453282 systemd[1]: Stopped kubelet.service. Feb 9 18:50:39.466016 systemd[1]: Reloading. 
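The two remaining control-plane images dominate pull time: etcd:3.5.6-0 is requested at 18:50:30.068 and resolves at 18:50:35.403, while coredns:v1.9.3 is requested at 18:50:35.412 and resolves at 18:50:37.295. A couple of lines of timestamp arithmetic (year assumed, since journal short timestamps omit it) makes the comparison explicit; this is an analysis aid, not part of the log.

from datetime import datetime

def ts(value: str) -> datetime:
    # Journal-style timestamp as it appears above, year assumed to be 2024.
    return datetime.strptime("2024 " + value, "%Y %b %d %H:%M:%S.%f")

# Request / "returns image reference" timestamps copied from the log above.
etcd = ts("Feb 9 18:50:35.403809") - ts("Feb 9 18:50:30.068047")
coredns = ts("Feb 9 18:50:37.295806") - ts("Feb 9 18:50:35.412928")
print(f"etcd:3.5.6-0 pull:   {etcd.total_seconds():.1f}s")
print(f"coredns:v1.9.3 pull: {coredns.total_seconds():.1f}s")

This prints about 5.3 s and 1.9 s, which makes etcd the slowest pull of this boot, ahead of kube-controller-manager at roughly 3.2 s.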
Feb 9 18:50:39.528128 /usr/lib/systemd/system-generators/torcx-generator[1572]: time="2024-02-09T18:50:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:50:39.528465 /usr/lib/systemd/system-generators/torcx-generator[1572]: time="2024-02-09T18:50:39Z" level=info msg="torcx already run" Feb 9 18:50:39.584151 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:50:39.584167 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:50:39.600585 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:50:39.673890 systemd[1]: Started kubelet.service. Feb 9 18:50:39.713969 kubelet[1611]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:50:39.713969 kubelet[1611]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:50:39.713969 kubelet[1611]: I0209 18:50:39.713923 1611 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:50:39.717476 kubelet[1611]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:50:39.717476 kubelet[1611]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:50:40.037880 kubelet[1611]: I0209 18:50:40.037843 1611 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:50:40.037880 kubelet[1611]: I0209 18:50:40.037868 1611 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:50:40.038079 kubelet[1611]: I0209 18:50:40.038062 1611 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:50:40.041672 kubelet[1611]: E0209 18:50:40.041641 1611 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.041886 kubelet[1611]: I0209 18:50:40.041861 1611 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:50:40.044317 kubelet[1611]: I0209 18:50:40.044301 1611 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:50:40.044487 kubelet[1611]: I0209 18:50:40.044471 1611 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:50:40.044549 kubelet[1611]: I0209 18:50:40.044534 1611 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:50:40.044638 kubelet[1611]: I0209 18:50:40.044553 1611 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:50:40.044638 kubelet[1611]: I0209 18:50:40.044564 1611 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:50:40.044699 kubelet[1611]: I0209 18:50:40.044641 1611 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:50:40.050588 kubelet[1611]: I0209 18:50:40.050565 1611 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:50:40.050588 kubelet[1611]: I0209 18:50:40.050589 1611 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:50:40.050735 kubelet[1611]: I0209 18:50:40.050611 1611 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:50:40.050735 kubelet[1611]: I0209 18:50:40.050626 1611 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:50:40.051088 kubelet[1611]: W0209 18:50:40.051032 1611 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.051088 kubelet[1611]: E0209 18:50:40.051076 1611 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.051158 kubelet[1611]: I0209 18:50:40.051132 1611 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:50:40.051255 kubelet[1611]: W0209 18:50:40.051222 1611 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get 
"https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.051303 kubelet[1611]: E0209 18:50:40.051261 1611 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.051390 kubelet[1611]: W0209 18:50:40.051335 1611 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:50:40.051719 kubelet[1611]: I0209 18:50:40.051695 1611 server.go:1186] "Started kubelet" Feb 9 18:50:40.052097 kubelet[1611]: E0209 18:50:40.051979 1611 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c629c5fc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 51666888, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 51666888, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.35:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.35:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:50:40.052699 kubelet[1611]: E0209 18:50:40.052672 1611 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:50:40.052745 kubelet[1611]: E0209 18:50:40.052704 1611 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:50:40.053064 kubelet[1611]: I0209 18:50:40.053045 1611 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:50:40.053787 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 18:50:40.053884 kubelet[1611]: I0209 18:50:40.053859 1611 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:50:40.054100 kubelet[1611]: I0209 18:50:40.054075 1611 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:50:40.055755 kubelet[1611]: E0209 18:50:40.055739 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:40.055840 kubelet[1611]: I0209 18:50:40.055816 1611 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:50:40.056004 kubelet[1611]: I0209 18:50:40.055928 1611 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:50:40.056153 kubelet[1611]: E0209 18:50:40.056129 1611 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.056313 kubelet[1611]: W0209 18:50:40.056243 1611 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.056362 kubelet[1611]: E0209 18:50:40.056317 1611 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.096843 kubelet[1611]: I0209 18:50:40.096812 1611 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:50:40.097015 kubelet[1611]: I0209 18:50:40.097001 1611 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:50:40.097100 kubelet[1611]: I0209 18:50:40.097086 1611 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:50:40.100044 kubelet[1611]: I0209 18:50:40.100028 1611 policy_none.go:49] "None policy: Start" Feb 9 18:50:40.100660 kubelet[1611]: I0209 18:50:40.100640 1611 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:50:40.106553 kubelet[1611]: I0209 18:50:40.106516 1611 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:50:40.106662 kubelet[1611]: I0209 18:50:40.106560 1611 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:50:40.113344 systemd[1]: Created slice kubepods.slice. Feb 9 18:50:40.116944 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 18:50:40.119090 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 18:50:40.124024 kubelet[1611]: I0209 18:50:40.123995 1611 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:50:40.124192 kubelet[1611]: I0209 18:50:40.124169 1611 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:50:40.124626 kubelet[1611]: E0209 18:50:40.124606 1611 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 18:50:40.127575 kubelet[1611]: I0209 18:50:40.127551 1611 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:50:40.127575 kubelet[1611]: I0209 18:50:40.127569 1611 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:50:40.127699 kubelet[1611]: I0209 18:50:40.127586 1611 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:50:40.127699 kubelet[1611]: E0209 18:50:40.127621 1611 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:50:40.128060 kubelet[1611]: W0209 18:50:40.128038 1611 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.128237 kubelet[1611]: E0209 18:50:40.128223 1611 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.157831 kubelet[1611]: I0209 18:50:40.157808 1611 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:50:40.158150 kubelet[1611]: E0209 18:50:40.158131 1611 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Feb 9 18:50:40.228230 kubelet[1611]: I0209 18:50:40.228197 1611 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:50:40.229118 kubelet[1611]: I0209 18:50:40.229102 1611 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:50:40.229612 kubelet[1611]: I0209 18:50:40.229597 1611 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:50:40.230797 kubelet[1611]: I0209 18:50:40.230774 1611 status_manager.go:698] "Failed to get status for pod" podUID=3989cad7beddfbfeefc5d6ab7f841d08 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.35:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.35:6443: connect: connection refused" Feb 9 18:50:40.231003 kubelet[1611]: I0209 18:50:40.230985 1611 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.35:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.35:6443: connect: connection refused" Feb 9 18:50:40.231290 kubelet[1611]: I0209 18:50:40.231275 1611 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.35:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.35:6443: connect: connection refused" Feb 9 18:50:40.234088 systemd[1]: Created slice kubepods-burstable-pod3989cad7beddfbfeefc5d6ab7f841d08.slice. Feb 9 18:50:40.242778 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. Feb 9 18:50:40.250825 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. 
Feb 9 18:50:40.257236 kubelet[1611]: E0209 18:50:40.257204 1611 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.357758 kubelet[1611]: I0209 18:50:40.357604 1611 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3989cad7beddfbfeefc5d6ab7f841d08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3989cad7beddfbfeefc5d6ab7f841d08\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:50:40.357758 kubelet[1611]: I0209 18:50:40.357652 1611 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:40.357758 kubelet[1611]: I0209 18:50:40.357671 1611 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:40.357758 kubelet[1611]: I0209 18:50:40.357690 1611 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:40.357758 kubelet[1611]: I0209 18:50:40.357718 1611 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:50:40.358049 kubelet[1611]: I0209 18:50:40.357763 1611 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3989cad7beddfbfeefc5d6ab7f841d08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3989cad7beddfbfeefc5d6ab7f841d08\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:50:40.358049 kubelet[1611]: I0209 18:50:40.357785 1611 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:40.358049 kubelet[1611]: I0209 18:50:40.357803 1611 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:40.358049 kubelet[1611]: 
I0209 18:50:40.357823 1611 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3989cad7beddfbfeefc5d6ab7f841d08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3989cad7beddfbfeefc5d6ab7f841d08\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:50:40.359598 kubelet[1611]: I0209 18:50:40.359583 1611 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:50:40.359868 kubelet[1611]: E0209 18:50:40.359850 1611 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Feb 9 18:50:40.541573 kubelet[1611]: E0209 18:50:40.541532 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:40.542190 env[1119]: time="2024-02-09T18:50:40.542140055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3989cad7beddfbfeefc5d6ab7f841d08,Namespace:kube-system,Attempt:0,}" Feb 9 18:50:40.550393 kubelet[1611]: E0209 18:50:40.550352 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:40.550820 env[1119]: time="2024-02-09T18:50:40.550782099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 18:50:40.552913 kubelet[1611]: E0209 18:50:40.552897 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:40.553237 env[1119]: time="2024-02-09T18:50:40.553202211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 18:50:40.658082 kubelet[1611]: E0209 18:50:40.658008 1611 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:40.761305 kubelet[1611]: I0209 18:50:40.761271 1611 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:50:40.761744 kubelet[1611]: E0209 18:50:40.761653 1611 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Feb 9 18:50:41.075102 kubelet[1611]: W0209 18:50:41.075040 1611 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:41.075102 kubelet[1611]: E0209 18:50:41.075099 1611 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:41.094836 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1309221742.mount: Deactivated successfully. Feb 9 18:50:41.100388 env[1119]: time="2024-02-09T18:50:41.100335051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.101951 env[1119]: time="2024-02-09T18:50:41.101902462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.102856 env[1119]: time="2024-02-09T18:50:41.102813155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.103458 env[1119]: time="2024-02-09T18:50:41.103417060Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.107322 env[1119]: time="2024-02-09T18:50:41.107281847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.108483 env[1119]: time="2024-02-09T18:50:41.108453707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.109678 env[1119]: time="2024-02-09T18:50:41.109646455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.111077 env[1119]: time="2024-02-09T18:50:41.111048077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.113338 env[1119]: time="2024-02-09T18:50:41.113310487Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.115406 env[1119]: time="2024-02-09T18:50:41.115373253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.116132 env[1119]: time="2024-02-09T18:50:41.116099735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.117747 env[1119]: time="2024-02-09T18:50:41.117713381Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:41.137815 env[1119]: time="2024-02-09T18:50:41.136054286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:41.137815 env[1119]: time="2024-02-09T18:50:41.136089643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:41.137815 env[1119]: time="2024-02-09T18:50:41.136099817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:41.138135 env[1119]: time="2024-02-09T18:50:41.138079338Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e57f02177f75fcbbfed71e6c52642a0c5727fee6f955c13b6b2a3ca32cdd3385 pid=1691 runtime=io.containerd.runc.v2 Feb 9 18:50:41.146825 env[1119]: time="2024-02-09T18:50:41.145286408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:41.146825 env[1119]: time="2024-02-09T18:50:41.145343518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:41.146825 env[1119]: time="2024-02-09T18:50:41.145353281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:41.146825 env[1119]: time="2024-02-09T18:50:41.146188412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15a07ab58fac65c64b8fe196b229576904ef739693f094575c001b89559f2588 pid=1706 runtime=io.containerd.runc.v2 Feb 9 18:50:41.147343 env[1119]: time="2024-02-09T18:50:41.147251020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:41.147422 env[1119]: time="2024-02-09T18:50:41.147386499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:41.147495 env[1119]: time="2024-02-09T18:50:41.147466996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:41.148389 env[1119]: time="2024-02-09T18:50:41.148298295Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93c6cd78aa37279c41c316a3f74cc5a3c1adb9c1e169202956f3d428f47bd3ac pid=1722 runtime=io.containerd.runc.v2 Feb 9 18:50:41.153070 systemd[1]: Started cri-containerd-e57f02177f75fcbbfed71e6c52642a0c5727fee6f955c13b6b2a3ca32cdd3385.scope. Feb 9 18:50:41.163376 systemd[1]: Started cri-containerd-93c6cd78aa37279c41c316a3f74cc5a3c1adb9c1e169202956f3d428f47bd3ac.scope. Feb 9 18:50:41.210595 systemd[1]: Started cri-containerd-15a07ab58fac65c64b8fe196b229576904ef739693f094575c001b89559f2588.scope. 
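Because the CRI runtime options above set SystemdCgroup:true for runc, each pod sandbox appears twice in this part of the log: once as a runc.v2 shim whose state lives under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>, and once as a transient cri-containerd-<id>.scope systemd unit (the kube-apiserver sandbox e57f0217... shows both forms). A tiny helper that reconstructs both names from a sandbox or container ID, purely to illustrate the naming correspondence visible in the log:

from pathlib import Path

SHIM_STATE_ROOT = Path("/run/containerd/io.containerd.runtime.v2.task/k8s.io")

def sandbox_names(sandbox_id: str) -> dict:
    """Map a CRI sandbox/container ID to the two names seen in this log."""
    return {
        "shim_state_dir": str(SHIM_STATE_ROOT / sandbox_id),
        "systemd_scope": f"cri-containerd-{sandbox_id}.scope",
    }

# The kube-apiserver sandbox from the log above:
print(sandbox_names("e57f02177f75fcbbfed71e6c52642a0c5727fee6f955c13b6b2a3ca32cdd3385"))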
Feb 9 18:50:41.212969 kubelet[1611]: W0209 18:50:41.212891 1611 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:41.212969 kubelet[1611]: E0209 18:50:41.212940 1611 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:41.351546 env[1119]: time="2024-02-09T18:50:41.351337052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3989cad7beddfbfeefc5d6ab7f841d08,Namespace:kube-system,Attempt:0,} returns sandbox id \"e57f02177f75fcbbfed71e6c52642a0c5727fee6f955c13b6b2a3ca32cdd3385\"" Feb 9 18:50:41.352925 kubelet[1611]: E0209 18:50:41.352907 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:41.355059 env[1119]: time="2024-02-09T18:50:41.355036431Z" level=info msg="CreateContainer within sandbox \"e57f02177f75fcbbfed71e6c52642a0c5727fee6f955c13b6b2a3ca32cdd3385\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:50:41.367157 env[1119]: time="2024-02-09T18:50:41.367114298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"15a07ab58fac65c64b8fe196b229576904ef739693f094575c001b89559f2588\"" Feb 9 18:50:41.367509 kubelet[1611]: E0209 18:50:41.367492 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:41.368620 env[1119]: time="2024-02-09T18:50:41.368599337Z" level=info msg="CreateContainer within sandbox \"15a07ab58fac65c64b8fe196b229576904ef739693f094575c001b89559f2588\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:50:41.396763 env[1119]: time="2024-02-09T18:50:41.396709107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"93c6cd78aa37279c41c316a3f74cc5a3c1adb9c1e169202956f3d428f47bd3ac\"" Feb 9 18:50:41.397507 kubelet[1611]: E0209 18:50:41.397472 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:41.399052 env[1119]: time="2024-02-09T18:50:41.399013676Z" level=info msg="CreateContainer within sandbox \"e57f02177f75fcbbfed71e6c52642a0c5727fee6f955c13b6b2a3ca32cdd3385\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a002f6c9ad0a0e82132bb81b512095104baa5f506cf999d8095ee14b7b68b3c3\"" Feb 9 18:50:41.399164 env[1119]: time="2024-02-09T18:50:41.399138109Z" level=info msg="CreateContainer within sandbox \"93c6cd78aa37279c41c316a3f74cc5a3c1adb9c1e169202956f3d428f47bd3ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:50:41.399584 env[1119]: time="2024-02-09T18:50:41.399557932Z" level=info msg="StartContainer for 
\"a002f6c9ad0a0e82132bb81b512095104baa5f506cf999d8095ee14b7b68b3c3\"" Feb 9 18:50:41.402155 kubelet[1611]: W0209 18:50:41.402091 1611 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:41.402212 kubelet[1611]: E0209 18:50:41.402176 1611 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:41.402623 env[1119]: time="2024-02-09T18:50:41.402592514Z" level=info msg="CreateContainer within sandbox \"15a07ab58fac65c64b8fe196b229576904ef739693f094575c001b89559f2588\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d76cfeb68d256f32e980b9a09b4677e4ae72fda752a721c9c38d8e9cfcd3751\"" Feb 9 18:50:41.403705 env[1119]: time="2024-02-09T18:50:41.403680285Z" level=info msg="StartContainer for \"8d76cfeb68d256f32e980b9a09b4677e4ae72fda752a721c9c38d8e9cfcd3751\"" Feb 9 18:50:41.414721 systemd[1]: Started cri-containerd-a002f6c9ad0a0e82132bb81b512095104baa5f506cf999d8095ee14b7b68b3c3.scope. Feb 9 18:50:41.417646 env[1119]: time="2024-02-09T18:50:41.417584023Z" level=info msg="CreateContainer within sandbox \"93c6cd78aa37279c41c316a3f74cc5a3c1adb9c1e169202956f3d428f47bd3ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"07d4f02c8134ea1e617be21a0fdd550a70fd0c53195937979c0e7b83212f61b6\"" Feb 9 18:50:41.418062 env[1119]: time="2024-02-09T18:50:41.418024595Z" level=info msg="StartContainer for \"07d4f02c8134ea1e617be21a0fdd550a70fd0c53195937979c0e7b83212f61b6\"" Feb 9 18:50:41.418506 systemd[1]: Started cri-containerd-8d76cfeb68d256f32e980b9a09b4677e4ae72fda752a721c9c38d8e9cfcd3751.scope. Feb 9 18:50:41.468514 kubelet[1611]: W0209 18:50:41.467935 1611 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:41.468514 kubelet[1611]: E0209 18:50:41.467977 1611 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:41.468514 kubelet[1611]: E0209 18:50:41.468024 1611 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.35:6443: connect: connection refused Feb 9 18:50:41.469404 systemd[1]: Started cri-containerd-07d4f02c8134ea1e617be21a0fdd550a70fd0c53195937979c0e7b83212f61b6.scope. 
Feb 9 18:50:41.478247 env[1119]: time="2024-02-09T18:50:41.478210257Z" level=info msg="StartContainer for \"a002f6c9ad0a0e82132bb81b512095104baa5f506cf999d8095ee14b7b68b3c3\" returns successfully" Feb 9 18:50:41.488298 env[1119]: time="2024-02-09T18:50:41.488264619Z" level=info msg="StartContainer for \"8d76cfeb68d256f32e980b9a09b4677e4ae72fda752a721c9c38d8e9cfcd3751\" returns successfully" Feb 9 18:50:41.563036 kubelet[1611]: I0209 18:50:41.562994 1611 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:50:41.570248 env[1119]: time="2024-02-09T18:50:41.570204606Z" level=info msg="StartContainer for \"07d4f02c8134ea1e617be21a0fdd550a70fd0c53195937979c0e7b83212f61b6\" returns successfully" Feb 9 18:50:42.147233 kubelet[1611]: E0209 18:50:42.147200 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:42.152099 kubelet[1611]: E0209 18:50:42.152079 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:42.155822 kubelet[1611]: E0209 18:50:42.155803 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:43.040823 kubelet[1611]: I0209 18:50:43.040772 1611 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:50:43.047067 kubelet[1611]: E0209 18:50:43.047033 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:43.094226 kubelet[1611]: E0209 18:50:43.094140 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c629c5fc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 51666888, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 51666888, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:50:43.147344 kubelet[1611]: E0209 18:50:43.147321 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:43.147644 kubelet[1611]: E0209 18:50:43.147417 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c62ac106f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 52695151, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 52695151, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 18:50:43.155653 kubelet[1611]: E0209 18:50:43.155636 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:43.155936 kubelet[1611]: E0209 18:50:43.155921 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:43.156119 kubelet[1611]: E0209 18:50:43.156097 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:43.200371 kubelet[1611]: E0209 18:50:43.200286 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c65437e9f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96173727, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96173727, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" 
not found' (will not retry!) Feb 9 18:50:43.247567 kubelet[1611]: E0209 18:50:43.247536 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:43.253265 kubelet[1611]: E0209 18:50:43.253205 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c6544216c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96215404, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96215404, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 18:50:43.305911 kubelet[1611]: E0209 18:50:43.305795 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c65443c8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96222348, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96222348, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:50:43.348280 kubelet[1611]: E0209 18:50:43.348248 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:43.359033 kubelet[1611]: E0209 18:50:43.358980 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c66fbd9ad", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 125032877, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 125032877, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 18:50:43.413361 kubelet[1611]: E0209 18:50:43.413300 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c65437e9f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96173727, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 157774534, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:50:43.449328 kubelet[1611]: E0209 18:50:43.449304 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:43.467846 kubelet[1611]: E0209 18:50:43.467719 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c6544216c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96215404, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 157782080, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 18:50:43.522499 kubelet[1611]: E0209 18:50:43.522418 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c65443c8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96222348, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 157786094, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
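The repeated "Server rejected event ... namespaces 'default' not found (will not retry!)" records are the kubelet's early node events (Starting, InvalidDiskCapacity, NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeAllocatableEnforced) being refused because the apiserver has not created the default namespace yet; they are dropped and the node comes up regardless. A small sketch, assuming the journal on stdin, for tallying them by Reason:

#!/usr/bin/env python3
"""Sketch: count rejected bootstrap events by Reason."""
import re
import sys
from collections import Counter

REASON_RE = re.compile(r'Server rejected event .*?Reason:"(?P<reason>[^"]+)"')

counts = Counter(
    m.group("reason")
    for line in sys.stdin
    if (m := REASON_RE.search(line))
)
for reason, n in counts.most_common():
    print(f"{n:3d}  {reason}")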
Feb 9 18:50:43.549591 kubelet[1611]: E0209 18:50:43.549567 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:43.650246 kubelet[1611]: E0209 18:50:43.650156 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:43.747816 kubelet[1611]: E0209 18:50:43.747750 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c65437e9f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 96173727, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 229044153, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 18:50:43.750992 kubelet[1611]: E0209 18:50:43.750961 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:43.851994 kubelet[1611]: E0209 18:50:43.851962 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:43.952321 kubelet[1611]: E0209 18:50:43.952216 1611 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:50:44.053223 kubelet[1611]: I0209 18:50:44.053186 1611 apiserver.go:52] "Watching apiserver" Feb 9 18:50:44.056982 kubelet[1611]: I0209 18:50:44.056943 1611 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:50:44.138991 kubelet[1611]: I0209 18:50:44.138944 1611 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:50:44.147575 kubelet[1611]: E0209 18:50:44.147482 1611 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2466c6544216c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, 
time.February, 9, 18, 50, 40, 96215404, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 50, 40, 229052573, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 18:50:44.256978 kubelet[1611]: E0209 18:50:44.256957 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:44.455910 kubelet[1611]: E0209 18:50:44.455872 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:44.655082 kubelet[1611]: E0209 18:50:44.654973 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:45.157554 kubelet[1611]: E0209 18:50:45.157522 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:45.157943 kubelet[1611]: E0209 18:50:45.157764 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:45.157974 kubelet[1611]: E0209 18:50:45.157947 1611 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:45.747646 systemd[1]: Reloading. Feb 9 18:50:45.817305 /usr/lib/systemd/system-generators/torcx-generator[1945]: time="2024-02-09T18:50:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:50:45.817344 /usr/lib/systemd/system-generators/torcx-generator[1945]: time="2024-02-09T18:50:45Z" level=info msg="torcx already run" Feb 9 18:50:45.875152 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:50:45.875170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:50:45.892519 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:50:45.986213 kubelet[1611]: I0209 18:50:45.986168 1611 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:50:45.986317 systemd[1]: Stopping kubelet.service... Feb 9 18:50:46.003008 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:50:46.003286 systemd[1]: Stopped kubelet.service. Feb 9 18:50:46.004946 systemd[1]: Started kubelet.service. Feb 9 18:50:46.060681 kubelet[1986]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 18:50:46.060681 kubelet[1986]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:50:46.061069 kubelet[1986]: I0209 18:50:46.060714 1986 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:50:46.062192 kubelet[1986]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:50:46.062192 kubelet[1986]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:50:46.065141 kubelet[1986]: I0209 18:50:46.065114 1986 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:50:46.065194 kubelet[1986]: I0209 18:50:46.065144 1986 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:50:46.065384 kubelet[1986]: I0209 18:50:46.065368 1986 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:50:46.066531 kubelet[1986]: I0209 18:50:46.066517 1986 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:50:46.067069 kubelet[1986]: I0209 18:50:46.067047 1986 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:50:46.070138 kubelet[1986]: I0209 18:50:46.070126 1986 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:50:46.070296 kubelet[1986]: I0209 18:50:46.070281 1986 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:50:46.070344 kubelet[1986]: I0209 18:50:46.070336 1986 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:50:46.070428 kubelet[1986]: I0209 18:50:46.070353 1986 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:50:46.070428 kubelet[1986]: I0209 18:50:46.070362 1986 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:50:46.070428 kubelet[1986]: I0209 18:50:46.070388 1986 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:50:46.073061 kubelet[1986]: I0209 18:50:46.073041 1986 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:50:46.073111 kubelet[1986]: I0209 18:50:46.073064 1986 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:50:46.073111 kubelet[1986]: I0209 18:50:46.073089 1986 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:50:46.073111 kubelet[1986]: I0209 18:50:46.073106 1986 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:50:46.073923 kubelet[1986]: I0209 18:50:46.073898 1986 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:50:46.074307 kubelet[1986]: I0209 18:50:46.074287 1986 server.go:1186] "Started kubelet" Feb 9 18:50:46.076069 kubelet[1986]: I0209 18:50:46.076054 1986 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:50:46.077626 kubelet[1986]: I0209 18:50:46.077581 1986 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:50:46.077743 kubelet[1986]: I0209 18:50:46.077713 1986 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:50:46.077845 kubelet[1986]: I0209 18:50:46.077823 1986 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:50:46.078346 kubelet[1986]: I0209 18:50:46.078322 1986 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:50:46.090912 kubelet[1986]: E0209 18:50:46.084876 1986 cri_stats_provider.go:455] "Failed to get 
the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:50:46.090912 kubelet[1986]: E0209 18:50:46.084911 1986 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:50:46.111920 kubelet[1986]: I0209 18:50:46.111893 1986 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:50:46.130059 kubelet[1986]: I0209 18:50:46.130018 1986 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:50:46.130059 kubelet[1986]: I0209 18:50:46.130044 1986 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:50:46.130059 kubelet[1986]: I0209 18:50:46.130065 1986 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:50:46.130297 kubelet[1986]: E0209 18:50:46.130120 1986 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:50:46.135905 kubelet[1986]: I0209 18:50:46.135882 1986 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:50:46.135905 kubelet[1986]: I0209 18:50:46.135900 1986 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:50:46.136008 kubelet[1986]: I0209 18:50:46.135914 1986 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:50:46.136199 kubelet[1986]: I0209 18:50:46.136176 1986 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:50:46.136199 kubelet[1986]: I0209 18:50:46.136192 1986 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:50:46.136199 kubelet[1986]: I0209 18:50:46.136198 1986 policy_none.go:49] "None policy: Start" Feb 9 18:50:46.136662 kubelet[1986]: I0209 18:50:46.136648 1986 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:50:46.136662 kubelet[1986]: I0209 18:50:46.136664 1986 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:50:46.136896 kubelet[1986]: I0209 18:50:46.136882 1986 state_mem.go:75] "Updated machine memory state" Feb 9 18:50:46.140149 kubelet[1986]: I0209 18:50:46.140130 1986 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:50:46.140332 kubelet[1986]: I0209 18:50:46.140318 1986 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:50:46.181549 kubelet[1986]: I0209 18:50:46.181516 1986 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:50:46.188313 kubelet[1986]: I0209 18:50:46.188289 1986 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 18:50:46.188417 kubelet[1986]: I0209 18:50:46.188354 1986 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:50:46.230799 kubelet[1986]: I0209 18:50:46.230748 1986 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:50:46.230963 kubelet[1986]: I0209 18:50:46.230853 1986 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:50:46.230963 kubelet[1986]: I0209 18:50:46.230876 1986 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:50:46.236650 kubelet[1986]: E0209 18:50:46.236623 1986 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 18:50:46.282261 kubelet[1986]: E0209 18:50:46.282155 1986 
kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:46.379047 kubelet[1986]: I0209 18:50:46.378985 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3989cad7beddfbfeefc5d6ab7f841d08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3989cad7beddfbfeefc5d6ab7f841d08\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:50:46.379208 kubelet[1986]: I0209 18:50:46.379074 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:46.379208 kubelet[1986]: I0209 18:50:46.379102 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:50:46.379208 kubelet[1986]: I0209 18:50:46.379147 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3989cad7beddfbfeefc5d6ab7f841d08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3989cad7beddfbfeefc5d6ab7f841d08\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:50:46.379208 kubelet[1986]: I0209 18:50:46.379188 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3989cad7beddfbfeefc5d6ab7f841d08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3989cad7beddfbfeefc5d6ab7f841d08\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:50:46.379306 kubelet[1986]: I0209 18:50:46.379215 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:46.379306 kubelet[1986]: I0209 18:50:46.379248 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:46.379306 kubelet[1986]: I0209 18:50:46.379276 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:46.379306 kubelet[1986]: I0209 18:50:46.379301 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:46.477038 kubelet[1986]: E0209 18:50:46.476993 1986 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 18:50:46.538335 kubelet[1986]: E0209 18:50:46.537717 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:46.582789 kubelet[1986]: E0209 18:50:46.582754 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:46.778191 kubelet[1986]: E0209 18:50:46.778160 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:47.074328 kubelet[1986]: I0209 18:50:47.074288 1986 apiserver.go:52] "Watching apiserver" Feb 9 18:50:47.078214 kubelet[1986]: I0209 18:50:47.078196 1986 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:50:47.084406 kubelet[1986]: I0209 18:50:47.084384 1986 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:50:47.126520 sudo[1208]: pam_unix(sudo:session): session closed for user root Feb 9 18:50:47.128038 sshd[1205]: pam_unix(sshd:session): session closed for user core Feb 9 18:50:47.130201 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:52302.service: Deactivated successfully. Feb 9 18:50:47.130909 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:50:47.131052 systemd[1]: session-5.scope: Consumed 2.749s CPU time. Feb 9 18:50:47.131390 systemd-logind[1102]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:50:47.132038 systemd-logind[1102]: Removed session 5. 
Feb 9 18:50:47.558586 kubelet[1986]: E0209 18:50:47.558544 1986 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 18:50:47.559144 kubelet[1986]: E0209 18:50:47.559120 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:47.804639 kubelet[1986]: E0209 18:50:47.804608 1986 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 18:50:47.804884 kubelet[1986]: E0209 18:50:47.804871 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:48.137387 kubelet[1986]: E0209 18:50:48.137359 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:48.137743 kubelet[1986]: E0209 18:50:48.137660 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:48.160576 kubelet[1986]: E0209 18:50:48.160536 1986 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 18:50:48.160948 kubelet[1986]: E0209 18:50:48.160927 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:48.617048 kubelet[1986]: I0209 18:50:48.617008 1986 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.616940055 pod.CreationTimestamp="2024-02-09 18:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:50:48.161010486 +0000 UTC m=+2.152518898" watchObservedRunningTime="2024-02-09 18:50:48.616940055 +0000 UTC m=+2.608448467" Feb 9 18:50:48.892419 kubelet[1986]: I0209 18:50:48.892291 1986 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.892256254 pod.CreationTimestamp="2024-02-09 18:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:50:48.617599438 +0000 UTC m=+2.609107850" watchObservedRunningTime="2024-02-09 18:50:48.892256254 +0000 UTC m=+2.883764666" Feb 9 18:50:48.892419 kubelet[1986]: I0209 18:50:48.892349 1986 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.892336813 pod.CreationTimestamp="2024-02-09 18:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:50:48.892183274 +0000 UTC m=+2.883691696" watchObservedRunningTime="2024-02-09 18:50:48.892336813 +0000 UTC m=+2.883845226" Feb 9 18:50:49.138808 kubelet[1986]: E0209 18:50:49.138773 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:49.139160 kubelet[1986]: E0209 18:50:49.139131 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:49.668304 kubelet[1986]: E0209 18:50:49.668274 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:54.566720 kubelet[1986]: E0209 18:50:54.566692 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:55.146409 kubelet[1986]: E0209 18:50:55.146377 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:55.774944 kubelet[1986]: E0209 18:50:55.770190 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:56.147069 kubelet[1986]: E0209 18:50:56.146960 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:56.147363 kubelet[1986]: E0209 18:50:56.147348 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:59.676051 kubelet[1986]: E0209 18:50:59.674767 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:00.643087 kubelet[1986]: I0209 18:51:00.643043 1986 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:51:00.644543 kubelet[1986]: I0209 18:51:00.644515 1986 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:51:00.649121 systemd[1]: Created slice kubepods-besteffort-pod8176e806_1d01_429c_9bd2_ac6d088af7e8.slice. Feb 9 18:51:00.660676 systemd[1]: Created slice kubepods-burstable-pod21d58b88_6d1f_4298_ab21_e18d0993480a.slice. Feb 9 18:51:00.661600 kubelet[1986]: I0209 18:51:00.661577 1986 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:51:00.661882 env[1119]: time="2024-02-09T18:51:00.661833298Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 18:51:00.662095 kubelet[1986]: I0209 18:51:00.662068 1986 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:51:00.674246 kubelet[1986]: I0209 18:51:00.674201 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8176e806-1d01-429c-9bd2-ac6d088af7e8-kube-proxy\") pod \"kube-proxy-xvmzq\" (UID: \"8176e806-1d01-429c-9bd2-ac6d088af7e8\") " pod="kube-system/kube-proxy-xvmzq" Feb 9 18:51:00.674246 kubelet[1986]: I0209 18:51:00.674236 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/21d58b88-6d1f-4298-ab21-e18d0993480a-cni\") pod \"kube-flannel-ds-44wr9\" (UID: \"21d58b88-6d1f-4298-ab21-e18d0993480a\") " pod="kube-flannel/kube-flannel-ds-44wr9" Feb 9 18:51:00.674425 kubelet[1986]: I0209 18:51:00.674264 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/21d58b88-6d1f-4298-ab21-e18d0993480a-flannel-cfg\") pod \"kube-flannel-ds-44wr9\" (UID: \"21d58b88-6d1f-4298-ab21-e18d0993480a\") " pod="kube-flannel/kube-flannel-ds-44wr9" Feb 9 18:51:00.674425 kubelet[1986]: I0209 18:51:00.674297 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21d58b88-6d1f-4298-ab21-e18d0993480a-xtables-lock\") pod \"kube-flannel-ds-44wr9\" (UID: \"21d58b88-6d1f-4298-ab21-e18d0993480a\") " pod="kube-flannel/kube-flannel-ds-44wr9" Feb 9 18:51:00.674425 kubelet[1986]: I0209 18:51:00.674325 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8176e806-1d01-429c-9bd2-ac6d088af7e8-lib-modules\") pod \"kube-proxy-xvmzq\" (UID: \"8176e806-1d01-429c-9bd2-ac6d088af7e8\") " pod="kube-system/kube-proxy-xvmzq" Feb 9 18:51:00.674425 kubelet[1986]: I0209 18:51:00.674345 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/21d58b88-6d1f-4298-ab21-e18d0993480a-run\") pod \"kube-flannel-ds-44wr9\" (UID: \"21d58b88-6d1f-4298-ab21-e18d0993480a\") " pod="kube-flannel/kube-flannel-ds-44wr9" Feb 9 18:51:00.674425 kubelet[1986]: I0209 18:51:00.674364 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kttdl\" (UniqueName: \"kubernetes.io/projected/21d58b88-6d1f-4298-ab21-e18d0993480a-kube-api-access-kttdl\") pod \"kube-flannel-ds-44wr9\" (UID: \"21d58b88-6d1f-4298-ab21-e18d0993480a\") " pod="kube-flannel/kube-flannel-ds-44wr9" Feb 9 18:51:00.674560 kubelet[1986]: I0209 18:51:00.674381 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/21d58b88-6d1f-4298-ab21-e18d0993480a-cni-plugin\") pod \"kube-flannel-ds-44wr9\" (UID: \"21d58b88-6d1f-4298-ab21-e18d0993480a\") " pod="kube-flannel/kube-flannel-ds-44wr9" Feb 9 18:51:00.674560 kubelet[1986]: I0209 18:51:00.674401 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8176e806-1d01-429c-9bd2-ac6d088af7e8-xtables-lock\") pod \"kube-proxy-xvmzq\" (UID: \"8176e806-1d01-429c-9bd2-ac6d088af7e8\") 
" pod="kube-system/kube-proxy-xvmzq" Feb 9 18:51:00.674560 kubelet[1986]: I0209 18:51:00.674423 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp9fw\" (UniqueName: \"kubernetes.io/projected/8176e806-1d01-429c-9bd2-ac6d088af7e8-kube-api-access-bp9fw\") pod \"kube-proxy-xvmzq\" (UID: \"8176e806-1d01-429c-9bd2-ac6d088af7e8\") " pod="kube-system/kube-proxy-xvmzq" Feb 9 18:51:00.958161 kubelet[1986]: E0209 18:51:00.958063 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:00.959053 env[1119]: time="2024-02-09T18:51:00.959021347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvmzq,Uid:8176e806-1d01-429c-9bd2-ac6d088af7e8,Namespace:kube-system,Attempt:0,}" Feb 9 18:51:00.963350 kubelet[1986]: E0209 18:51:00.963336 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:00.963772 env[1119]: time="2024-02-09T18:51:00.963727431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-44wr9,Uid:21d58b88-6d1f-4298-ab21-e18d0993480a,Namespace:kube-flannel,Attempt:0,}" Feb 9 18:51:00.975065 env[1119]: time="2024-02-09T18:51:00.974380743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:51:00.975065 env[1119]: time="2024-02-09T18:51:00.974413038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:51:00.975065 env[1119]: time="2024-02-09T18:51:00.974422139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:51:00.975065 env[1119]: time="2024-02-09T18:51:00.974545405Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bdae1f9bdb33921169e4df10c8bdbd6661f91e25926b8f68d47fd69dd8630cd pid=2079 runtime=io.containerd.runc.v2 Feb 9 18:51:00.980665 env[1119]: time="2024-02-09T18:51:00.980555399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:51:00.980665 env[1119]: time="2024-02-09T18:51:00.980586401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:51:00.981092 env[1119]: time="2024-02-09T18:51:00.980822430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:51:00.981092 env[1119]: time="2024-02-09T18:51:00.981039463Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/183c8d2f9ffbc6e8e729ce3566d57b7c41af6428cb396e3b7092c750eab872bc pid=2102 runtime=io.containerd.runc.v2 Feb 9 18:51:00.990721 systemd[1]: Started cri-containerd-4bdae1f9bdb33921169e4df10c8bdbd6661f91e25926b8f68d47fd69dd8630cd.scope. Feb 9 18:51:00.994887 systemd[1]: Started cri-containerd-183c8d2f9ffbc6e8e729ce3566d57b7c41af6428cb396e3b7092c750eab872bc.scope. 
Feb 9 18:51:01.014474 env[1119]: time="2024-02-09T18:51:01.010731823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvmzq,Uid:8176e806-1d01-429c-9bd2-ac6d088af7e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bdae1f9bdb33921169e4df10c8bdbd6661f91e25926b8f68d47fd69dd8630cd\"" Feb 9 18:51:01.014474 env[1119]: time="2024-02-09T18:51:01.013476390Z" level=info msg="CreateContainer within sandbox \"4bdae1f9bdb33921169e4df10c8bdbd6661f91e25926b8f68d47fd69dd8630cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:51:01.014674 kubelet[1986]: E0209 18:51:01.011421 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:01.029837 env[1119]: time="2024-02-09T18:51:01.029787377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-44wr9,Uid:21d58b88-6d1f-4298-ab21-e18d0993480a,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"183c8d2f9ffbc6e8e729ce3566d57b7c41af6428cb396e3b7092c750eab872bc\"" Feb 9 18:51:01.030377 kubelet[1986]: E0209 18:51:01.030359 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:01.031121 env[1119]: time="2024-02-09T18:51:01.031059073Z" level=info msg="CreateContainer within sandbox \"4bdae1f9bdb33921169e4df10c8bdbd6661f91e25926b8f68d47fd69dd8630cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e911e4451385cf4c4e396a058c420b105e2c01673802248ca6074fa23a0c1576\"" Feb 9 18:51:01.031485 env[1119]: time="2024-02-09T18:51:01.031434807Z" level=info msg="StartContainer for \"e911e4451385cf4c4e396a058c420b105e2c01673802248ca6074fa23a0c1576\"" Feb 9 18:51:01.031609 env[1119]: time="2024-02-09T18:51:01.031576955Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\"" Feb 9 18:51:01.044855 systemd[1]: Started cri-containerd-e911e4451385cf4c4e396a058c420b105e2c01673802248ca6074fa23a0c1576.scope. Feb 9 18:51:01.069306 env[1119]: time="2024-02-09T18:51:01.069245082Z" level=info msg="StartContainer for \"e911e4451385cf4c4e396a058c420b105e2c01673802248ca6074fa23a0c1576\" returns successfully" Feb 9 18:51:01.140003 update_engine[1106]: I0209 18:51:01.139952 1106 update_attempter.cc:509] Updating boot flags... Feb 9 18:51:01.154411 kubelet[1986]: E0209 18:51:01.154394 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:02.155496 kubelet[1986]: E0209 18:51:02.155459 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:02.931000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635909286.mount: Deactivated successfully. 
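Taken together, the lines above show the full CRI sequence for the kube-proxy pod: RunPodSandbox returns a sandbox ID, CreateContainer inside that sandbox returns a container ID, and StartContainer reports success, while the flannel pod first has to PullImage its CNI plugin. A sketch that pairs container IDs with their sandboxes from a journal stream; the regex tolerates the escaped quotes exactly as they appear in these captured lines:

#!/usr/bin/env python3
"""Sketch: pair container IDs with their sandbox IDs from CreateContainer result lines."""
import re
import sys

CREATE_RE = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]+)\\?"'
    r'.*?Name:(?P<name>[^,]+),'
    r'.*?returns container id \\?"(?P<container>[0-9a-f]+)\\?"'
)

for line in sys.stdin:
    m = CREATE_RE.search(line)
    if m:
        print(f"{m.group('name'):<24} container {m.group('container')[:12]} "
              f"in sandbox {m.group('sandbox')[:12]}")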
Feb 9 18:51:03.270741 env[1119]: time="2024-02-09T18:51:03.270671762Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:51:03.322849 env[1119]: time="2024-02-09T18:51:03.322805782Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcecffc7ad4af70c8b436d45688771e0562cbd20f55d98581ba22cf13aad360d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:51:03.356342 env[1119]: time="2024-02-09T18:51:03.356289078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:51:03.451934 env[1119]: time="2024-02-09T18:51:03.451874718Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:51:03.452494 env[1119]: time="2024-02-09T18:51:03.452460121Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\" returns image reference \"sha256:fcecffc7ad4af70c8b436d45688771e0562cbd20f55d98581ba22cf13aad360d\"" Feb 9 18:51:03.454098 env[1119]: time="2024-02-09T18:51:03.454047411Z" level=info msg="CreateContainer within sandbox \"183c8d2f9ffbc6e8e729ce3566d57b7c41af6428cb396e3b7092c750eab872bc\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 9 18:51:03.868967 env[1119]: time="2024-02-09T18:51:03.868908160Z" level=info msg="CreateContainer within sandbox \"183c8d2f9ffbc6e8e729ce3566d57b7c41af6428cb396e3b7092c750eab872bc\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"402ebc057f262b041d00fd97e64906551f02fe04b1947ee30180f495d5777139\"" Feb 9 18:51:03.869499 env[1119]: time="2024-02-09T18:51:03.869469580Z" level=info msg="StartContainer for \"402ebc057f262b041d00fd97e64906551f02fe04b1947ee30180f495d5777139\"" Feb 9 18:51:03.885707 systemd[1]: Started cri-containerd-402ebc057f262b041d00fd97e64906551f02fe04b1947ee30180f495d5777139.scope. Feb 9 18:51:03.906338 env[1119]: time="2024-02-09T18:51:03.906285893Z" level=info msg="StartContainer for \"402ebc057f262b041d00fd97e64906551f02fe04b1947ee30180f495d5777139\" returns successfully" Feb 9 18:51:03.906655 systemd[1]: cri-containerd-402ebc057f262b041d00fd97e64906551f02fe04b1947ee30180f495d5777139.scope: Deactivated successfully. 
Feb 9 18:51:03.960618 env[1119]: time="2024-02-09T18:51:03.960560826Z" level=info msg="shim disconnected" id=402ebc057f262b041d00fd97e64906551f02fe04b1947ee30180f495d5777139 Feb 9 18:51:03.960793 env[1119]: time="2024-02-09T18:51:03.960616842Z" level=warning msg="cleaning up after shim disconnected" id=402ebc057f262b041d00fd97e64906551f02fe04b1947ee30180f495d5777139 namespace=k8s.io Feb 9 18:51:03.960793 env[1119]: time="2024-02-09T18:51:03.960632818Z" level=info msg="cleaning up dead shim" Feb 9 18:51:03.967199 env[1119]: time="2024-02-09T18:51:03.967176195Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:51:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2353 runtime=io.containerd.runc.v2\n" Feb 9 18:51:04.159174 kubelet[1986]: E0209 18:51:04.159067 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:04.160467 env[1119]: time="2024-02-09T18:51:04.160409856Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\"" Feb 9 18:51:04.168142 kubelet[1986]: I0209 18:51:04.168106 1986 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xvmzq" podStartSLOduration=4.168061439 pod.CreationTimestamp="2024-02-09 18:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:51:01.164601078 +0000 UTC m=+15.156109501" watchObservedRunningTime="2024-02-09 18:51:04.168061439 +0000 UTC m=+18.159569851" Feb 9 18:51:04.859407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-402ebc057f262b041d00fd97e64906551f02fe04b1947ee30180f495d5777139-rootfs.mount: Deactivated successfully. Feb 9 18:51:05.888762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130669141.mount: Deactivated successfully. 
Feb 9 18:51:06.570068 env[1119]: time="2024-02-09T18:51:06.570008473Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:51:06.571722 env[1119]: time="2024-02-09T18:51:06.571670436Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b5c6c9203f83e9a48e9d0b0fb7a38196c8412f458953ca98a4feac3515c6abb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:51:06.573533 env[1119]: time="2024-02-09T18:51:06.573496111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:51:06.575091 env[1119]: time="2024-02-09T18:51:06.575056970Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:51:06.575656 env[1119]: time="2024-02-09T18:51:06.575624674Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\" returns image reference \"sha256:b5c6c9203f83e9a48e9d0b0fb7a38196c8412f458953ca98a4feac3515c6abb1\"" Feb 9 18:51:06.577113 env[1119]: time="2024-02-09T18:51:06.577078148Z" level=info msg="CreateContainer within sandbox \"183c8d2f9ffbc6e8e729ce3566d57b7c41af6428cb396e3b7092c750eab872bc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 18:51:06.589173 env[1119]: time="2024-02-09T18:51:06.589112786Z" level=info msg="CreateContainer within sandbox \"183c8d2f9ffbc6e8e729ce3566d57b7c41af6428cb396e3b7092c750eab872bc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ca96a0ad14c5c63b30b3408e304248099916336024f177a72a3d14ab695b6710\"" Feb 9 18:51:06.589661 env[1119]: time="2024-02-09T18:51:06.589630801Z" level=info msg="StartContainer for \"ca96a0ad14c5c63b30b3408e304248099916336024f177a72a3d14ab695b6710\"" Feb 9 18:51:06.603703 systemd[1]: Started cri-containerd-ca96a0ad14c5c63b30b3408e304248099916336024f177a72a3d14ab695b6710.scope. Feb 9 18:51:06.625166 systemd[1]: cri-containerd-ca96a0ad14c5c63b30b3408e304248099916336024f177a72a3d14ab695b6710.scope: Deactivated successfully. Feb 9 18:51:06.628152 env[1119]: time="2024-02-09T18:51:06.628118054Z" level=info msg="StartContainer for \"ca96a0ad14c5c63b30b3408e304248099916336024f177a72a3d14ab695b6710\" returns successfully" Feb 9 18:51:06.675247 kubelet[1986]: I0209 18:51:06.675215 1986 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:51:06.799130 kubelet[1986]: I0209 18:51:06.799088 1986 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:51:06.799319 kubelet[1986]: I0209 18:51:06.799270 1986 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:51:06.805143 systemd[1]: Created slice kubepods-burstable-pod253ba62b_1586_4917_9635_a006996c6e51.slice. Feb 9 18:51:06.808678 systemd[1]: Created slice kubepods-burstable-pode1078cee_6cdf_4208_b318_58ff0a4497b2.slice. 
Feb 9 18:51:06.815162 kubelet[1986]: I0209 18:51:06.815123 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1078cee-6cdf-4208-b318-58ff0a4497b2-config-volume\") pod \"coredns-787d4945fb-2ph7v\" (UID: \"e1078cee-6cdf-4208-b318-58ff0a4497b2\") " pod="kube-system/coredns-787d4945fb-2ph7v" Feb 9 18:51:06.815162 kubelet[1986]: I0209 18:51:06.815160 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr78w\" (UniqueName: \"kubernetes.io/projected/e1078cee-6cdf-4208-b318-58ff0a4497b2-kube-api-access-mr78w\") pod \"coredns-787d4945fb-2ph7v\" (UID: \"e1078cee-6cdf-4208-b318-58ff0a4497b2\") " pod="kube-system/coredns-787d4945fb-2ph7v" Feb 9 18:51:06.815289 kubelet[1986]: I0209 18:51:06.815183 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5qh\" (UniqueName: \"kubernetes.io/projected/253ba62b-1586-4917-9635-a006996c6e51-kube-api-access-mj5qh\") pod \"coredns-787d4945fb-qb5r5\" (UID: \"253ba62b-1586-4917-9635-a006996c6e51\") " pod="kube-system/coredns-787d4945fb-qb5r5" Feb 9 18:51:06.815289 kubelet[1986]: I0209 18:51:06.815204 1986 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/253ba62b-1586-4917-9635-a006996c6e51-config-volume\") pod \"coredns-787d4945fb-qb5r5\" (UID: \"253ba62b-1586-4917-9635-a006996c6e51\") " pod="kube-system/coredns-787d4945fb-qb5r5" Feb 9 18:51:06.867292 env[1119]: time="2024-02-09T18:51:06.867173786Z" level=info msg="shim disconnected" id=ca96a0ad14c5c63b30b3408e304248099916336024f177a72a3d14ab695b6710 Feb 9 18:51:06.867507 env[1119]: time="2024-02-09T18:51:06.867488743Z" level=warning msg="cleaning up after shim disconnected" id=ca96a0ad14c5c63b30b3408e304248099916336024f177a72a3d14ab695b6710 namespace=k8s.io Feb 9 18:51:06.867637 env[1119]: time="2024-02-09T18:51:06.867608177Z" level=info msg="cleaning up dead shim" Feb 9 18:51:06.874777 env[1119]: time="2024-02-09T18:51:06.874716557Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:51:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2409 runtime=io.containerd.runc.v2\n" Feb 9 18:51:06.888452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca96a0ad14c5c63b30b3408e304248099916336024f177a72a3d14ab695b6710-rootfs.mount: Deactivated successfully. 
Feb 9 18:51:07.107923 kubelet[1986]: E0209 18:51:07.107883 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:07.108478 env[1119]: time="2024-02-09T18:51:07.108422431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qb5r5,Uid:253ba62b-1586-4917-9635-a006996c6e51,Namespace:kube-system,Attempt:0,}" Feb 9 18:51:07.110695 kubelet[1986]: E0209 18:51:07.110673 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:07.111150 env[1119]: time="2024-02-09T18:51:07.111099748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-2ph7v,Uid:e1078cee-6cdf-4208-b318-58ff0a4497b2,Namespace:kube-system,Attempt:0,}" Feb 9 18:51:07.139044 env[1119]: time="2024-02-09T18:51:07.138927345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qb5r5,Uid:253ba62b-1586-4917-9635-a006996c6e51,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fea7640211ae7ea1dffb18c3737295969612e261230f8dc6b3502a56ffc6c47\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:51:07.139182 kubelet[1986]: E0209 18:51:07.139150 1986 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fea7640211ae7ea1dffb18c3737295969612e261230f8dc6b3502a56ffc6c47\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:51:07.139228 kubelet[1986]: E0209 18:51:07.139217 1986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fea7640211ae7ea1dffb18c3737295969612e261230f8dc6b3502a56ffc6c47\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-qb5r5" Feb 9 18:51:07.139260 kubelet[1986]: E0209 18:51:07.139241 1986 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fea7640211ae7ea1dffb18c3737295969612e261230f8dc6b3502a56ffc6c47\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-qb5r5" Feb 9 18:51:07.139291 kubelet[1986]: E0209 18:51:07.139285 1986 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-qb5r5_kube-system(253ba62b-1586-4917-9635-a006996c6e51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-qb5r5_kube-system(253ba62b-1586-4917-9635-a006996c6e51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fea7640211ae7ea1dffb18c3737295969612e261230f8dc6b3502a56ffc6c47\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-qb5r5" podUID=253ba62b-1586-4917-9635-a006996c6e51 Feb 9 18:51:07.140792 env[1119]: time="2024-02-09T18:51:07.140727757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-2ph7v,Uid:e1078cee-6cdf-4208-b318-58ff0a4497b2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6440ddff8ad814634484a878bcd5174c7d8e0f52d1b5055e9fedc511930d7f6a\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:51:07.140970 kubelet[1986]: E0209 18:51:07.140951 1986 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6440ddff8ad814634484a878bcd5174c7d8e0f52d1b5055e9fedc511930d7f6a\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:51:07.141035 kubelet[1986]: E0209 18:51:07.140988 1986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6440ddff8ad814634484a878bcd5174c7d8e0f52d1b5055e9fedc511930d7f6a\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-2ph7v" Feb 9 18:51:07.141035 kubelet[1986]: E0209 18:51:07.141013 1986 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6440ddff8ad814634484a878bcd5174c7d8e0f52d1b5055e9fedc511930d7f6a\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-2ph7v" Feb 9 18:51:07.141097 kubelet[1986]: E0209 18:51:07.141059 1986 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-2ph7v_kube-system(e1078cee-6cdf-4208-b318-58ff0a4497b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-2ph7v_kube-system(e1078cee-6cdf-4208-b318-58ff0a4497b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6440ddff8ad814634484a878bcd5174c7d8e0f52d1b5055e9fedc511930d7f6a\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-2ph7v" podUID=e1078cee-6cdf-4208-b318-58ff0a4497b2 Feb 9 18:51:07.164997 kubelet[1986]: E0209 18:51:07.164239 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:07.165884 env[1119]: time="2024-02-09T18:51:07.165857161Z" level=info msg="CreateContainer within sandbox \"183c8d2f9ffbc6e8e729ce3566d57b7c41af6428cb396e3b7092c750eab872bc\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 9 18:51:07.179180 env[1119]: time="2024-02-09T18:51:07.179117260Z" level=info msg="CreateContainer within sandbox \"183c8d2f9ffbc6e8e729ce3566d57b7c41af6428cb396e3b7092c750eab872bc\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"554c164de240d9902a097c0b492f1a00e223993f6dfa48b3ce9e410e5d599b85\"" Feb 9 18:51:07.179612 env[1119]: time="2024-02-09T18:51:07.179585377Z" level=info msg="StartContainer for \"554c164de240d9902a097c0b492f1a00e223993f6dfa48b3ce9e410e5d599b85\"" Feb 9 18:51:07.194309 systemd[1]: Started cri-containerd-554c164de240d9902a097c0b492f1a00e223993f6dfa48b3ce9e410e5d599b85.scope. Feb 9 18:51:07.224240 env[1119]: time="2024-02-09T18:51:07.224180061Z" level=info msg="StartContainer for \"554c164de240d9902a097c0b492f1a00e223993f6dfa48b3ce9e410e5d599b85\" returns successfully" Feb 9 18:51:07.889265 systemd[1]: run-netns-cni\x2d993f3e78\x2d6d9f\x2d0154\x2d9d62\x2d9af7db4c0b59.mount: Deactivated successfully.
Feb 9 18:51:07.889349 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6440ddff8ad814634484a878bcd5174c7d8e0f52d1b5055e9fedc511930d7f6a-shm.mount: Deactivated successfully. Feb 9 18:51:07.889405 systemd[1]: run-netns-cni\x2d831387ae\x2d62d7\x2d8afc\x2d5285\x2d8edbf45f89b9.mount: Deactivated successfully. Feb 9 18:51:07.889461 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fea7640211ae7ea1dffb18c3737295969612e261230f8dc6b3502a56ffc6c47-shm.mount: Deactivated successfully. Feb 9 18:51:08.167192 kubelet[1986]: E0209 18:51:08.167093 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:08.466922 systemd-networkd[1012]: flannel.1: Link UP Feb 9 18:51:08.466931 systemd-networkd[1012]: flannel.1: Gained carrier Feb 9 18:51:09.168417 kubelet[1986]: E0209 18:51:09.168392 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:09.527599 systemd-networkd[1012]: flannel.1: Gained IPv6LL Feb 9 18:51:20.130924 kubelet[1986]: E0209 18:51:20.130888 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:20.131345 env[1119]: time="2024-02-09T18:51:20.131255484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-2ph7v,Uid:e1078cee-6cdf-4208-b318-58ff0a4497b2,Namespace:kube-system,Attempt:0,}" Feb 9 18:51:20.145254 systemd-networkd[1012]: cni0: Link UP Feb 9 18:51:20.145264 systemd-networkd[1012]: cni0: Gained carrier Feb 9 18:51:20.147928 systemd-networkd[1012]: cni0: Lost carrier Feb 9 18:51:20.149665 systemd-networkd[1012]: vethdf1809e1: Link UP Feb 9 18:51:20.151061 kernel: cni0: port 1(vethdf1809e1) entered blocking state Feb 9 18:51:20.151189 kernel: cni0: port 1(vethdf1809e1) entered disabled state Feb 9 18:51:20.151905 kernel: device vethdf1809e1 entered promiscuous mode Feb 9 18:51:20.153271 kernel: cni0: port 1(vethdf1809e1) entered blocking state Feb 9 18:51:20.153986 kernel: cni0: port 1(vethdf1809e1) entered forwarding state Feb 9 18:51:20.154005 kernel: cni0: port 1(vethdf1809e1) entered disabled state Feb 9 18:51:20.163602 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethdf1809e1: link becomes ready Feb 9 18:51:20.163648 kernel: cni0: port 1(vethdf1809e1) entered blocking state Feb 9 18:51:20.163665 kernel: cni0: port 1(vethdf1809e1) entered forwarding state Feb 9 18:51:20.164431 systemd-networkd[1012]: vethdf1809e1: Gained carrier Feb 9 18:51:20.164628 systemd-networkd[1012]: cni0: Gained carrier Feb 9 18:51:20.169858 env[1119]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0001228e8), "name":"cbr0", "type":"bridge"} Feb 9 18:51:20.178731 env[1119]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T18:51:20.178663571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:51:20.178731 env[1119]: time="2024-02-09T18:51:20.178711660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:51:20.178731 env[1119]: time="2024-02-09T18:51:20.178727423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:51:20.179011 env[1119]: time="2024-02-09T18:51:20.178971377Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bd698f15ce9ae68800a5d7e5b15f1dc42d1d8d66e692f2a1864cb93d2a7fa82 pid=2682 runtime=io.containerd.runc.v2 Feb 9 18:51:20.190540 systemd[1]: Started cri-containerd-8bd698f15ce9ae68800a5d7e5b15f1dc42d1d8d66e692f2a1864cb93d2a7fa82.scope. Feb 9 18:51:20.201298 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:51:20.222301 env[1119]: time="2024-02-09T18:51:20.222247530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-2ph7v,Uid:e1078cee-6cdf-4208-b318-58ff0a4497b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bd698f15ce9ae68800a5d7e5b15f1dc42d1d8d66e692f2a1864cb93d2a7fa82\"" Feb 9 18:51:20.223048 kubelet[1986]: E0209 18:51:20.223021 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:20.226639 env[1119]: time="2024-02-09T18:51:20.226612266Z" level=info msg="CreateContainer within sandbox \"8bd698f15ce9ae68800a5d7e5b15f1dc42d1d8d66e692f2a1864cb93d2a7fa82\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:51:20.241854 env[1119]: time="2024-02-09T18:51:20.241809219Z" level=info msg="CreateContainer within sandbox \"8bd698f15ce9ae68800a5d7e5b15f1dc42d1d8d66e692f2a1864cb93d2a7fa82\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0dab940c96b4b32afe63fbf0fd62fb0f6f90e6f6710adb23cc5728f54c0a267a\"" Feb 9 18:51:20.243809 env[1119]: time="2024-02-09T18:51:20.243769933Z" level=info msg="StartContainer for \"0dab940c96b4b32afe63fbf0fd62fb0f6f90e6f6710adb23cc5728f54c0a267a\"" Feb 9 18:51:20.256514 systemd[1]: Started cri-containerd-0dab940c96b4b32afe63fbf0fd62fb0f6f90e6f6710adb23cc5728f54c0a267a.scope. Feb 9 18:51:20.283365 env[1119]: time="2024-02-09T18:51:20.283308578Z" level=info msg="StartContainer for \"0dab940c96b4b32afe63fbf0fd62fb0f6f90e6f6710adb23cc5728f54c0a267a\" returns successfully" Feb 9 18:51:21.141286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425856262.mount: Deactivated successfully.
Feb 9 18:51:21.187489 kubelet[1986]: E0209 18:51:21.187461 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:21.194837 kubelet[1986]: I0209 18:51:21.194811 1986 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-44wr9" podStartSLOduration=-9.22337201566e+09 pod.CreationTimestamp="2024-02-09 18:51:00 +0000 UTC" firstStartedPulling="2024-02-09 18:51:01.03081896 +0000 UTC m=+15.022327372" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:51:08.176178618 +0000 UTC m=+22.167687030" watchObservedRunningTime="2024-02-09 18:51:21.194776443 +0000 UTC m=+35.186284855" Feb 9 18:51:21.194987 kubelet[1986]: I0209 18:51:21.194900 1986 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-2ph7v" podStartSLOduration=21.194882662 pod.CreationTimestamp="2024-02-09 18:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:51:21.194553964 +0000 UTC m=+35.186062667" watchObservedRunningTime="2024-02-09 18:51:21.194882662 +0000 UTC m=+35.186391074" Feb 9 18:51:21.303558 systemd-networkd[1012]: cni0: Gained IPv6LL Feb 9 18:51:21.431554 systemd-networkd[1012]: vethdf1809e1: Gained IPv6LL Feb 9 18:51:22.130896 kubelet[1986]: E0209 18:51:22.130855 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:22.131255 env[1119]: time="2024-02-09T18:51:22.131213662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qb5r5,Uid:253ba62b-1586-4917-9635-a006996c6e51,Namespace:kube-system,Attempt:0,}" Feb 9 18:51:22.159578 systemd-networkd[1012]: veth2571dc93: Link UP Feb 9 18:51:22.161744 kernel: cni0: port 2(veth2571dc93) entered blocking state Feb 9 18:51:22.161805 kernel: cni0: port 2(veth2571dc93) entered disabled state Feb 9 18:51:22.161832 kernel: device veth2571dc93 entered promiscuous mode Feb 9 18:51:22.166122 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:51:22.166172 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2571dc93: link becomes ready Feb 9 18:51:22.166197 kernel: cni0: port 2(veth2571dc93) entered blocking state Feb 9 18:51:22.167642 kernel: cni0: port 2(veth2571dc93) entered forwarding state Feb 9 18:51:22.167894 systemd-networkd[1012]: veth2571dc93: Gained carrier Feb 9 18:51:22.169588 env[1119]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000b48e8), "name":"cbr0", "type":"bridge"} Feb 9 18:51:22.178511 env[1119]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T18:51:22.178424836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:51:22.178645 env[1119]: time="2024-02-09T18:51:22.178485551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:51:22.178645 env[1119]: time="2024-02-09T18:51:22.178494960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:51:22.178762 env[1119]: time="2024-02-09T18:51:22.178631551Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58a3b8e61c74e8ab068a8c6aae1cbb0bf463ecb67f2451676d8afbcc673b4584 pid=2845 runtime=io.containerd.runc.v2 Feb 9 18:51:22.189366 kubelet[1986]: E0209 18:51:22.189345 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:22.189900 systemd[1]: run-containerd-runc-k8s.io-58a3b8e61c74e8ab068a8c6aae1cbb0bf463ecb67f2451676d8afbcc673b4584-runc.69yIRw.mount: Deactivated successfully. Feb 9 18:51:22.191105 systemd[1]: Started cri-containerd-58a3b8e61c74e8ab068a8c6aae1cbb0bf463ecb67f2451676d8afbcc673b4584.scope. Feb 9 18:51:22.203519 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:51:22.224611 env[1119]: time="2024-02-09T18:51:22.224565731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qb5r5,Uid:253ba62b-1586-4917-9635-a006996c6e51,Namespace:kube-system,Attempt:0,} returns sandbox id \"58a3b8e61c74e8ab068a8c6aae1cbb0bf463ecb67f2451676d8afbcc673b4584\"" Feb 9 18:51:22.225817 kubelet[1986]: E0209 18:51:22.225355 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:22.229920 env[1119]: time="2024-02-09T18:51:22.229882963Z" level=info msg="CreateContainer within sandbox \"58a3b8e61c74e8ab068a8c6aae1cbb0bf463ecb67f2451676d8afbcc673b4584\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:51:22.243942 env[1119]: time="2024-02-09T18:51:22.243891154Z" level=info msg="CreateContainer within sandbox \"58a3b8e61c74e8ab068a8c6aae1cbb0bf463ecb67f2451676d8afbcc673b4584\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e38c4745ae974914c3b264c404fb8f62f59b143d8b5883c939d07b81b289c936\"" Feb 9 18:51:22.244354 env[1119]: time="2024-02-09T18:51:22.244326097Z" level=info msg="StartContainer for \"e38c4745ae974914c3b264c404fb8f62f59b143d8b5883c939d07b81b289c936\"" Feb 9 18:51:22.256609 systemd[1]: Started cri-containerd-e38c4745ae974914c3b264c404fb8f62f59b143d8b5883c939d07b81b289c936.scope.
Feb 9 18:51:22.279337 env[1119]: time="2024-02-09T18:51:22.279295474Z" level=info msg="StartContainer for \"e38c4745ae974914c3b264c404fb8f62f59b143d8b5883c939d07b81b289c936\" returns successfully" Feb 9 18:51:23.192015 kubelet[1986]: E0209 18:51:23.191986 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:23.192349 kubelet[1986]: E0209 18:51:23.192163 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:23.199338 kubelet[1986]: I0209 18:51:23.199299 1986 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-qb5r5" podStartSLOduration=23.199274006 pod.CreationTimestamp="2024-02-09 18:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:51:23.19890546 +0000 UTC m=+37.190413862" watchObservedRunningTime="2024-02-09 18:51:23.199274006 +0000 UTC m=+37.190782418" Feb 9 18:51:23.287565 systemd-networkd[1012]: veth2571dc93: Gained IPv6LL Feb 9 18:51:24.193283 kubelet[1986]: E0209 18:51:24.193257 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:25.195250 kubelet[1986]: E0209 18:51:25.195209 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:26.195674 kubelet[1986]: E0209 18:51:26.195650 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:51:27.560599 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:48082.service. Feb 9 18:51:27.600312 sshd[2990]: Accepted publickey for core from 10.0.0.1 port 48082 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:27.601463 sshd[2990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:27.604188 systemd-logind[1102]: New session 6 of user core. Feb 9 18:51:27.605048 systemd[1]: Started session-6.scope. Feb 9 18:51:27.714639 sshd[2990]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:27.716888 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:48082.service: Deactivated successfully. Feb 9 18:51:27.717662 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:51:27.718198 systemd-logind[1102]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:51:27.719045 systemd-logind[1102]: Removed session 6. Feb 9 18:51:32.719295 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:48090.service. Feb 9 18:51:32.757004 sshd[3024]: Accepted publickey for core from 10.0.0.1 port 48090 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:32.757962 sshd[3024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:32.760978 systemd-logind[1102]: New session 7 of user core. Feb 9 18:51:32.761709 systemd[1]: Started session-7.scope. Feb 9 18:51:32.855106 sshd[3024]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:32.856916 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:48090.service: Deactivated successfully. 
Feb 9 18:51:32.857568 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:51:32.858293 systemd-logind[1102]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:51:32.858867 systemd-logind[1102]: Removed session 7. Feb 9 18:51:37.860064 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:55846.service. Feb 9 18:51:38.035564 sshd[3056]: Accepted publickey for core from 10.0.0.1 port 55846 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:38.036561 sshd[3056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:38.040027 systemd-logind[1102]: New session 8 of user core. Feb 9 18:51:38.040970 systemd[1]: Started session-8.scope. Feb 9 18:51:38.177173 sshd[3056]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:38.179615 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:55846.service: Deactivated successfully. Feb 9 18:51:38.180250 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 18:51:38.180998 systemd-logind[1102]: Session 8 logged out. Waiting for processes to exit. Feb 9 18:51:38.181686 systemd-logind[1102]: Removed session 8. Feb 9 18:51:43.182489 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:55852.service. Feb 9 18:51:43.220556 sshd[3088]: Accepted publickey for core from 10.0.0.1 port 55852 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:43.221783 sshd[3088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:43.227348 systemd[1]: Started session-9.scope. Feb 9 18:51:43.228724 systemd-logind[1102]: New session 9 of user core. Feb 9 18:51:43.333205 sshd[3088]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:43.336601 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:55852.service: Deactivated successfully. Feb 9 18:51:43.337324 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 18:51:43.337958 systemd-logind[1102]: Session 9 logged out. Waiting for processes to exit. Feb 9 18:51:43.339499 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:55866.service. Feb 9 18:51:43.340836 systemd-logind[1102]: Removed session 9. Feb 9 18:51:43.376404 sshd[3102]: Accepted publickey for core from 10.0.0.1 port 55866 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:43.377346 sshd[3102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:43.381273 systemd-logind[1102]: New session 10 of user core. Feb 9 18:51:43.382491 systemd[1]: Started session-10.scope. Feb 9 18:51:43.593533 sshd[3102]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:43.595283 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:55874.service. Feb 9 18:51:43.597103 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:55866.service: Deactivated successfully. Feb 9 18:51:43.597689 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 18:51:43.600078 systemd-logind[1102]: Session 10 logged out. Waiting for processes to exit. Feb 9 18:51:43.600928 systemd-logind[1102]: Removed session 10. Feb 9 18:51:43.643417 sshd[3112]: Accepted publickey for core from 10.0.0.1 port 55874 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:43.645814 sshd[3112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:43.649405 systemd-logind[1102]: New session 11 of user core. Feb 9 18:51:43.650155 systemd[1]: Started session-11.scope. 
Feb 9 18:51:43.751100 sshd[3112]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:43.753333 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:55874.service: Deactivated successfully. Feb 9 18:51:43.753998 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 18:51:43.754596 systemd-logind[1102]: Session 11 logged out. Waiting for processes to exit. Feb 9 18:51:43.755199 systemd-logind[1102]: Removed session 11. Feb 9 18:51:48.755190 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:49482.service. Feb 9 18:51:48.792320 sshd[3149]: Accepted publickey for core from 10.0.0.1 port 49482 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:48.793278 sshd[3149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:48.796102 systemd-logind[1102]: New session 12 of user core. Feb 9 18:51:48.796872 systemd[1]: Started session-12.scope. Feb 9 18:51:48.892725 sshd[3149]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:48.895717 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:49482.service: Deactivated successfully. Feb 9 18:51:48.896256 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 18:51:48.896827 systemd-logind[1102]: Session 12 logged out. Waiting for processes to exit. Feb 9 18:51:48.897921 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:49494.service. Feb 9 18:51:48.898578 systemd-logind[1102]: Removed session 12. Feb 9 18:51:48.936104 sshd[3162]: Accepted publickey for core from 10.0.0.1 port 49494 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:48.937691 sshd[3162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:48.941370 systemd-logind[1102]: New session 13 of user core. Feb 9 18:51:48.942076 systemd[1]: Started session-13.scope. Feb 9 18:51:49.082835 sshd[3162]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:49.086342 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:49510.service. Feb 9 18:51:49.086730 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:49494.service: Deactivated successfully. Feb 9 18:51:49.087252 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 18:51:49.087802 systemd-logind[1102]: Session 13 logged out. Waiting for processes to exit. Feb 9 18:51:49.088533 systemd-logind[1102]: Removed session 13. Feb 9 18:51:49.123641 sshd[3174]: Accepted publickey for core from 10.0.0.1 port 49510 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:49.124583 sshd[3174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:49.127529 systemd-logind[1102]: New session 14 of user core. Feb 9 18:51:49.128311 systemd[1]: Started session-14.scope. Feb 9 18:51:50.088117 sshd[3174]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:50.092199 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:49522.service. Feb 9 18:51:50.092753 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:49510.service: Deactivated successfully. Feb 9 18:51:50.093552 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 18:51:50.094973 systemd-logind[1102]: Session 14 logged out. Waiting for processes to exit. Feb 9 18:51:50.096002 systemd-logind[1102]: Removed session 14. 
Feb 9 18:51:50.135048 sshd[3223]: Accepted publickey for core from 10.0.0.1 port 49522 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:50.136062 sshd[3223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:50.139394 systemd-logind[1102]: New session 15 of user core. Feb 9 18:51:50.140183 systemd[1]: Started session-15.scope. Feb 9 18:51:50.340292 sshd[3223]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:50.344279 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:49532.service. Feb 9 18:51:50.348725 systemd-logind[1102]: Session 15 logged out. Waiting for processes to exit. Feb 9 18:51:50.349312 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:49522.service: Deactivated successfully. Feb 9 18:51:50.350036 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 18:51:50.350790 systemd-logind[1102]: Removed session 15. Feb 9 18:51:50.385685 sshd[3270]: Accepted publickey for core from 10.0.0.1 port 49532 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:50.386943 sshd[3270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:50.390337 systemd-logind[1102]: New session 16 of user core. Feb 9 18:51:50.391328 systemd[1]: Started session-16.scope. Feb 9 18:51:50.495927 sshd[3270]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:50.498089 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:49532.service: Deactivated successfully. Feb 9 18:51:50.499081 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 18:51:50.499760 systemd-logind[1102]: Session 16 logged out. Waiting for processes to exit. Feb 9 18:51:50.500387 systemd-logind[1102]: Removed session 16. Feb 9 18:51:55.500245 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:49534.service. Feb 9 18:51:55.538145 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 49534 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:51:55.539073 sshd[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:51:55.541952 systemd-logind[1102]: New session 17 of user core. Feb 9 18:51:55.542697 systemd[1]: Started session-17.scope. Feb 9 18:51:55.639394 sshd[3302]: pam_unix(sshd:session): session closed for user core Feb 9 18:51:55.641567 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:49534.service: Deactivated successfully. Feb 9 18:51:55.642216 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 18:51:55.642780 systemd-logind[1102]: Session 17 logged out. Waiting for processes to exit. Feb 9 18:51:55.643324 systemd-logind[1102]: Removed session 17. Feb 9 18:52:00.643525 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:41186.service. Feb 9 18:52:00.680923 sshd[3360]: Accepted publickey for core from 10.0.0.1 port 41186 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:52:00.681728 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:52:00.684604 systemd-logind[1102]: New session 18 of user core. Feb 9 18:52:00.685499 systemd[1]: Started session-18.scope. Feb 9 18:52:00.778484 sshd[3360]: pam_unix(sshd:session): session closed for user core Feb 9 18:52:00.780413 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:41186.service: Deactivated successfully. Feb 9 18:52:00.781186 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 18:52:00.781811 systemd-logind[1102]: Session 18 logged out. Waiting for processes to exit. Feb 9 18:52:00.782486 systemd-logind[1102]: Removed session 18. 
Feb 9 18:52:03.131332 kubelet[1986]: E0209 18:52:03.131302 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:52:05.782448 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:41188.service. Feb 9 18:52:05.820764 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 41188 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:52:05.821887 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:52:05.824935 systemd-logind[1102]: New session 19 of user core. Feb 9 18:52:05.825892 systemd[1]: Started session-19.scope. Feb 9 18:52:05.921161 sshd[3393]: pam_unix(sshd:session): session closed for user core Feb 9 18:52:05.923080 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:41188.service: Deactivated successfully. Feb 9 18:52:05.923721 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 18:52:05.924151 systemd-logind[1102]: Session 19 logged out. Waiting for processes to exit. Feb 9 18:52:05.924758 systemd-logind[1102]: Removed session 19. Feb 9 18:52:10.925133 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:40128.service. Feb 9 18:52:10.964841 sshd[3424]: Accepted publickey for core from 10.0.0.1 port 40128 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:52:10.965617 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:52:10.968369 systemd-logind[1102]: New session 20 of user core. Feb 9 18:52:10.969253 systemd[1]: Started session-20.scope. Feb 9 18:52:11.066596 sshd[3424]: pam_unix(sshd:session): session closed for user core Feb 9 18:52:11.068779 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:40128.service: Deactivated successfully. Feb 9 18:52:11.069362 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 18:52:11.070049 systemd-logind[1102]: Session 20 logged out. Waiting for processes to exit. Feb 9 18:52:11.070741 systemd-logind[1102]: Removed session 20. Feb 9 18:52:13.130690 kubelet[1986]: E0209 18:52:13.130658 1986 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"