Feb 13 06:06:46.557540 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 13 06:06:46.557552 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 06:06:46.557559 kernel: BIOS-provided physical RAM map:
Feb 13 06:06:46.557563 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 06:06:46.557566 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 06:06:46.557570 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 06:06:46.557574 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 06:06:46.557578 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 06:06:46.557581 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000082589fff] usable
Feb 13 06:06:46.557585 kernel: BIOS-e820: [mem 0x000000008258a000-0x000000008258afff] ACPI NVS
Feb 13 06:06:46.557590 kernel: BIOS-e820: [mem 0x000000008258b000-0x000000008258bfff] reserved
Feb 13 06:06:46.557593 kernel: BIOS-e820: [mem 0x000000008258c000-0x000000008afccfff] usable
Feb 13 06:06:46.557597 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 13 06:06:46.557601 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 13 06:06:46.557605 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 13 06:06:46.557611 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 13 06:06:46.557614 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 13 06:06:46.557619 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 13 06:06:46.557623 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 06:06:46.557627 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 06:06:46.557631 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 06:06:46.557634 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 06:06:46.557638 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 06:06:46.557642 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 13 06:06:46.557646 kernel: NX (Execute Disable) protection: active
Feb 13 06:06:46.557650 kernel: SMBIOS 3.2.1 present.
Feb 13 06:06:46.557655 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Feb 13 06:06:46.557659 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 06:06:46.557663 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 06:06:46.557668 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 06:06:46.557672 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 06:06:46.557676 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 13 06:06:46.557681 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 06:06:46.557685 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 13 06:06:46.557689 kernel: Using GB pages for direct mapping
Feb 13 06:06:46.557693 kernel: ACPI: Early table checksum verification disabled
Feb 13 06:06:46.557698 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 06:06:46.557702 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 06:06:46.557706 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 13 06:06:46.557711 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 06:06:46.557716 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 13 06:06:46.557721 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 13 06:06:46.557726 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 13 06:06:46.557731 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 06:06:46.557735 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 06:06:46.557740 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 06:06:46.557744 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 06:06:46.557749 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 06:06:46.557753 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 06:06:46.557758 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:06:46.557763 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 06:06:46.557767 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 06:06:46.557772 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:06:46.557776 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:06:46.557781 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 06:06:46.557785 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 06:06:46.557790 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:06:46.557794 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:06:46.557799 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 06:06:46.557804 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 13 06:06:46.557808 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 06:06:46.557813 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 06:06:46.557817 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 06:06:46.557822 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 13 06:06:46.557826 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 06:06:46.557831 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 06:06:46.557835 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 06:06:46.557841 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 06:06:46.557845 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 06:06:46.557850 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 13 06:06:46.557854 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 13 06:06:46.557858 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 13 06:06:46.557863 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 13 06:06:46.557867 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 13 06:06:46.557872 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 13 06:06:46.557877 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 13 06:06:46.557882 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 13 06:06:46.557886 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 13 06:06:46.557891 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 13 06:06:46.557895 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 13 06:06:46.557899 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 13 06:06:46.557904 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 13 06:06:46.557908 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 13 06:06:46.557913 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 13 06:06:46.557918 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 13 06:06:46.557923 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 13 06:06:46.557927 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 13 06:06:46.557931 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 13 06:06:46.557936 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 13 06:06:46.557940 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 13 06:06:46.557945 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 13 06:06:46.557949 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 13 06:06:46.557954 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 13 06:06:46.557959 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 13 06:06:46.557963 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 13 06:06:46.557968 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 13 06:06:46.557972 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 13 06:06:46.557977 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 13 06:06:46.557981 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 13 06:06:46.557986 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 13 06:06:46.557990 kernel: No NUMA configuration found
Feb 13 06:06:46.557995 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 13 06:06:46.558000 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 13 06:06:46.558005 kernel: Zone ranges:
Feb 13 06:06:46.558009 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 06:06:46.558014 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 06:06:46.558018 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 13 06:06:46.558023 kernel: Movable zone start for each node
Feb 13 06:06:46.558027 kernel: Early memory node ranges
Feb 13 06:06:46.558031 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 06:06:46.558036 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 06:06:46.558041 kernel: node 0: [mem 0x0000000040400000-0x0000000082589fff]
Feb 13 06:06:46.558046 kernel: node 0: [mem 0x000000008258c000-0x000000008afccfff]
Feb 13 06:06:46.558050 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 13 06:06:46.558055 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 13 06:06:46.558059 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 13 06:06:46.558064 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 13 06:06:46.558069 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 06:06:46.558076 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 06:06:46.558082 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 06:06:46.558086 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 06:06:46.558091 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 13 06:06:46.558097 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 13 06:06:46.558102 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 13 06:06:46.558107 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 13 06:06:46.558111 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 06:06:46.558116 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 06:06:46.558121 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 06:06:46.558126 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 06:06:46.558131 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 06:06:46.558136 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 06:06:46.558141 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 06:06:46.558145 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 06:06:46.558150 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 06:06:46.558155 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 06:06:46.558160 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 06:06:46.558165 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 06:06:46.558169 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 06:06:46.558175 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 06:06:46.558179 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 06:06:46.558184 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 06:06:46.558189 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 06:06:46.558194 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 06:06:46.558198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 06:06:46.558203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 06:06:46.558208 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 06:06:46.558213 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 06:06:46.558218 kernel: TSC deadline timer available
Feb 13 06:06:46.558223 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 06:06:46.558228 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 13 06:06:46.558233 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 06:06:46.558238 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 06:06:46.558243 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 13 06:06:46.558247 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 13 06:06:46.558252 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 13 06:06:46.558257 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 06:06:46.558262 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 06:06:46.558267 kernel: Policy zone: Normal
Feb 13 06:06:46.558273 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 06:06:46.558298 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 06:06:46.558303 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 06:06:46.558308 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 06:06:46.558313 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 06:06:46.558318 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 13 06:06:46.558338 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 06:06:46.558343 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 13 06:06:46.558348 kernel: ftrace: allocated 135 pages with 4 groups
Feb 13 06:06:46.558353 kernel: rcu: Hierarchical RCU implementation.
Feb 13 06:06:46.558358 kernel: rcu: RCU event tracing is enabled.
Feb 13 06:06:46.558363 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 06:06:46.558368 kernel: Rude variant of Tasks RCU enabled.
Feb 13 06:06:46.558372 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 06:06:46.558377 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 06:06:46.558383 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 06:06:46.558388 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 06:06:46.558393 kernel: random: crng init done
Feb 13 06:06:46.558397 kernel: Console: colour dummy device 80x25
Feb 13 06:06:46.558402 kernel: printk: console [tty0] enabled
Feb 13 06:06:46.558407 kernel: printk: console [ttyS1] enabled
Feb 13 06:06:46.558412 kernel: ACPI: Core revision 20210730
Feb 13 06:06:46.558417 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 06:06:46.558421 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 06:06:46.558427 kernel: DMAR: Host address width 39
Feb 13 06:06:46.558432 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 06:06:46.558437 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 06:06:46.558441 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 06:06:46.558446 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 06:06:46.558451 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 06:06:46.558456 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 06:06:46.558461 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 06:06:46.558465 kernel: x2apic enabled
Feb 13 06:06:46.558471 kernel: Switched APIC routing to cluster x2apic.
Feb 13 06:06:46.558476 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 06:06:46.558481 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 06:06:46.558486 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 06:06:46.558490 kernel: process: using mwait in idle threads
Feb 13 06:06:46.558495 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 06:06:46.558500 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 06:06:46.558504 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 06:06:46.558509 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 06:06:46.558515 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 13 06:06:46.558520 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 06:06:46.558524 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 06:06:46.558529 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 06:06:46.558534 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 06:06:46.558538 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 13 06:06:46.558543 kernel: TAA: Mitigation: TSX disabled
Feb 13 06:06:46.558548 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 06:06:46.558553 kernel: SRBDS: Mitigation: Microcode
Feb 13 06:06:46.558557 kernel: GDS: Vulnerable: No microcode
Feb 13 06:06:46.558562 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 06:06:46.558568 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 06:06:46.558572 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 06:06:46.558577 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 06:06:46.558582 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 06:06:46.558587 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 06:06:46.558591 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 06:06:46.558596 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 06:06:46.558601 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 06:06:46.558606 kernel: Freeing SMP alternatives memory: 32K
Feb 13 06:06:46.558610 kernel: pid_max: default: 32768 minimum: 301
Feb 13 06:06:46.558615 kernel: LSM: Security Framework initializing
Feb 13 06:06:46.558620 kernel: SELinux: Initializing.
Feb 13 06:06:46.558625 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 06:06:46.558630 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 06:06:46.558635 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 06:06:46.558640 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 06:06:46.558644 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 06:06:46.558649 kernel: ... version: 4
Feb 13 06:06:46.558654 kernel: ... bit width: 48
Feb 13 06:06:46.558659 kernel: ... generic registers: 4
Feb 13 06:06:46.558664 kernel: ... value mask: 0000ffffffffffff
Feb 13 06:06:46.558668 kernel: ... max period: 00007fffffffffff
Feb 13 06:06:46.558674 kernel: ... fixed-purpose events: 3
Feb 13 06:06:46.558679 kernel: ... event mask: 000000070000000f
Feb 13 06:06:46.558683 kernel: signal: max sigframe size: 2032
Feb 13 06:06:46.558688 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 06:06:46.558693 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 06:06:46.558698 kernel: smp: Bringing up secondary CPUs ...
Feb 13 06:06:46.558703 kernel: x86: Booting SMP configuration:
Feb 13 06:06:46.558708 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 13 06:06:46.558713 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 06:06:46.558718 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 13 06:06:46.558723 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 06:06:46.558728 kernel: smpboot: Max logical packages: 1
Feb 13 06:06:46.558732 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 06:06:46.558737 kernel: devtmpfs: initialized
Feb 13 06:06:46.558742 kernel: x86/mm: Memory block size: 128MB
Feb 13 06:06:46.558747 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8258a000-0x8258afff] (4096 bytes)
Feb 13 06:06:46.558752 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 13 06:06:46.558757 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 06:06:46.558762 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 06:06:46.558767 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 06:06:46.558772 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 06:06:46.558777 kernel: audit: initializing netlink subsys (disabled)
Feb 13 06:06:46.558781 kernel: audit: type=2000 audit(1707804401.040:1): state=initialized audit_enabled=0 res=1
Feb 13 06:06:46.558786 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 06:06:46.558791 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 06:06:46.558796 kernel: cpuidle: using governor menu
Feb 13 06:06:46.558801 kernel: ACPI: bus type PCI registered
Feb 13 06:06:46.558806 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 06:06:46.558811 kernel: dca service started, version 1.12.1
Feb 13 06:06:46.558816 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 06:06:46.558820 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 13 06:06:46.558825 kernel: PCI: Using configuration type 1 for base access
Feb 13 06:06:46.558830 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 06:06:46.558835 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 06:06:46.558840 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 06:06:46.558845 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 06:06:46.558850 kernel: ACPI: Added _OSI(Module Device)
Feb 13 06:06:46.558855 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 06:06:46.558860 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 06:06:46.558864 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 06:06:46.558869 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 13 06:06:46.558874 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 13 06:06:46.558879 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 13 06:06:46.558884 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 06:06:46.558889 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:06:46.558894 kernel: ACPI: SSDT 0xFFFFA01BC0212500 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 06:06:46.558899 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 13 06:06:46.558904 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:06:46.558909 kernel: ACPI: SSDT 0xFFFFA01BC1AE2800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 06:06:46.558913 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:06:46.558918 kernel: ACPI: SSDT 0xFFFFA01BC1A5D800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 06:06:46.558923 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:06:46.558928 kernel: ACPI: SSDT 0xFFFFA01BC1A5C800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 06:06:46.558932 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:06:46.558938 kernel: ACPI: SSDT 0xFFFFA01BC0149000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 06:06:46.558942 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:06:46.558947 kernel: ACPI: SSDT 0xFFFFA01BC1AE5000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 06:06:46.558952 kernel: ACPI: Interpreter enabled
Feb 13 06:06:46.558957 kernel: ACPI: PM: (supports S0 S5)
Feb 13 06:06:46.558962 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 06:06:46.558966 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 06:06:46.558971 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 06:06:46.558976 kernel: HEST: Table parsing has been initialized.
Feb 13 06:06:46.558981 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 06:06:46.558986 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 06:06:46.558991 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 06:06:46.558996 kernel: ACPI: PM: Power Resource [USBC]
Feb 13 06:06:46.559001 kernel: ACPI: PM: Power Resource [V0PR]
Feb 13 06:06:46.559005 kernel: ACPI: PM: Power Resource [V1PR]
Feb 13 06:06:46.559010 kernel: ACPI: PM: Power Resource [V2PR]
Feb 13 06:06:46.559015 kernel: ACPI: PM: Power Resource [WRST]
Feb 13 06:06:46.559020 kernel: ACPI: PM: Power Resource [FN00]
Feb 13 06:06:46.559025 kernel: ACPI: PM: Power Resource [FN01]
Feb 13 06:06:46.559030 kernel: ACPI: PM: Power Resource [FN02]
Feb 13 06:06:46.559035 kernel: ACPI: PM: Power Resource [FN03]
Feb 13 06:06:46.559039 kernel: ACPI: PM: Power Resource [FN04]
Feb 13 06:06:46.559044 kernel: ACPI: PM: Power Resource [PIN]
Feb 13 06:06:46.559049 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 06:06:46.559113 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 06:06:46.559158 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 06:06:46.559199 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 06:06:46.559207 kernel: PCI host bridge to bus 0000:00
Feb 13 06:06:46.559250 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 06:06:46.559307 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 06:06:46.559357 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 06:06:46.559393 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 13 06:06:46.559429 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 06:06:46.559466 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 06:06:46.559516 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 06:06:46.559564 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 06:06:46.559607 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 06:06:46.559653 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 06:06:46.559695 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 13 06:06:46.559743 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 06:06:46.559785 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 13 06:06:46.559831 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 06:06:46.559872 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 13 06:06:46.559915 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 06:06:46.559959 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 06:06:46.560001 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 13 06:06:46.560042 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 13 06:06:46.560088 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 06:06:46.560129 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 06:06:46.560176 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 06:06:46.560217 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 06:06:46.560261 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 06:06:46.560305 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 13 06:06:46.560347 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 06:06:46.560390 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 06:06:46.560432 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 13 06:06:46.560471 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 06:06:46.560516 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 06:06:46.560558 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 13 06:06:46.560599 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 06:06:46.560642 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 06:06:46.560682 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 13 06:06:46.560723 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 13 06:06:46.560763 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 13 06:06:46.560804 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 13 06:06:46.560850 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 13 06:06:46.560893 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 13 06:06:46.560934 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 06:06:46.560978 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 06:06:46.561020 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 06:06:46.561067 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 06:06:46.561110 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 06:06:46.561155 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 06:06:46.561197 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 06:06:46.561242 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 06:06:46.561286 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 06:06:46.561332 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 13 06:06:46.561375 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 13 06:06:46.561420 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 06:06:46.561462 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 06:06:46.561509 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 06:06:46.561554 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 06:06:46.561595 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 13 06:06:46.561635 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 06:06:46.561681 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 06:06:46.561723 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 06:06:46.561770 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 06:06:46.561816 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 06:06:46.561858 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 13 06:06:46.561901 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 13 06:06:46.561944 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 06:06:46.561986 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 06:06:46.562033 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 06:06:46.562076 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 06:06:46.562120 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 13 06:06:46.562162 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 13 06:06:46.562204 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 06:06:46.562246 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 06:06:46.562290 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 06:06:46.562331 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 06:06:46.562372 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 06:06:46.562413 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 13 06:06:46.562461 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Feb 13 06:06:46.562503 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Feb 13 06:06:46.562546 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 13 06:06:46.562588 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Feb 13 06:06:46.562631 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Feb 13 06:06:46.562672 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 13 06:06:46.562712 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 06:06:46.562755 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 13 06:06:46.562801 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 13 06:06:46.562845 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Feb 13 06:06:46.562939 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 13 06:06:46.562984 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Feb 13 06:06:46.563025 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 13 06:06:46.563067 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 13 06:06:46.563107 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 06:06:46.563150 kernel: pci 0000:00:1b.5: bridge window [mem
0x95300000-0x953fffff] Feb 13 06:06:46.563191 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 06:06:46.563238 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 06:06:46.563300 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 06:06:46.563364 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 06:06:46.563408 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 06:06:46.563449 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 06:06:46.563492 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 06:06:46.563534 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 06:06:46.563581 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 06:06:46.563630 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 06:06:46.563675 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 06:06:46.563719 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 06:06:46.563764 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 06:06:46.563807 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 06:06:46.563853 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 06:06:46.563898 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 06:06:46.563941 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 06:06:46.563982 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 06:06:46.564026 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 06:06:46.564033 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 06:06:46.564039 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 06:06:46.564045 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 06:06:46.564051 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 06:06:46.564056 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 
0 Feb 13 06:06:46.564061 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 06:06:46.564066 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 06:06:46.564071 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 06:06:46.564077 kernel: iommu: Default domain type: Translated Feb 13 06:06:46.564082 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 06:06:46.564125 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 06:06:46.564171 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 06:06:46.564215 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 06:06:46.564222 kernel: vgaarb: loaded Feb 13 06:06:46.564228 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 06:06:46.564233 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 06:06:46.564238 kernel: PTP clock support registered Feb 13 06:06:46.564243 kernel: PCI: Using ACPI for IRQ routing Feb 13 06:06:46.564248 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 06:06:46.564253 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 06:06:46.564260 kernel: e820: reserve RAM buffer [mem 0x8258a000-0x83ffffff] Feb 13 06:06:46.564265 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 06:06:46.564270 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 06:06:46.564277 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 06:06:46.564282 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 06:06:46.564305 kernel: clocksource: Switched to clocksource tsc-early Feb 13 06:06:46.564311 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 06:06:46.564316 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 06:06:46.564321 kernel: pnp: PnP ACPI init Feb 13 06:06:46.564381 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 06:06:46.564422 
kernel: pnp 00:02: [dma 0 disabled] Feb 13 06:06:46.564463 kernel: pnp 00:03: [dma 0 disabled] Feb 13 06:06:46.564506 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 06:06:46.564543 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 06:06:46.564583 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 06:06:46.564625 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 06:06:46.564663 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 06:06:46.564699 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 06:06:46.564736 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 06:06:46.564772 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 06:06:46.564808 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 06:06:46.564848 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 06:06:46.564886 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 06:06:46.564926 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 06:06:46.564963 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 06:06:46.565000 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 06:06:46.565036 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 06:06:46.565071 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 06:06:46.565108 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 06:06:46.565146 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 06:06:46.565187 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 06:06:46.565194 kernel: pnp: PnP ACPI: found 10 devices Feb 13 06:06:46.565200 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 06:06:46.565205 
kernel: NET: Registered PF_INET protocol family Feb 13 06:06:46.565210 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 06:06:46.565215 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 06:06:46.565221 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 06:06:46.565227 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 06:06:46.565233 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 06:06:46.565238 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 06:06:46.565243 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 06:06:46.565248 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 06:06:46.565253 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 06:06:46.565259 kernel: NET: Registered PF_XDP protocol family Feb 13 06:06:46.565320 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 06:06:46.565383 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 06:06:46.565424 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 06:06:46.565467 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 06:06:46.565510 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 06:06:46.565552 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 06:06:46.565595 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 06:06:46.565637 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 06:06:46.565678 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 06:06:46.565721 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 
06:06:46.565763 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 06:06:46.565803 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 06:06:46.565845 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 06:06:46.565885 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 06:06:46.565928 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 06:06:46.565969 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 06:06:46.566010 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 06:06:46.566052 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 06:06:46.566094 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 06:06:46.566137 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 06:06:46.566179 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 06:06:46.566220 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 06:06:46.566262 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 06:06:46.566345 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 06:06:46.566383 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 06:06:46.566418 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 06:06:46.566454 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 06:06:46.566489 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 06:06:46.566524 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 06:06:46.566560 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 06:06:46.566601 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 06:06:46.566642 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 06:06:46.566685 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 06:06:46.566724 kernel: pci_bus 
0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 06:06:46.566766 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 06:06:46.566804 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 06:06:46.566847 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 06:06:46.566886 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 06:06:46.566926 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 06:06:46.566965 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 06:06:46.566972 kernel: PCI: CLS 64 bytes, default 64 Feb 13 06:06:46.566978 kernel: DMAR: No ATSR found Feb 13 06:06:46.566983 kernel: DMAR: No SATC found Feb 13 06:06:46.566988 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 06:06:46.567029 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 06:06:46.567073 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 06:06:46.567114 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 06:06:46.567154 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 06:06:46.567196 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 06:06:46.567236 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 06:06:46.567296 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 06:06:46.567351 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 06:06:46.567392 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 06:06:46.567435 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 06:06:46.567475 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 06:06:46.567516 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 06:06:46.567557 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 06:06:46.567597 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 06:06:46.567639 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 06:06:46.567680 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 06:06:46.567721 kernel: pci 0000:00:1c.3: 
Adding to iommu group 12 Feb 13 06:06:46.567763 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 06:06:46.567804 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 06:06:46.567844 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 06:06:46.567885 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 06:06:46.567928 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 06:06:46.567970 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 06:06:46.568013 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 06:06:46.568056 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 06:06:46.568101 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 06:06:46.568146 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 06:06:46.568153 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 06:06:46.568159 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 06:06:46.568164 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 06:06:46.568169 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 06:06:46.568174 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 06:06:46.568180 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 06:06:46.568186 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 06:06:46.568230 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 06:06:46.568238 kernel: Initialise system trusted keyrings Feb 13 06:06:46.568243 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 06:06:46.568248 kernel: Key type asymmetric registered Feb 13 06:06:46.568253 kernel: Asymmetric key parser 'x509' registered Feb 13 06:06:46.568258 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 06:06:46.568263 kernel: io scheduler mq-deadline registered Feb 13 06:06:46.568270 kernel: io scheduler kyber 
registered Feb 13 06:06:46.568277 kernel: io scheduler bfq registered Feb 13 06:06:46.568318 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 06:06:46.568361 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 06:06:46.568402 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 06:06:46.568443 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 06:06:46.568484 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 06:06:46.568524 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 06:06:46.568572 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 06:06:46.568580 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 06:06:46.568585 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 06:06:46.568591 kernel: pstore: Registered erst as persistent store backend Feb 13 06:06:46.568596 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 06:06:46.568601 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 06:06:46.568606 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 06:06:46.568611 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 06:06:46.568618 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 06:06:46.568660 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 06:06:46.568668 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 06:06:46.568706 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 06:06:46.568743 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 06:06:46.568781 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T06:06:45 UTC (1707804405) Feb 13 06:06:46.568817 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 06:06:46.568825 kernel: fail to initialize ptp_kvm Feb 13 06:06:46.568831 kernel: intel_pstate: Intel P-state driver initializing Feb 13 06:06:46.568836 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 06:06:46.568842 kernel: intel_pstate: HWP enabled Feb 13 06:06:46.568847 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 06:06:46.568852 kernel: vesafb: scrolling: redraw Feb 13 06:06:46.568857 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 06:06:46.568862 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000001bb28994, using 768k, total 768k Feb 13 06:06:46.568868 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 06:06:46.568873 kernel: fb0: VESA VGA frame buffer device Feb 13 06:06:46.568879 kernel: NET: Registered PF_INET6 protocol family Feb 13 06:06:46.568884 kernel: Segment Routing with IPv6 Feb 13 06:06:46.568889 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 06:06:46.568894 kernel: NET: Registered PF_PACKET protocol family Feb 13 06:06:46.568900 kernel: Key type dns_resolver registered Feb 13 06:06:46.568905 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 06:06:46.568910 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 13 06:06:46.568915 kernel: IPI shorthand broadcast: enabled Feb 13 06:06:46.568920 kernel: sched_clock: Marking stable (1680743402, 1339723660)->(4438707974, -1418240912) Feb 13 06:06:46.568926 kernel: registered taskstats version 1 Feb 13 06:06:46.568931 kernel: Loading compiled-in X.509 certificates Feb 13 06:06:46.568937 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 06:06:46.568942 kernel: Key type .fscrypt registered Feb 13 06:06:46.568947 kernel: Key type fscrypt-provisioning registered Feb 13 06:06:46.568952 kernel: pstore: Using crash dump compression: deflate Feb 13 06:06:46.568957 kernel: ima: Allocated hash algorithm: sha1 Feb 13 06:06:46.568962 kernel: ima: No architecture policies found Feb 13 06:06:46.568968 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 06:06:46.568974 kernel: Write protecting the kernel read-only data: 28672k Feb 13 06:06:46.568979 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 06:06:46.568984 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 06:06:46.568989 kernel: Run /init as init process Feb 13 06:06:46.568995 kernel: with arguments: Feb 13 06:06:46.569000 kernel: /init Feb 13 06:06:46.569005 kernel: with environment: Feb 13 06:06:46.569010 kernel: HOME=/ Feb 13 06:06:46.569015 kernel: TERM=linux Feb 13 06:06:46.569021 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 06:06:46.569027 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 06:06:46.569034 systemd[1]: Detected architecture x86-64. Feb 13 06:06:46.569040 systemd[1]: Running in initrd. 
Feb 13 06:06:46.569045 systemd[1]: No hostname configured, using default hostname. Feb 13 06:06:46.569050 systemd[1]: Hostname set to . Feb 13 06:06:46.569056 systemd[1]: Initializing machine ID from random generator. Feb 13 06:06:46.569062 systemd[1]: Queued start job for default target initrd.target. Feb 13 06:06:46.569067 systemd[1]: Started systemd-ask-password-console.path. Feb 13 06:06:46.569073 systemd[1]: Reached target cryptsetup.target. Feb 13 06:06:46.569078 systemd[1]: Reached target paths.target. Feb 13 06:06:46.569083 systemd[1]: Reached target slices.target. Feb 13 06:06:46.569089 systemd[1]: Reached target swap.target. Feb 13 06:06:46.569094 systemd[1]: Reached target timers.target. Feb 13 06:06:46.569099 systemd[1]: Listening on iscsid.socket. Feb 13 06:06:46.569106 systemd[1]: Listening on iscsiuio.socket. Feb 13 06:06:46.569111 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 06:06:46.569117 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 06:06:46.569122 systemd[1]: Listening on systemd-journald.socket. Feb 13 06:06:46.569128 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Feb 13 06:06:46.569133 systemd[1]: Listening on systemd-networkd.socket. Feb 13 06:06:46.569139 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Feb 13 06:06:46.569144 kernel: clocksource: Switched to clocksource tsc Feb 13 06:06:46.569150 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 06:06:46.569155 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 06:06:46.569161 systemd[1]: Reached target sockets.target. Feb 13 06:06:46.569166 systemd[1]: Starting kmod-static-nodes.service... Feb 13 06:06:46.569172 systemd[1]: Finished network-cleanup.service. Feb 13 06:06:46.569177 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 06:06:46.569183 systemd[1]: Starting systemd-journald.service... 
Feb 13 06:06:46.569188 systemd[1]: Starting systemd-modules-load.service... Feb 13 06:06:46.569195 systemd-journald[267]: Journal started Feb 13 06:06:46.569221 systemd-journald[267]: Runtime Journal (/run/log/journal/9310a378398549e381b12b71e97b587c) is 8.0M, max 640.1M, 632.1M free. Feb 13 06:06:46.571859 systemd-modules-load[268]: Inserted module 'overlay' Feb 13 06:06:46.630366 kernel: audit: type=1334 audit(1707804406.577:2): prog-id=6 op=LOAD Feb 13 06:06:46.630377 systemd[1]: Starting systemd-resolved.service... Feb 13 06:06:46.630385 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 06:06:46.577000 audit: BPF prog-id=6 op=LOAD Feb 13 06:06:46.662279 kernel: Bridge firewalling registered Feb 13 06:06:46.662311 systemd[1]: Starting systemd-vconsole-setup.service... Feb 13 06:06:46.678063 systemd-modules-load[268]: Inserted module 'br_netfilter' Feb 13 06:06:46.683764 systemd-resolved[270]: Positive Trust Anchors: Feb 13 06:06:46.717352 kernel: SCSI subsystem initialized Feb 13 06:06:46.717365 systemd[1]: Started systemd-journald.service. Feb 13 06:06:46.683769 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 06:06:46.834354 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 06:06:46.834366 kernel: audit: type=1130 audit(1707804406.742:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:06:46.834373 kernel: device-mapper: uevent: version 1.0.3 Feb 13 06:06:46.834380 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 13 06:06:46.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:46.683788 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 06:06:46.907492 kernel: audit: type=1130 audit(1707804406.841:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:46.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:46.685395 systemd-resolved[270]: Defaulting to hostname 'linux'. Feb 13 06:06:46.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:46.743474 systemd[1]: Started systemd-resolved.service. Feb 13 06:06:47.009488 kernel: audit: type=1130 audit(1707804406.914:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:06:47.009502 kernel: audit: type=1130 audit(1707804406.965:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:46.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:46.835534 systemd-modules-load[268]: Inserted module 'dm_multipath' Feb 13 06:06:47.063057 kernel: audit: type=1130 audit(1707804407.018:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:47.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:46.862604 systemd[1]: Finished kmod-static-nodes.service. Feb 13 06:06:47.117366 kernel: audit: type=1130 audit(1707804407.071:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:47.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:46.935711 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 06:06:46.966558 systemd[1]: Finished systemd-modules-load.service. Feb 13 06:06:47.018570 systemd[1]: Finished systemd-vconsole-setup.service. Feb 13 06:06:47.071551 systemd[1]: Reached target nss-lookup.target. Feb 13 06:06:47.125933 systemd[1]: Starting dracut-cmdline-ask.service... Feb 13 06:06:47.147886 systemd[1]: Starting systemd-sysctl.service... 
Feb 13 06:06:47.148179 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 06:06:47.151066 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 06:06:47.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:47.151795 systemd[1]: Finished systemd-sysctl.service. Feb 13 06:06:47.200361 kernel: audit: type=1130 audit(1707804407.149:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:47.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:47.213606 systemd[1]: Finished dracut-cmdline-ask.service. Feb 13 06:06:47.278401 kernel: audit: type=1130 audit(1707804407.213:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:47.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:47.270933 systemd[1]: Starting dracut-cmdline.service... 
Feb 13 06:06:47.292364 dracut-cmdline[294]: dracut-dracut-053 Feb 13 06:06:47.292364 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 13 06:06:47.292364 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 06:06:47.360354 kernel: Loading iSCSI transport class v2.0-870. Feb 13 06:06:47.360366 kernel: iscsi: registered transport (tcp) Feb 13 06:06:47.410870 kernel: iscsi: registered transport (qla4xxx) Feb 13 06:06:47.410888 kernel: QLogic iSCSI HBA Driver Feb 13 06:06:47.427601 systemd[1]: Finished dracut-cmdline.service. Feb 13 06:06:47.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:47.428149 systemd[1]: Starting dracut-pre-udev.service... 
Feb 13 06:06:47.483354 kernel: raid6: avx2x4 gen() 45892 MB/s Feb 13 06:06:47.518354 kernel: raid6: avx2x4 xor() 20715 MB/s Feb 13 06:06:47.553350 kernel: raid6: avx2x2 gen() 53805 MB/s Feb 13 06:06:47.588309 kernel: raid6: avx2x2 xor() 32126 MB/s Feb 13 06:06:47.623350 kernel: raid6: avx2x1 gen() 45338 MB/s Feb 13 06:06:47.657353 kernel: raid6: avx2x1 xor() 27898 MB/s Feb 13 06:06:47.691348 kernel: raid6: sse2x4 gen() 21385 MB/s Feb 13 06:06:47.725311 kernel: raid6: sse2x4 xor() 11951 MB/s Feb 13 06:06:47.759352 kernel: raid6: sse2x2 gen() 21706 MB/s Feb 13 06:06:47.793349 kernel: raid6: sse2x2 xor() 13439 MB/s Feb 13 06:06:47.827352 kernel: raid6: sse2x1 gen() 18300 MB/s Feb 13 06:06:47.878895 kernel: raid6: sse2x1 xor() 8930 MB/s Feb 13 06:06:47.878911 kernel: raid6: using algorithm avx2x2 gen() 53805 MB/s Feb 13 06:06:47.878918 kernel: raid6: .... xor() 32126 MB/s, rmw enabled Feb 13 06:06:47.896956 kernel: raid6: using avx2x2 recovery algorithm Feb 13 06:06:47.942332 kernel: xor: automatically using best checksumming function avx Feb 13 06:06:48.021308 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 06:06:48.026452 systemd[1]: Finished dracut-pre-udev.service. Feb 13 06:06:48.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:48.035000 audit: BPF prog-id=7 op=LOAD Feb 13 06:06:48.035000 audit: BPF prog-id=8 op=LOAD Feb 13 06:06:48.036358 systemd[1]: Starting systemd-udevd.service... Feb 13 06:06:48.044365 systemd-udevd[475]: Using default interface naming scheme 'v252'. Feb 13 06:06:48.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:48.051524 systemd[1]: Started systemd-udevd.service. 
Feb 13 06:06:48.092394 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Feb 13 06:06:48.067975 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 06:06:48.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:48.097558 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 06:06:48.109497 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 06:06:48.158513 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 06:06:48.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:48.194289 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 06:06:48.194333 kernel: libata version 3.00 loaded. Feb 13 06:06:48.229502 kernel: ACPI: bus type USB registered Feb 13 06:06:48.229536 kernel: usbcore: registered new interface driver usbfs Feb 13 06:06:48.229544 kernel: usbcore: registered new interface driver hub Feb 13 06:06:48.247230 kernel: usbcore: registered new device driver usb Feb 13 06:06:48.290327 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 06:06:48.290361 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 06:06:48.328338 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 06:06:48.329317 kernel: AES CTR mode by8 optimization enabled Feb 13 06:06:48.329332 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 06:06:48.382721 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 06:06:48.382736 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 06:06:48.382802 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Feb 13 06:06:48.382813 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 06:06:48.455515 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 06:06:48.455597 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 06:06:48.455661 kernel: pps pps0: new PPS source ptp0 Feb 13 06:06:48.455731 kernel: scsi host0: ahci Feb 13 06:06:48.455801 kernel: scsi host1: ahci Feb 13 06:06:48.455865 kernel: scsi host2: ahci Feb 13 06:06:48.455926 kernel: scsi host3: ahci Feb 13 06:06:48.455988 kernel: scsi host4: ahci Feb 13 06:06:48.456047 kernel: scsi host5: ahci Feb 13 06:06:48.456109 kernel: scsi host6: ahci Feb 13 06:06:48.456169 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Feb 13 06:06:48.456178 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Feb 13 06:06:48.456187 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Feb 13 06:06:48.456195 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Feb 13 06:06:48.456203 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Feb 13 06:06:48.456211 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Feb 13 06:06:48.456218 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Feb 13 06:06:48.482880 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 06:06:48.482955 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 06:06:48.495631 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 06:06:48.495740 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 06:06:48.508115 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 06:06:48.520045 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b6 Feb 13 06:06:48.531622 kernel: igb 0000:03:00.0: 
eth0: PBA No: 010000-000 Feb 13 06:06:48.531810 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 06:06:48.542686 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 06:06:48.569329 kernel: hub 1-0:1.0: USB hub found Feb 13 06:06:48.605353 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 06:06:48.605430 kernel: hub 1-0:1.0: 16 ports detected Feb 13 06:06:48.605489 kernel: pps pps1: new PPS source ptp2 Feb 13 06:06:48.605543 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 06:06:48.605598 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 06:06:48.605649 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b7 Feb 13 06:06:48.605698 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 06:06:48.605747 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 06:06:48.651822 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 06:06:48.651897 kernel: hub 2-0:1.0: USB hub found Feb 13 06:06:48.773280 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 06:06:48.773299 kernel: hub 2-0:1.0: 10 ports detected Feb 13 06:06:48.785285 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 06:06:48.813930 kernel: usb: port power management may be unreliable Feb 13 06:06:48.814330 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 06:06:48.854329 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 06:06:48.854404 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 06:06:48.883342 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 06:06:48.883415 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 06:06:48.914637 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 
06:06:48.914707 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 06:06:48.997364 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 06:06:48.997430 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 06:06:49.119306 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 06:06:49.135338 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 06:06:49.152285 kernel: hub 1-14:1.0: USB hub found Feb 13 06:06:49.181433 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 06:06:49.181457 kernel: hub 1-14:1.0: 4 ports detected Feb 13 06:06:49.181625 kernel: ata1.00: Features: NCQ-prio Feb 13 06:06:49.181641 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 06:06:49.209312 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 06:06:49.209387 kernel: ata2.00: Features: NCQ-prio Feb 13 06:06:49.225279 kernel: ata1.00: configured for UDMA/133 Feb 13 06:06:49.244280 kernel: port_module: 9 callbacks suppressed Feb 13 06:06:49.244296 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 06:06:49.244364 kernel: ata2.00: configured for UDMA/133 Feb 13 06:06:49.245313 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 06:06:49.273341 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 06:06:49.273412 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 06:06:49.396320 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 06:06:49.417413 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 06:06:49.417454 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 06:06:49.417555 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:06:49.417563 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 06:06:49.417682 
kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 06:06:49.417775 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 06:06:49.417832 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 06:06:49.417889 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 06:06:49.417956 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 06:06:49.418046 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:06:49.420345 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 06:06:49.420379 kernel: GPT:9289727 != 937703087 Feb 13 06:06:49.420403 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 06:06:49.420410 kernel: GPT:9289727 != 937703087 Feb 13 06:06:49.420415 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 06:06:49.420423 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 06:06:49.420430 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:06:49.420435 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 06:06:49.479348 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 06:06:49.479437 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 06:06:49.502355 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 06:06:49.502432 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 06:06:49.659833 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 06:06:49.659854 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 06:06:49.659929 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 06:06:49.792038 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 06:06:49.807010 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 06:06:49.807025 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 06:06:49.840283 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 13 
06:06:49.849596 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 13 06:06:49.888814 kernel: usbcore: registered new interface driver usbhid Feb 13 06:06:49.888825 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (532) Feb 13 06:06:49.888832 kernel: usbhid: USB HID core driver Feb 13 06:06:49.921389 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 13 06:06:49.957341 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 06:06:49.957355 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 13 06:06:49.945686 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 13 06:06:49.970377 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 06:06:50.078747 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 06:06:50.078838 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 06:06:50.078847 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 06:06:50.046352 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 06:06:50.100401 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:06:50.089168 systemd[1]: Starting disk-uuid.service... Feb 13 06:06:50.141404 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 06:06:50.141414 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:06:50.141455 disk-uuid[688]: Primary Header is updated. Feb 13 06:06:50.141455 disk-uuid[688]: Secondary Entries is updated. Feb 13 06:06:50.141455 disk-uuid[688]: Secondary Header is updated. 
Feb 13 06:06:50.201311 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 06:06:50.201322 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:06:50.201329 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 06:06:51.187325 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:06:51.207124 disk-uuid[689]: The operation has completed successfully. Feb 13 06:06:51.215405 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 06:06:51.245822 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 06:06:51.341781 kernel: audit: type=1130 audit(1707804411.253:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.341796 kernel: audit: type=1131 audit(1707804411.253:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.245881 systemd[1]: Finished disk-uuid.service. Feb 13 06:06:51.371370 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 06:06:51.253988 systemd[1]: Starting verity-setup.service... Feb 13 06:06:51.403741 systemd[1]: Found device dev-mapper-usr.device. Feb 13 06:06:51.413290 systemd[1]: Mounting sysusr-usr.mount... Feb 13 06:06:51.419590 systemd[1]: Finished verity-setup.service. 
Feb 13 06:06:51.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.487281 kernel: audit: type=1130 audit(1707804411.437:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.543179 systemd[1]: Mounted sysusr-usr.mount. Feb 13 06:06:51.558394 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 13 06:06:51.551547 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 13 06:06:51.642687 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 13 06:06:51.642702 kernel: BTRFS info (device sdb6): using free space tree Feb 13 06:06:51.642709 kernel: BTRFS info (device sdb6): has skinny extents Feb 13 06:06:51.642716 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 13 06:06:51.551941 systemd[1]: Starting ignition-setup.service... Feb 13 06:06:51.574695 systemd[1]: Starting parse-ip-for-networkd.service... Feb 13 06:06:51.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.650822 systemd[1]: Finished ignition-setup.service. Feb 13 06:06:51.774968 kernel: audit: type=1130 audit(1707804411.667:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.774983 kernel: audit: type=1130 audit(1707804411.724:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 13 06:06:51.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.667711 systemd[1]: Finished parse-ip-for-networkd.service. Feb 13 06:06:51.806638 kernel: audit: type=1334 audit(1707804411.782:24): prog-id=9 op=LOAD Feb 13 06:06:51.782000 audit: BPF prog-id=9 op=LOAD Feb 13 06:06:51.725924 systemd[1]: Starting ignition-fetch-offline.service... Feb 13 06:06:51.784152 systemd[1]: Starting systemd-networkd.service... Feb 13 06:06:51.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.874352 kernel: audit: type=1130 audit(1707804411.820:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.872496 ignition[867]: Ignition 2.14.0 Feb 13 06:06:51.820512 systemd-networkd[871]: lo: Link UP Feb 13 06:06:51.872501 ignition[867]: Stage: fetch-offline Feb 13 06:06:51.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.820514 systemd-networkd[871]: lo: Gained carrier Feb 13 06:06:52.048090 kernel: audit: type=1130 audit(1707804411.909:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:52.048106 kernel: audit: type=1130 audit(1707804411.973:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:06:52.048114 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 06:06:51.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.872527 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 06:06:52.073129 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Feb 13 06:06:51.820788 systemd-networkd[871]: Enumeration completed Feb 13 06:06:51.872540 ignition[867]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 06:06:51.820833 systemd[1]: Started systemd-networkd.service. Feb 13 06:06:51.875343 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 06:06:52.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.821451 systemd-networkd[871]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 06:06:52.135316 iscsid[899]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 06:06:52.135316 iscsid[899]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 06:06:52.135316 iscsid[899]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 13 06:06:52.135316 iscsid[899]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Feb 13 06:06:52.135316 iscsid[899]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 13 06:06:52.135316 iscsid[899]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 06:06:52.135316 iscsid[899]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 06:06:52.290480 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 06:06:52.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:52.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:51.875421 ignition[867]: parsed url from cmdline: "" Feb 13 06:06:51.821486 systemd[1]: Reached target network.target. Feb 13 06:06:51.875422 ignition[867]: no config URL provided Feb 13 06:06:51.882907 systemd[1]: Starting iscsiuio.service... Feb 13 06:06:51.875425 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 06:06:51.895519 systemd[1]: Started iscsiuio.service. Feb 13 06:06:51.880785 ignition[867]: parsing config with SHA512: 1df444b15a9a3e448b32b7b30a1dc49b7fca2e19be9fc494631d9a8ce16a6ac735906da35b82e036d71fa74a1e9ba7c4d73a2a34b4a50c487ad7d15ac3da00ea Feb 13 06:06:51.901058 unknown[867]: fetched base config from "system" Feb 13 06:06:51.901430 ignition[867]: fetch-offline: fetch-offline passed Feb 13 06:06:51.901062 unknown[867]: fetched user config from "system" Feb 13 06:06:51.901433 ignition[867]: POST message to Packet Timeline Feb 13 06:06:51.911141 systemd[1]: Finished ignition-fetch-offline.service. 
Feb 13 06:06:51.901437 ignition[867]: POST Status error: resource requires networking Feb 13 06:06:51.995126 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 06:06:51.901466 ignition[867]: Ignition finished successfully Feb 13 06:06:51.995574 systemd[1]: Starting ignition-kargs.service... Feb 13 06:06:52.052166 ignition[889]: Ignition 2.14.0 Feb 13 06:06:52.049650 systemd-networkd[871]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 06:06:52.052170 ignition[889]: Stage: kargs Feb 13 06:06:52.062857 systemd[1]: Starting iscsid.service... Feb 13 06:06:52.052225 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 06:06:52.092561 systemd[1]: Started iscsid.service. Feb 13 06:06:52.052235 ignition[889]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 06:06:52.106864 systemd[1]: Starting dracut-initqueue.service... Feb 13 06:06:52.053554 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 06:06:52.122540 systemd[1]: Finished dracut-initqueue.service. Feb 13 06:06:52.055373 ignition[889]: kargs: kargs passed Feb 13 06:06:52.143660 systemd[1]: Reached target remote-fs-pre.target. Feb 13 06:06:52.055376 ignition[889]: POST message to Packet Timeline Feb 13 06:06:52.154567 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 06:06:52.055386 ignition[889]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 06:06:52.187596 systemd[1]: Reached target remote-fs.target. Feb 13 06:06:52.058912 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53884->[::1]:53: read: connection refused Feb 13 06:06:52.215162 systemd[1]: Starting dracut-pre-mount.service... 
Feb 13 06:06:52.259319 ignition[889]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 06:06:52.229544 systemd[1]: Finished dracut-pre-mount.service. Feb 13 06:06:52.259727 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36220->[::1]:53: read: connection refused Feb 13 06:06:52.259377 systemd-networkd[871]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 06:06:52.287804 systemd-networkd[871]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 06:06:52.316956 systemd-networkd[871]: enp1s0f1np1: Link UP Feb 13 06:06:52.317139 systemd-networkd[871]: enp1s0f1np1: Gained carrier Feb 13 06:06:52.326772 systemd-networkd[871]: enp1s0f0np0: Link UP Feb 13 06:06:52.327139 systemd-networkd[871]: eno2: Link UP Feb 13 06:06:52.327503 systemd-networkd[871]: eno1: Link UP Feb 13 06:06:52.659984 ignition[889]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 06:06:52.661048 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45955->[::1]:53: read: connection refused Feb 13 06:06:53.121110 systemd-networkd[871]: enp1s0f0np0: Gained carrier Feb 13 06:06:53.130530 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Feb 13 06:06:53.155504 systemd-networkd[871]: enp1s0f0np0: DHCPv4 address 145.40.90.207/31, gateway 145.40.90.206 acquired from 145.40.83.140 Feb 13 06:06:53.461620 ignition[889]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 06:06:53.462939 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53624->[::1]:53: read: connection refused Feb 13 06:06:53.865873 systemd-networkd[871]: enp1s0f1np1: Gained IPv6LL Feb 13 06:06:54.441862 systemd-networkd[871]: enp1s0f0np0: Gained IPv6LL Feb 13 06:06:55.063368 ignition[889]: GET 
https://metadata.packet.net/metadata: attempt #5 Feb 13 06:06:55.064717 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48445->[::1]:53: read: connection refused Feb 13 06:06:58.268339 ignition[889]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 06:06:58.307038 ignition[889]: GET result: OK Feb 13 06:06:58.519606 ignition[889]: Ignition finished successfully Feb 13 06:06:58.523992 systemd[1]: Finished ignition-kargs.service. Feb 13 06:06:58.611343 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 13 06:06:58.611361 kernel: audit: type=1130 audit(1707804418.533:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:58.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:58.543843 ignition[918]: Ignition 2.14.0 Feb 13 06:06:58.536604 systemd[1]: Starting ignition-disks.service... 
Feb 13 06:06:58.543846 ignition[918]: Stage: disks Feb 13 06:06:58.543899 ignition[918]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 06:06:58.543907 ignition[918]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 06:06:58.545237 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 06:06:58.546836 ignition[918]: disks: disks passed Feb 13 06:06:58.546839 ignition[918]: POST message to Packet Timeline Feb 13 06:06:58.546849 ignition[918]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 06:06:58.570384 ignition[918]: GET result: OK Feb 13 06:06:58.773637 ignition[918]: Ignition finished successfully Feb 13 06:06:58.776518 systemd[1]: Finished ignition-disks.service. Feb 13 06:06:58.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:58.788905 systemd[1]: Reached target initrd-root-device.target. Feb 13 06:06:58.876461 kernel: audit: type=1130 audit(1707804418.787:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:58.862481 systemd[1]: Reached target local-fs-pre.target. Feb 13 06:06:58.862597 systemd[1]: Reached target local-fs.target. Feb 13 06:06:58.885459 systemd[1]: Reached target sysinit.target. Feb 13 06:06:58.885568 systemd[1]: Reached target basic.target. Feb 13 06:06:58.908243 systemd[1]: Starting systemd-fsck-root.service... Feb 13 06:06:58.929536 systemd-fsck[936]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 13 06:06:58.952628 systemd[1]: Finished systemd-fsck-root.service. 
Feb 13 06:06:59.045326 kernel: audit: type=1130 audit(1707804418.961:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:59.045355 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 06:06:58.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:06:58.962970 systemd[1]: Mounting sysroot.mount... Feb 13 06:06:59.053968 systemd[1]: Mounted sysroot.mount. Feb 13 06:06:59.069612 systemd[1]: Reached target initrd-root-fs.target. Feb 13 06:06:59.078218 systemd[1]: Mounting sysroot-usr.mount... Feb 13 06:06:59.092177 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 13 06:06:59.112132 systemd[1]: Starting flatcar-static-network.service... Feb 13 06:06:59.126502 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 06:06:59.126633 systemd[1]: Reached target ignition-diskful.target. Feb 13 06:06:59.145486 systemd[1]: Mounted sysroot-usr.mount. Feb 13 06:06:59.168348 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 13 06:06:59.313267 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (947)
Feb 13 06:06:59.313289 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 06:06:59.313301 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 06:06:59.313314 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 06:06:59.313322 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 06:06:59.313384 coreos-metadata[943]: Feb 13 06:06:59.256 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 06:06:59.313384 coreos-metadata[943]: Feb 13 06:06:59.288 INFO Fetch successful
Feb 13 06:06:59.313384 coreos-metadata[943]: Feb 13 06:06:59.305 INFO wrote hostname ci-3510.3.2-a-25d9a0518b to /sysroot/etc/hostname
Feb 13 06:06:59.559065 kernel: audit: type=1130 audit(1707804419.321:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.559079 kernel: audit: type=1130 audit(1707804419.383:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.559087 kernel: audit: type=1130 audit(1707804419.445:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.559094 kernel: audit: type=1131 audit(1707804419.445:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.559166 coreos-metadata[944]: Feb 13 06:06:59.256 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 06:06:59.559166 coreos-metadata[944]: Feb 13 06:06:59.279 INFO Fetch successful
Feb 13 06:06:59.181419 systemd[1]: Starting initrd-setup-root.service...
Feb 13 06:06:59.246992 systemd[1]: Finished initrd-setup-root.service.
Feb 13 06:06:59.619324 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 06:06:59.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.322633 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 13 06:06:59.692355 kernel: audit: type=1130 audit(1707804419.627:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.692372 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory
Feb 13 06:06:59.383580 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 13 06:06:59.712480 initrd-setup-root[970]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 06:06:59.722465 ignition[1020]: INFO : Ignition 2.14.0
Feb 13 06:06:59.722465 ignition[1020]: INFO : Stage: mount
Feb 13 06:06:59.722465 ignition[1020]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 06:06:59.722465 ignition[1020]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 06:06:59.722465 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 06:06:59.722465 ignition[1020]: INFO : mount: mount passed
Feb 13 06:06:59.722465 ignition[1020]: INFO : POST message to Packet Timeline
Feb 13 06:06:59.722465 ignition[1020]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 06:06:59.722465 ignition[1020]: INFO : GET result: OK
Feb 13 06:06:59.383619 systemd[1]: Finished flatcar-static-network.service.
Feb 13 06:06:59.821615 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 06:06:59.446537 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 06:06:59.567848 systemd[1]: Starting ignition-mount.service...
Feb 13 06:06:59.583356 systemd[1]: Starting sysroot-boot.service...
Feb 13 06:06:59.604621 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 13 06:06:59.604663 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 13 06:06:59.612612 systemd[1]: Finished sysroot-boot.service.
Feb 13 06:06:59.881565 ignition[1020]: INFO : Ignition finished successfully
Feb 13 06:06:59.882773 systemd[1]: Finished ignition-mount.service.
Feb 13 06:06:59.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.900680 systemd[1]: Starting ignition-files.service...
Feb 13 06:06:59.983372 kernel: audit: type=1130 audit(1707804419.897:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:06:59.977307 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 13 06:07:00.028378 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1037)
Feb 13 06:07:00.028388 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 06:07:00.062825 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 06:07:00.062840 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 06:07:00.111356 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 06:07:00.112755 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 06:07:00.131404 ignition[1056]: INFO : Ignition 2.14.0
Feb 13 06:07:00.131404 ignition[1056]: INFO : Stage: files
Feb 13 06:07:00.131404 ignition[1056]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 06:07:00.131404 ignition[1056]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 06:07:00.131404 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 06:07:00.131404 ignition[1056]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 06:07:00.131404 ignition[1056]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 06:07:00.131404 ignition[1056]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 06:07:00.133476 unknown[1056]: wrote ssh authorized keys file for user: core
Feb 13 06:07:00.233601 ignition[1056]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 06:07:00.233601 ignition[1056]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 06:07:00.233601 ignition[1056]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 06:07:00.233601 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 06:07:00.233601 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 06:07:00.457598 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 06:07:00.525790 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 06:07:00.542593 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 13 06:07:00.542593 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 13 06:07:01.008853 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 06:07:01.104924 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 13 06:07:01.104924 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 13 06:07:01.148501 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 13 06:07:01.148501 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 13 06:07:01.533900 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 06:07:01.585271 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 13 06:07:01.610529 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 13 06:07:01.610529 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 06:07:01.610529 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 13 06:07:01.763138 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 06:07:01.946904 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 13 06:07:01.972538 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 06:07:01.972538 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 13 06:07:01.972538 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 13 06:07:02.021363 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 13 06:07:02.418562 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 13 06:07:02.418562 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 13 06:07:02.418562 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 13 06:07:02.474398 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb 13 06:07:02.474398 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 13 06:07:02.753669 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb 13 06:07:02.753669 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 13 06:07:02.753669 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 06:07:02.813511 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 06:07:02.813511 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 06:07:02.813511 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 06:07:03.157045 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 06:07:03.243565 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 13 06:07:03.268409 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1428349686"
Feb 13 06:07:03.268409 ignition[1056]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1428349686": device or resource busy
Feb 13 06:07:03.268409 ignition[1056]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1428349686", trying btrfs: device or resource busy
Feb 13 06:07:03.538588 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1059)
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1428349686"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1428349686"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1428349686"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1428349686"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(15): [started] processing unit "packet-phone-home.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(15): [finished] processing unit "packet-phone-home.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(18): [started] processing unit "prepare-helm.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(18): [finished] processing unit "prepare-helm.service"
Feb 13 06:07:03.538688 ignition[1056]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service"
Feb 13 06:07:04.178584 kernel: audit: type=1130 audit(1707804423.672:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.178683 kernel: audit: type=1130 audit(1707804423.791:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.178731 kernel: audit: type=1130 audit(1707804423.859:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.178772 kernel: audit: type=1131 audit(1707804423.859:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.178811 kernel: audit: type=1130 audit(1707804424.016:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.178850 kernel: audit: type=1131 audit(1707804424.016:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1c): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1c): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1d): [started] setting preset to enabled for "packet-phone-home.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1d): [finished] setting preset to enabled for "packet-phone-home.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(20): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 06:07:04.179497 ignition[1056]: INFO : files: files passed
Feb 13 06:07:04.179497 ignition[1056]: INFO : POST message to Packet Timeline
Feb 13 06:07:04.179497 ignition[1056]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 06:07:04.179497 ignition[1056]: INFO : GET result: OK
Feb 13 06:07:04.179497 ignition[1056]: INFO : Ignition finished successfully
Feb 13 06:07:04.742524 kernel: audit: type=1130 audit(1707804424.185:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.742546 kernel: audit: type=1131 audit(1707804424.355:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.742554 kernel: audit: type=1131 audit(1707804424.653:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.658725 systemd[1]: Finished ignition-files.service.
Feb 13 06:07:04.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.681754 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 13 06:07:04.830486 kernel: audit: type=1131 audit(1707804424.750:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.830519 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 06:07:03.748497 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 13 06:07:03.748822 systemd[1]: Starting ignition-quench.service...
Feb 13 06:07:03.759653 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 13 06:07:03.792677 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 06:07:04.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.792743 systemd[1]: Finished ignition-quench.service.
Feb 13 06:07:04.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.860527 systemd[1]: Reached target ignition-complete.target.
Feb 13 06:07:04.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.982875 systemd[1]: Starting initrd-parse-etc.service...
Feb 13 06:07:04.975461 iscsid[899]: iscsid shutting down.
Feb 13 06:07:03.999077 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 06:07:05.004466 ignition[1105]: INFO : Ignition 2.14.0
Feb 13 06:07:05.004466 ignition[1105]: INFO : Stage: umount
Feb 13 06:07:05.004466 ignition[1105]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 06:07:05.004466 ignition[1105]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 06:07:05.004466 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 06:07:05.004466 ignition[1105]: INFO : umount: umount passed
Feb 13 06:07:05.004466 ignition[1105]: INFO : POST message to Packet Timeline
Feb 13 06:07:05.004466 ignition[1105]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 06:07:05.004466 ignition[1105]: INFO : GET result: OK
Feb 13 06:07:05.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:05.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:05.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:05.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:05.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:03.999123 systemd[1]: Finished initrd-parse-etc.service.
Feb 13 06:07:05.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:05.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.017668 systemd[1]: Reached target initrd-fs.target.
Feb 13 06:07:05.174602 ignition[1105]: INFO : Ignition finished successfully
Feb 13 06:07:04.141465 systemd[1]: Reached target initrd.target.
Feb 13 06:07:04.141580 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 13 06:07:04.142015 systemd[1]: Starting dracut-pre-pivot.service...
Feb 13 06:07:05.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.166046 systemd[1]: Finished dracut-pre-pivot.service.
Feb 13 06:07:05.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.187009 systemd[1]: Starting initrd-cleanup.service...
Feb 13 06:07:05.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:05.260000 audit: BPF prog-id=6 op=UNLOAD
Feb 13 06:07:04.255424 systemd[1]: Stopped target nss-lookup.target.
Feb 13 06:07:04.290654 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 13 06:07:05.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.305743 systemd[1]: Stopped target timers.target.
Feb 13 06:07:05.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.334816 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 06:07:05.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.335072 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 13 06:07:05.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.357232 systemd[1]: Stopped target initrd.target.
Feb 13 06:07:04.432567 systemd[1]: Stopped target basic.target.
Feb 13 06:07:05.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.445728 systemd[1]: Stopped target ignition-complete.target.
Feb 13 06:07:05.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.464696 systemd[1]: Stopped target ignition-diskful.target.
Feb 13 06:07:05.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:07:04.492665 systemd[1]: Stopped target initrd-root-device.target.
Feb 13 06:07:04.513848 systemd[1]: Stopped target remote-fs.target.
Feb 13 06:07:05.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:04.535946 systemd[1]: Stopped target remote-fs-pre.target. Feb 13 06:07:04.557987 systemd[1]: Stopped target sysinit.target. Feb 13 06:07:04.581961 systemd[1]: Stopped target local-fs.target. Feb 13 06:07:04.605949 systemd[1]: Stopped target local-fs-pre.target. Feb 13 06:07:05.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:04.621968 systemd[1]: Stopped target swap.target. Feb 13 06:07:05.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:04.636876 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 06:07:05.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:04.637237 systemd[1]: Stopped dracut-pre-mount.service. Feb 13 06:07:04.655194 systemd[1]: Stopped target cryptsetup.target. Feb 13 06:07:05.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:04.733630 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 06:07:05.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:07:04.733713 systemd[1]: Stopped dracut-initqueue.service. Feb 13 06:07:05.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:04.751701 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 06:07:05.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:05.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:04.751773 systemd[1]: Stopped ignition-fetch-offline.service. Feb 13 06:07:04.820686 systemd[1]: Stopped target paths.target. Feb 13 06:07:04.837510 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 06:07:04.842467 systemd[1]: Stopped systemd-ask-password-console.path. Feb 13 06:07:04.860652 systemd[1]: Stopped target slices.target. Feb 13 06:07:04.884665 systemd[1]: Stopped target sockets.target. Feb 13 06:07:04.900765 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 06:07:04.900973 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 13 06:07:04.919069 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 06:07:04.919451 systemd[1]: Stopped ignition-files.service. Feb 13 06:07:04.936050 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 06:07:04.936455 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 13 06:07:05.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 13 06:07:04.954306 systemd[1]: Stopping ignition-mount.service... Feb 13 06:07:04.967666 systemd[1]: Stopping iscsid.service... Feb 13 06:07:04.983034 systemd[1]: Stopping sysroot-boot.service... Feb 13 06:07:04.996454 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 06:07:04.996598 systemd[1]: Stopped systemd-udev-trigger.service. Feb 13 06:07:05.012700 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 06:07:05.012865 systemd[1]: Stopped dracut-pre-trigger.service. Feb 13 06:07:05.033494 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 06:07:05.033865 systemd[1]: iscsid.service: Deactivated successfully. Feb 13 06:07:05.033909 systemd[1]: Stopped iscsid.service. Feb 13 06:07:05.072889 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 06:07:05.072968 systemd[1]: Stopped sysroot-boot.service. Feb 13 06:07:05.091206 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 06:07:05.091369 systemd[1]: Closed iscsid.socket. Feb 13 06:07:05.105741 systemd[1]: Stopping iscsiuio.service... Feb 13 06:07:05.120929 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 06:07:05.121151 systemd[1]: Stopped iscsiuio.service. Feb 13 06:07:05.138031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 06:07:05.138239 systemd[1]: Finished initrd-cleanup.service. Feb 13 06:07:05.154729 systemd[1]: Stopped target network.target. Feb 13 06:07:05.167625 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 06:07:05.167728 systemd[1]: Closed iscsiuio.socket. Feb 13 06:07:05.181927 systemd[1]: Stopping systemd-networkd.service... Feb 13 06:07:05.191427 systemd-networkd[871]: enp1s0f1np1: DHCPv6 lease lost Feb 13 06:07:05.196840 systemd[1]: Stopping systemd-resolved.service... Feb 13 06:07:05.206439 systemd-networkd[871]: enp1s0f0np0: DHCPv6 lease lost Feb 13 06:07:05.211213 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 13 06:07:05.780000 audit: BPF prog-id=9 op=UNLOAD Feb 13 06:07:05.211482 systemd[1]: Stopped systemd-resolved.service. Feb 13 06:07:05.231959 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 06:07:05.232338 systemd[1]: Stopped systemd-networkd.service. Feb 13 06:07:05.247147 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 06:07:05.247425 systemd[1]: Stopped ignition-mount.service. Feb 13 06:07:05.261956 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 06:07:05.262042 systemd[1]: Closed systemd-networkd.socket. Feb 13 06:07:05.276523 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 06:07:05.276647 systemd[1]: Stopped ignition-disks.service. Feb 13 06:07:05.291601 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 06:07:05.291727 systemd[1]: Stopped ignition-kargs.service. Feb 13 06:07:05.307695 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 06:07:05.307845 systemd[1]: Stopped ignition-setup.service. Feb 13 06:07:05.322677 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 06:07:05.322816 systemd[1]: Stopped initrd-setup-root.service. Feb 13 06:07:05.341370 systemd[1]: Stopping network-cleanup.service... Feb 13 06:07:05.353486 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 06:07:05.353727 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 13 06:07:05.368808 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 06:07:05.368957 systemd[1]: Stopped systemd-sysctl.service. Feb 13 06:07:05.782285 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Feb 13 06:07:05.383927 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 06:07:05.384069 systemd[1]: Stopped systemd-modules-load.service. Feb 13 06:07:05.400977 systemd[1]: Stopping systemd-udevd.service... 
Feb 13 06:07:05.419538 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 06:07:05.421123 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 06:07:05.421455 systemd[1]: Stopped systemd-udevd.service. Feb 13 06:07:05.434500 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 06:07:05.434630 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 06:07:05.447598 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 06:07:05.447699 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 06:07:05.463521 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 06:07:05.463648 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 06:07:05.483470 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 06:07:05.483500 systemd[1]: Stopped dracut-cmdline.service. Feb 13 06:07:05.498529 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 06:07:05.498583 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 06:07:05.514415 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 06:07:05.529360 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 06:07:05.529392 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 13 06:07:05.543443 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 06:07:05.543476 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 06:07:05.558402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 06:07:05.558446 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 06:07:05.575931 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 06:07:05.576704 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 06:07:05.576824 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Feb 13 06:07:05.681546 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 06:07:05.681779 systemd[1]: Stopped network-cleanup.service. Feb 13 06:07:05.700922 systemd[1]: Reached target initrd-switch-root.target. Feb 13 06:07:05.718382 systemd[1]: Starting initrd-switch-root.service... Feb 13 06:07:05.739350 systemd[1]: Switching root. Feb 13 06:07:05.782917 systemd-journald[267]: Journal stopped Feb 13 06:07:09.703976 kernel: SELinux: Class mctp_socket not defined in policy. Feb 13 06:07:09.703989 kernel: SELinux: Class anon_inode not defined in policy. Feb 13 06:07:09.703997 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 13 06:07:09.704003 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 06:07:09.704008 kernel: SELinux: policy capability open_perms=1 Feb 13 06:07:09.704013 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 06:07:09.704019 kernel: SELinux: policy capability always_check_network=0 Feb 13 06:07:09.704024 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 06:07:09.704029 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 06:07:09.704035 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 06:07:09.704040 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 06:07:09.704046 systemd[1]: Successfully loaded SELinux policy in 326.905ms. Feb 13 06:07:09.704053 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.814ms. Feb 13 06:07:09.704059 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 06:07:09.704067 systemd[1]: Detected architecture x86-64. Feb 13 06:07:09.704073 systemd[1]: Detected first boot. 
Feb 13 06:07:09.704079 systemd[1]: Hostname set to . Feb 13 06:07:09.704085 systemd[1]: Initializing machine ID from random generator. Feb 13 06:07:09.704091 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 13 06:07:09.704096 systemd[1]: Populated /etc with preset unit settings. Feb 13 06:07:09.704102 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 06:07:09.704109 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 06:07:09.704116 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 06:07:09.704122 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 06:07:09.704128 systemd[1]: Stopped initrd-switch-root.service. Feb 13 06:07:09.704135 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 06:07:09.704142 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 13 06:07:09.704149 systemd[1]: Created slice system-addon\x2drun.slice. Feb 13 06:07:09.704155 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 13 06:07:09.704161 systemd[1]: Created slice system-getty.slice. Feb 13 06:07:09.704167 systemd[1]: Created slice system-modprobe.slice. Feb 13 06:07:09.704173 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 13 06:07:09.704180 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 13 06:07:09.704186 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 13 06:07:09.704192 systemd[1]: Created slice user.slice. Feb 13 06:07:09.704198 systemd[1]: Started systemd-ask-password-console.path. 
Feb 13 06:07:09.704204 systemd[1]: Started systemd-ask-password-wall.path. Feb 13 06:07:09.704211 systemd[1]: Set up automount boot.automount. Feb 13 06:07:09.704217 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 13 06:07:09.704223 systemd[1]: Stopped target initrd-switch-root.target. Feb 13 06:07:09.704231 systemd[1]: Stopped target initrd-fs.target. Feb 13 06:07:09.704237 systemd[1]: Stopped target initrd-root-fs.target. Feb 13 06:07:09.704244 systemd[1]: Reached target integritysetup.target. Feb 13 06:07:09.704250 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 06:07:09.704257 systemd[1]: Reached target remote-fs.target. Feb 13 06:07:09.704263 systemd[1]: Reached target slices.target. Feb 13 06:07:09.704270 systemd[1]: Reached target swap.target. Feb 13 06:07:09.704278 systemd[1]: Reached target torcx.target. Feb 13 06:07:09.704285 systemd[1]: Reached target veritysetup.target. Feb 13 06:07:09.704310 systemd[1]: Listening on systemd-coredump.socket. Feb 13 06:07:09.704334 systemd[1]: Listening on systemd-initctl.socket. Feb 13 06:07:09.704340 systemd[1]: Listening on systemd-networkd.socket. Feb 13 06:07:09.704363 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 06:07:09.704384 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 06:07:09.704391 systemd[1]: Listening on systemd-userdbd.socket. Feb 13 06:07:09.704397 systemd[1]: Mounting dev-hugepages.mount... Feb 13 06:07:09.704403 systemd[1]: Mounting dev-mqueue.mount... Feb 13 06:07:09.704410 systemd[1]: Mounting media.mount... Feb 13 06:07:09.704417 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 06:07:09.704424 systemd[1]: Mounting sys-kernel-debug.mount... Feb 13 06:07:09.704430 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 13 06:07:09.704436 systemd[1]: Mounting tmp.mount... Feb 13 06:07:09.704443 systemd[1]: Starting flatcar-tmpfiles.service... 
Feb 13 06:07:09.704449 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 13 06:07:09.704455 systemd[1]: Starting kmod-static-nodes.service... Feb 13 06:07:09.704462 systemd[1]: Starting modprobe@configfs.service... Feb 13 06:07:09.704468 systemd[1]: Starting modprobe@dm_mod.service... Feb 13 06:07:09.704475 systemd[1]: Starting modprobe@drm.service... Feb 13 06:07:09.704482 systemd[1]: Starting modprobe@efi_pstore.service... Feb 13 06:07:09.704489 systemd[1]: Starting modprobe@fuse.service... Feb 13 06:07:09.704495 kernel: fuse: init (API version 7.34) Feb 13 06:07:09.704501 systemd[1]: Starting modprobe@loop.service... Feb 13 06:07:09.704507 kernel: loop: module loaded Feb 13 06:07:09.704513 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 06:07:09.704520 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 06:07:09.704527 systemd[1]: Stopped systemd-fsck-root.service. Feb 13 06:07:09.704534 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 06:07:09.704540 kernel: kauditd_printk_skb: 72 callbacks suppressed Feb 13 06:07:09.704546 kernel: audit: type=1131 audit(1707804429.345:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.704552 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 06:07:09.704558 kernel: audit: type=1131 audit(1707804429.432:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.704564 systemd[1]: Stopped systemd-journald.service. 
Feb 13 06:07:09.704571 kernel: audit: type=1130 audit(1707804429.497:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.704578 kernel: audit: type=1131 audit(1707804429.497:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.704584 kernel: audit: type=1334 audit(1707804429.582:119): prog-id=21 op=LOAD Feb 13 06:07:09.704589 kernel: audit: type=1334 audit(1707804429.600:120): prog-id=22 op=LOAD Feb 13 06:07:09.704595 kernel: audit: type=1334 audit(1707804429.618:121): prog-id=23 op=LOAD Feb 13 06:07:09.704601 kernel: audit: type=1334 audit(1707804429.636:122): prog-id=19 op=UNLOAD Feb 13 06:07:09.704606 systemd[1]: Starting systemd-journald.service... Feb 13 06:07:09.704613 kernel: audit: type=1334 audit(1707804429.636:123): prog-id=20 op=UNLOAD Feb 13 06:07:09.704618 systemd[1]: Starting systemd-modules-load.service... Feb 13 06:07:09.704626 kernel: audit: type=1305 audit(1707804429.700:124): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 06:07:09.704634 systemd-journald[1257]: Journal started Feb 13 06:07:09.704658 systemd-journald[1257]: Runtime Journal (/run/log/journal/b070419e75bc43798f61cda74a328887) is 8.0M, max 640.1M, 632.1M free. 
Feb 13 06:07:06.198000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 06:07:06.468000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 06:07:06.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 06:07:06.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 06:07:06.471000 audit: BPF prog-id=10 op=LOAD Feb 13 06:07:06.471000 audit: BPF prog-id=10 op=UNLOAD Feb 13 06:07:06.471000 audit: BPF prog-id=11 op=LOAD Feb 13 06:07:06.471000 audit: BPF prog-id=11 op=UNLOAD Feb 13 06:07:06.540000 audit[1147]: AVC avc: denied { associate } for pid=1147 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 13 06:07:06.540000 audit[1147]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001258dc a1=c00002ce58 a2=c00002bb00 a3=32 items=0 ppid=1130 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 06:07:06.540000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 06:07:06.566000 audit[1147]: AVC 
avc: denied { associate } for pid=1147 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 13 06:07:06.566000 audit[1147]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001259b5 a2=1ed a3=0 items=2 ppid=1130 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 06:07:06.566000 audit: CWD cwd="/" Feb 13 06:07:06.566000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:06.566000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:06.566000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 06:07:08.086000 audit: BPF prog-id=12 op=LOAD Feb 13 06:07:08.086000 audit: BPF prog-id=3 op=UNLOAD Feb 13 06:07:08.086000 audit: BPF prog-id=13 op=LOAD Feb 13 06:07:08.086000 audit: BPF prog-id=14 op=LOAD Feb 13 06:07:08.086000 audit: BPF prog-id=4 op=UNLOAD Feb 13 06:07:08.086000 audit: BPF prog-id=5 op=UNLOAD Feb 13 06:07:08.086000 audit: BPF prog-id=15 op=LOAD Feb 13 06:07:08.086000 audit: BPF prog-id=12 op=UNLOAD Feb 13 06:07:08.087000 audit: BPF prog-id=16 op=LOAD Feb 13 06:07:08.087000 audit: BPF prog-id=17 op=LOAD Feb 13 06:07:08.087000 audit: BPF prog-id=13 op=UNLOAD Feb 13 06:07:08.087000 audit: BPF prog-id=14 op=UNLOAD Feb 13 
06:07:08.087000 audit: BPF prog-id=18 op=LOAD Feb 13 06:07:08.087000 audit: BPF prog-id=15 op=UNLOAD Feb 13 06:07:08.087000 audit: BPF prog-id=19 op=LOAD Feb 13 06:07:08.087000 audit: BPF prog-id=20 op=LOAD Feb 13 06:07:08.087000 audit: BPF prog-id=16 op=UNLOAD Feb 13 06:07:08.087000 audit: BPF prog-id=17 op=UNLOAD Feb 13 06:07:08.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:08.137000 audit: BPF prog-id=18 op=UNLOAD Feb 13 06:07:08.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:08.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:07:09.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.582000 audit: BPF prog-id=21 op=LOAD Feb 13 06:07:09.600000 audit: BPF prog-id=22 op=LOAD Feb 13 06:07:09.618000 audit: BPF prog-id=23 op=LOAD Feb 13 06:07:09.636000 audit: BPF prog-id=19 op=UNLOAD Feb 13 06:07:09.636000 audit: BPF prog-id=20 op=UNLOAD Feb 13 06:07:09.700000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 06:07:06.538545 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 06:07:08.085894 systemd[1]: Queued start job for default target multi-user.target. Feb 13 06:07:06.539033 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 06:07:08.089432 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 06:07:06.539047 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 06:07:06.539069 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 13 06:07:06.539076 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 13 06:07:06.539098 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 13 06:07:06.539107 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 13 06:07:06.539241 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 13 06:07:06.539270 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 06:07:06.539282 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 06:07:06.539794 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 13 06:07:06.539818 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=debug msg="new archive/reference added to cache" 
format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 13 06:07:06.539832 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 13 06:07:06.539842 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 13 06:07:06.539853 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 13 06:07:06.539862 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 13 06:07:07.740898 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 06:07:07.741038 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 06:07:07.741093 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Feb 13 06:07:07.741185 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 06:07:07.741215 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 13 06:07:07.741248 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T06:07:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 13 06:07:09.700000 audit[1257]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe4fc4ba60 a2=4000 a3=7ffe4fc4bafc items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 06:07:09.700000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 13 06:07:09.782358 systemd[1]: Starting systemd-network-generator.service... Feb 13 06:07:09.809280 systemd[1]: Starting systemd-remount-fs.service... Feb 13 06:07:09.836330 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 06:07:09.879107 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 06:07:09.879130 systemd[1]: Stopped verity-setup.service. 
Feb 13 06:07:09.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.924280 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 06:07:09.944447 systemd[1]: Started systemd-journald.service. Feb 13 06:07:09.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:09.952894 systemd[1]: Mounted dev-hugepages.mount. Feb 13 06:07:09.961560 systemd[1]: Mounted dev-mqueue.mount. Feb 13 06:07:09.968512 systemd[1]: Mounted media.mount. Feb 13 06:07:09.975537 systemd[1]: Mounted sys-kernel-debug.mount. Feb 13 06:07:09.984509 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 13 06:07:09.993514 systemd[1]: Mounted tmp.mount. Feb 13 06:07:10.000604 systemd[1]: Finished flatcar-tmpfiles.service. Feb 13 06:07:10.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.009645 systemd[1]: Finished kmod-static-nodes.service. Feb 13 06:07:10.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.018681 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 06:07:10.018805 systemd[1]: Finished modprobe@configfs.service. 
Feb 13 06:07:10.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.027718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 06:07:10.027876 systemd[1]: Finished modprobe@dm_mod.service. Feb 13 06:07:10.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.036854 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 06:07:10.037046 systemd[1]: Finished modprobe@drm.service. Feb 13 06:07:10.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.046270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 06:07:10.046598 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 13 06:07:10.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.055116 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 06:07:10.055434 systemd[1]: Finished modprobe@fuse.service. Feb 13 06:07:10.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.064082 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 06:07:10.064400 systemd[1]: Finished modprobe@loop.service. Feb 13 06:07:10.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.073105 systemd[1]: Finished systemd-modules-load.service. 
Feb 13 06:07:10.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.082075 systemd[1]: Finished systemd-network-generator.service. Feb 13 06:07:10.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.092063 systemd[1]: Finished systemd-remount-fs.service. Feb 13 06:07:10.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.102048 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 06:07:10.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.112579 systemd[1]: Reached target network-pre.target. Feb 13 06:07:10.125159 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 13 06:07:10.133949 systemd[1]: Mounting sys-kernel-config.mount... Feb 13 06:07:10.142482 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 06:07:10.143473 systemd[1]: Starting systemd-hwdb-update.service... Feb 13 06:07:10.150927 systemd[1]: Starting systemd-journal-flush.service... Feb 13 06:07:10.155117 systemd-journald[1257]: Time spent on flushing to /var/log/journal/b070419e75bc43798f61cda74a328887 is 15.518ms for 1630 entries. 
Feb 13 06:07:10.155117 systemd-journald[1257]: System Journal (/var/log/journal/b070419e75bc43798f61cda74a328887) is 8.0M, max 195.6M, 187.6M free. Feb 13 06:07:10.201800 systemd-journald[1257]: Received client request to flush runtime journal. Feb 13 06:07:10.167388 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 06:07:10.167860 systemd[1]: Starting systemd-random-seed.service... Feb 13 06:07:10.184393 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 13 06:07:10.184903 systemd[1]: Starting systemd-sysctl.service... Feb 13 06:07:10.191846 systemd[1]: Starting systemd-sysusers.service... Feb 13 06:07:10.199875 systemd[1]: Starting systemd-udev-settle.service... Feb 13 06:07:10.208433 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 13 06:07:10.216475 systemd[1]: Mounted sys-kernel-config.mount. Feb 13 06:07:10.224495 systemd[1]: Finished systemd-journal-flush.service. Feb 13 06:07:10.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.233464 systemd[1]: Finished systemd-random-seed.service. Feb 13 06:07:10.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.241486 systemd[1]: Finished systemd-sysctl.service. Feb 13 06:07:10.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.249470 systemd[1]: Finished systemd-sysusers.service. 
Feb 13 06:07:10.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.258469 systemd[1]: Reached target first-boot-complete.target. Feb 13 06:07:10.268015 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 06:07:10.278331 udevadm[1274]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 06:07:10.287601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 06:07:10.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.450589 systemd[1]: Finished systemd-hwdb-update.service. Feb 13 06:07:10.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.459000 audit: BPF prog-id=24 op=LOAD Feb 13 06:07:10.459000 audit: BPF prog-id=25 op=LOAD Feb 13 06:07:10.459000 audit: BPF prog-id=7 op=UNLOAD Feb 13 06:07:10.459000 audit: BPF prog-id=8 op=UNLOAD Feb 13 06:07:10.460671 systemd[1]: Starting systemd-udevd.service... Feb 13 06:07:10.472159 systemd-udevd[1277]: Using default interface naming scheme 'v252'. Feb 13 06:07:10.492096 systemd[1]: Started systemd-udevd.service. Feb 13 06:07:10.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:10.502219 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. 
Feb 13 06:07:10.501000 audit: BPF prog-id=26 op=LOAD Feb 13 06:07:10.503478 systemd[1]: Starting systemd-networkd.service... Feb 13 06:07:10.528289 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 06:07:10.545000 audit: BPF prog-id=27 op=LOAD Feb 13 06:07:10.545000 audit: BPF prog-id=28 op=LOAD Feb 13 06:07:10.545000 audit: BPF prog-id=29 op=LOAD Feb 13 06:07:10.567769 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 06:07:10.567845 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 06:07:10.568203 systemd[1]: Starting systemd-userdbd.service... Feb 13 06:07:10.568287 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 06:07:10.572280 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1340) Feb 13 06:07:10.572310 kernel: ACPI: button: Power Button [PWRF] Feb 13 06:07:10.644231 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 06:07:10.654803 systemd[1]: Started systemd-userdbd.service. Feb 13 06:07:10.536000 audit[1298]: AVC avc: denied { confidentiality } for pid=1298 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 06:07:10.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:07:10.675282 kernel: IPMI message handler: version 39.2 Feb 13 06:07:10.536000 audit[1298]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f8df2e39010 a1=4d8bc a2=7f8df4ad2bc5 a3=5 items=42 ppid=1277 pid=1298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 06:07:10.536000 audit: CWD cwd="/" Feb 13 06:07:10.536000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=1 name=(null) inode=24840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=2 name=(null) inode=24840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=3 name=(null) inode=24841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=4 name=(null) inode=24840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=5 name=(null) inode=24842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=6 name=(null) inode=24840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 
06:07:10.536000 audit: PATH item=7 name=(null) inode=24843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=8 name=(null) inode=24843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=9 name=(null) inode=24844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=10 name=(null) inode=24843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=11 name=(null) inode=24845 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=12 name=(null) inode=24843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=13 name=(null) inode=24846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=14 name=(null) inode=24843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=15 name=(null) inode=24847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=16 
name=(null) inode=24843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=17 name=(null) inode=24848 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=18 name=(null) inode=24840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=19 name=(null) inode=24849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=20 name=(null) inode=24849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=21 name=(null) inode=24850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=22 name=(null) inode=24849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=23 name=(null) inode=24851 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=24 name=(null) inode=24849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=25 name=(null) inode=24852 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=26 name=(null) inode=24849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=27 name=(null) inode=24853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=28 name=(null) inode=24849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=29 name=(null) inode=24854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=30 name=(null) inode=24840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=31 name=(null) inode=24855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=32 name=(null) inode=24855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=33 name=(null) inode=24856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=34 name=(null) inode=24855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=35 name=(null) inode=24857 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=36 name=(null) inode=24855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=37 name=(null) inode=24858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=38 name=(null) inode=24855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=39 name=(null) inode=24859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=40 name=(null) inode=24855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PATH item=41 name=(null) inode=24860 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 06:07:10.536000 audit: PROCTITLE proctitle="(udev-worker)" Feb 13 06:07:10.702320 kernel: ipmi device interface Feb 13 06:07:10.702347 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 06:07:10.722282 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 06:07:10.743283 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 
06:07:10.743392 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 06:07:10.743486 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 06:07:10.846255 kernel: ipmi_si: IPMI System Interface driver Feb 13 06:07:10.846295 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 06:07:10.846390 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 06:07:10.848545 systemd-networkd[1319]: bond0: netdev ready Feb 13 06:07:10.850489 systemd-networkd[1319]: lo: Link UP Feb 13 06:07:10.850492 systemd-networkd[1319]: lo: Gained carrier Feb 13 06:07:10.850952 systemd-networkd[1319]: Enumeration completed Feb 13 06:07:10.851029 systemd[1]: Started systemd-networkd.service. Feb 13 06:07:10.851224 systemd-networkd[1319]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 06:07:10.851951 systemd-networkd[1319]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:79.network. Feb 13 06:07:10.886261 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 06:07:10.886289 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 06:07:10.886388 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 06:07:10.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:07:10.954288 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 06:07:10.996576 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 06:07:10.996695 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 06:07:10.996711 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 06:07:11.039892 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 13 06:07:11.040148 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 13 06:07:11.059279 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 06:07:11.076279 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 06:07:11.112336 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 13 06:07:11.131278 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 13 06:07:11.132442 systemd-networkd[1319]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. 
Feb 13 06:07:11.184300 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 06:07:11.223550 kernel: intel_rapl_common: Found RAPL domain package Feb 13 06:07:11.223583 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 06:07:11.223670 kernel: intel_rapl_common: Found RAPL domain core Feb 13 06:07:11.258024 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 06:07:11.292325 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 06:07:11.292352 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 06:07:11.313343 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 06:07:11.352472 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 13 06:07:11.372342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 06:07:11.381683 systemd-networkd[1319]: bond0: Link UP Feb 13 06:07:11.381885 systemd-networkd[1319]: enp1s0f1np1: Link UP Feb 13 06:07:11.382014 systemd-networkd[1319]: enp1s0f1np1: Gained carrier Feb 13 06:07:11.382969 systemd-networkd[1319]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Feb 13 06:07:11.413281 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.433279 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.433408 systemd[1]: Finished systemd-udev-settle.service. Feb 13 06:07:11.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:11.448998 systemd[1]: Starting lvm2-activation-early.service... Feb 13 06:07:11.454278 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.464008 lvm[1380]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 06:07:11.475284 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.495279 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.514279 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.533280 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.537765 systemd[1]: Finished lvm2-activation-early.service. Feb 13 06:07:11.552279 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.572323 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.591327 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:11.593429 systemd[1]: Reached target cryptsetup.target. Feb 13 06:07:11.610356 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.629279 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.648280 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.657724 systemd[1]: Starting lvm2-activation.service... Feb 13 06:07:11.659456 lvm[1381]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 06:07:11.665289 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.683280 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.701304 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.718448 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.735332 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.736717 systemd[1]: Finished lvm2-activation.service. Feb 13 06:07:11.753311 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:11.753437 systemd[1]: Reached target local-fs-pre.target. Feb 13 06:07:11.770344 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.771575 systemd-networkd[1319]: enp1s0f0np0: Link UP Feb 13 06:07:11.771726 systemd-networkd[1319]: bond0: Gained carrier Feb 13 06:07:11.771806 systemd-networkd[1319]: enp1s0f0np0: Gained carrier Feb 13 06:07:11.804725 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 06:07:11.804747 kernel: bond0: (slave enp1s0f1np1): link status definitely down, disabling slave Feb 13 06:07:11.804764 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 06:07:11.814237 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 06:07:11.814251 systemd[1]: Reached target local-fs.target. Feb 13 06:07:11.841281 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 06:07:11.841328 kernel: bond0: active interface up! 
Feb 13 06:07:11.854382 systemd-networkd[1319]: enp1s0f1np1: Link DOWN Feb 13 06:07:11.854385 systemd-networkd[1319]: enp1s0f1np1: Lost carrier Feb 13 06:07:11.858512 systemd[1]: Reached target machines.target. Feb 13 06:07:11.866975 systemd[1]: Starting ldconfig.service... Feb 13 06:07:11.873664 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 06:07:11.873689 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 06:07:11.874219 systemd[1]: Starting systemd-boot-update.service... Feb 13 06:07:11.881806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 06:07:11.891854 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 06:07:11.891953 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 13 06:07:11.891977 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 06:07:11.892560 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 06:07:11.892837 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1383 (bootctl) Feb 13 06:07:11.893513 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 06:07:11.900927 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 06:07:11.907010 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 06:07:11.912775 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 06:07:11.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:07:11.917760 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 06:07:12.030350 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 06:07:12.034135 systemd-networkd[1319]: enp1s0f1np1: Link UP Feb 13 06:07:12.034930 systemd-networkd[1319]: enp1s0f1np1: Gained carrier Feb 13 06:07:12.094181 kernel: bond0: (slave enp1s0f1np1): link status up, enabling it in 200 ms Feb 13 06:07:12.094220 kernel: bond0: (slave enp1s0f1np1): invalid new link 3 on slave Feb 13 06:07:12.323318 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 06:07:12.487735 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 06:07:12.488161 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 06:07:12.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:12.508116 systemd-fsck[1391]: fsck.fat 4.2 (2021-01-31) Feb 13 06:07:12.508116 systemd-fsck[1391]: /dev/sdb1: 789 files, 115339/258078 clusters Feb 13 06:07:12.509071 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 13 06:07:12.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:12.520129 systemd[1]: Mounting boot.mount... Feb 13 06:07:12.533118 systemd[1]: Mounted boot.mount. Feb 13 06:07:12.549935 systemd[1]: Finished systemd-boot-update.service. 
Feb 13 06:07:12.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:12.581496 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 06:07:12.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:07:12.590112 systemd[1]: Starting audit-rules.service... Feb 13 06:07:12.596888 systemd[1]: Starting clean-ca-certificates.service... Feb 13 06:07:12.605821 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 06:07:12.611000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 06:07:12.611000 audit[1411]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3014c270 a2=420 a3=0 items=0 ppid=1394 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 06:07:12.611000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 06:07:12.612851 augenrules[1411]: No rules Feb 13 06:07:12.615229 systemd[1]: Starting systemd-resolved.service... Feb 13 06:07:12.623200 systemd[1]: Starting systemd-timesyncd.service... Feb 13 06:07:12.630884 systemd[1]: Starting systemd-update-utmp.service... Feb 13 06:07:12.637590 systemd[1]: Finished audit-rules.service. Feb 13 06:07:12.644450 systemd[1]: Finished clean-ca-certificates.service. Feb 13 06:07:12.653432 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 13 06:07:12.665473 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 06:07:12.665971 systemd[1]: Finished systemd-update-utmp.service. Feb 13 06:07:12.692278 systemd[1]: Started systemd-timesyncd.service. Feb 13 06:07:12.693733 systemd-resolved[1416]: Positive Trust Anchors: Feb 13 06:07:12.693737 systemd-resolved[1416]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 06:07:12.693756 systemd-resolved[1416]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 06:07:12.698237 systemd-resolved[1416]: Using system hostname 'ci-3510.3.2-a-25d9a0518b'. Feb 13 06:07:12.700384 systemd[1]: Started systemd-resolved.service. Feb 13 06:07:12.708337 systemd[1]: Reached target network.target. Feb 13 06:07:12.713028 ldconfig[1382]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 06:07:12.716365 systemd[1]: Reached target nss-lookup.target. Feb 13 06:07:12.724368 systemd[1]: Reached target time-set.target. Feb 13 06:07:12.732492 systemd[1]: Finished ldconfig.service. Feb 13 06:07:12.740035 systemd[1]: Starting systemd-update-done.service... Feb 13 06:07:12.746463 systemd[1]: Finished systemd-update-done.service. Feb 13 06:07:12.754371 systemd[1]: Reached target sysinit.target. Feb 13 06:07:12.762350 systemd[1]: Started motdgen.path. Feb 13 06:07:12.769324 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Feb 13 06:07:12.779392 systemd[1]: Started logrotate.timer. Feb 13 06:07:12.786356 systemd[1]: Started mdadm.timer. Feb 13 06:07:12.793309 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 13 06:07:12.801308 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 06:07:12.801321 systemd[1]: Reached target paths.target. Feb 13 06:07:12.808305 systemd[1]: Reached target timers.target. Feb 13 06:07:12.815533 systemd[1]: Listening on dbus.socket. Feb 13 06:07:12.824933 systemd[1]: Starting docker.socket... Feb 13 06:07:12.832682 systemd[1]: Listening on sshd.socket. Feb 13 06:07:12.839401 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 06:07:12.839669 systemd[1]: Listening on docker.socket. Feb 13 06:07:12.846405 systemd[1]: Reached target sockets.target. Feb 13 06:07:12.854360 systemd[1]: Reached target basic.target. Feb 13 06:07:12.861377 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 06:07:12.861391 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 06:07:12.861833 systemd[1]: Starting containerd.service... Feb 13 06:07:12.868779 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 13 06:07:12.877921 systemd[1]: Starting coreos-metadata.service... Feb 13 06:07:12.884823 systemd[1]: Starting dbus.service... Feb 13 06:07:12.890815 systemd[1]: Starting enable-oem-cloudinit.service... Feb 13 06:07:12.896114 jq[1431]: false Feb 13 06:07:12.896422 coreos-metadata[1424]: Feb 13 06:07:12.896 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 06:07:12.897848 systemd[1]: Starting extend-filesystems.service... 
Feb 13 06:07:12.902076 dbus-daemon[1430]: [system] SELinux support is enabled Feb 13 06:07:12.904399 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 13 06:07:12.904952 systemd[1]: Starting motdgen.service... Feb 13 06:07:12.905443 extend-filesystems[1433]: Found sda Feb 13 06:07:12.905443 extend-filesystems[1433]: Found sdb Feb 13 06:07:12.944404 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Feb 13 06:07:12.913037 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 06:07:12.944494 coreos-metadata[1427]: Feb 13 06:07:12.906 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 06:07:12.944601 extend-filesystems[1433]: Found sdb1 Feb 13 06:07:12.944601 extend-filesystems[1433]: Found sdb2 Feb 13 06:07:12.944601 extend-filesystems[1433]: Found sdb3 Feb 13 06:07:12.944601 extend-filesystems[1433]: Found usr Feb 13 06:07:12.944601 extend-filesystems[1433]: Found sdb4 Feb 13 06:07:12.944601 extend-filesystems[1433]: Found sdb6 Feb 13 06:07:12.944601 extend-filesystems[1433]: Found sdb7 Feb 13 06:07:12.944601 extend-filesystems[1433]: Found sdb9 Feb 13 06:07:12.944601 extend-filesystems[1433]: Checking size of /dev/sdb9 Feb 13 06:07:12.944601 extend-filesystems[1433]: Resized partition /dev/sdb9 Feb 13 06:07:12.932784 systemd[1]: Starting prepare-critools.service... Feb 13 06:07:13.050541 extend-filesystems[1448]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 06:07:12.951959 systemd[1]: Starting prepare-helm.service... Feb 13 06:07:12.969917 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 06:07:12.988908 systemd[1]: Starting sshd-keygen.service... Feb 13 06:07:13.003660 systemd[1]: Starting systemd-logind.service... 
Feb 13 06:07:13.016323 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 06:07:13.016889 systemd[1]: Starting tcsd.service... Feb 13 06:07:13.072803 jq[1464]: true Feb 13 06:07:13.028024 systemd-logind[1461]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 06:07:13.073004 update_engine[1463]: I0213 06:07:13.072449 1463 main.cc:92] Flatcar Update Engine starting Feb 13 06:07:13.028033 systemd-logind[1461]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 06:07:13.028043 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 06:07:13.028145 systemd-logind[1461]: New seat seat0. Feb 13 06:07:13.028835 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 06:07:13.029244 systemd[1]: Starting update-engine.service... Feb 13 06:07:13.042849 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 13 06:07:13.064627 systemd[1]: Started dbus.service. Feb 13 06:07:13.065379 systemd-networkd[1319]: bond0: Gained IPv6LL Feb 13 06:07:13.075972 update_engine[1463]: I0213 06:07:13.075933 1463 update_check_scheduler.cc:74] Next update check in 10m9s Feb 13 06:07:13.081058 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 06:07:13.081145 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 06:07:13.081318 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 06:07:13.081396 systemd[1]: Finished motdgen.service. Feb 13 06:07:13.089527 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 06:07:13.089609 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 13 06:07:13.093899 tar[1466]: ./ Feb 13 06:07:13.093899 tar[1466]: ./loopback Feb 13 06:07:13.100203 jq[1472]: true Feb 13 06:07:13.100501 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 06:07:13.101613 tar[1467]: crictl Feb 13 06:07:13.103488 tar[1468]: linux-amd64/helm Feb 13 06:07:13.105272 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 06:07:13.105398 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 06:07:13.110654 systemd[1]: Started update-engine.service. Feb 13 06:07:13.111876 env[1473]: time="2024-02-13T06:07:13.111853218Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 06:07:13.112822 tar[1466]: ./bandwidth Feb 13 06:07:13.120622 env[1473]: time="2024-02-13T06:07:13.120598864Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 06:07:13.121764 env[1473]: time="2024-02-13T06:07:13.121749692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 06:07:13.122357 systemd[1]: Started systemd-logind.service. Feb 13 06:07:13.122455 env[1473]: time="2024-02-13T06:07:13.122401357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 06:07:13.122455 env[1473]: time="2024-02-13T06:07:13.122421050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 06:07:13.122576 env[1473]: time="2024-02-13T06:07:13.122536319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 06:07:13.122576 env[1473]: time="2024-02-13T06:07:13.122547728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 06:07:13.122576 env[1473]: time="2024-02-13T06:07:13.122559780Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 06:07:13.122576 env[1473]: time="2024-02-13T06:07:13.122567440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 06:07:13.124451 env[1473]: time="2024-02-13T06:07:13.124405915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 06:07:13.124581 env[1473]: time="2024-02-13T06:07:13.124540689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 06:07:13.124673 env[1473]: time="2024-02-13T06:07:13.124632715Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 06:07:13.124673 env[1473]: time="2024-02-13T06:07:13.124644425Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 06:07:13.126687 env[1473]: time="2024-02-13T06:07:13.126646511Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 06:07:13.126687 env[1473]: time="2024-02-13T06:07:13.126661630Z" level=info msg="metadata content store policy set" policy=shared Feb 13 06:07:13.126809 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Feb 13 06:07:13.130483 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 13 06:07:13.133203 env[1473]: time="2024-02-13T06:07:13.133189733Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 06:07:13.133228 env[1473]: time="2024-02-13T06:07:13.133209341Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 06:07:13.133228 env[1473]: time="2024-02-13T06:07:13.133220767Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 06:07:13.133259 env[1473]: time="2024-02-13T06:07:13.133240582Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 06:07:13.133259 env[1473]: time="2024-02-13T06:07:13.133252181Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 06:07:13.133303 env[1473]: time="2024-02-13T06:07:13.133263068Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 06:07:13.133303 env[1473]: time="2024-02-13T06:07:13.133278266Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 06:07:13.133303 env[1473]: time="2024-02-13T06:07:13.133291937Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 06:07:13.133353 env[1473]: time="2024-02-13T06:07:13.133303853Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 13 06:07:13.133353 env[1473]: time="2024-02-13T06:07:13.133324219Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 06:07:13.133353 env[1473]: time="2024-02-13T06:07:13.133336188Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 06:07:13.133353 env[1473]: time="2024-02-13T06:07:13.133347166Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 06:07:13.133421 env[1473]: time="2024-02-13T06:07:13.133410577Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 06:07:13.133514 env[1473]: time="2024-02-13T06:07:13.133474594Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 06:07:13.133704 env[1473]: time="2024-02-13T06:07:13.133670022Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 06:07:13.133704 env[1473]: time="2024-02-13T06:07:13.133691266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133744 env[1473]: time="2024-02-13T06:07:13.133704209Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 06:07:13.133744 env[1473]: time="2024-02-13T06:07:13.133738958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133776 env[1473]: time="2024-02-13T06:07:13.133750763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Feb 13 06:07:13.133776 env[1473]: time="2024-02-13T06:07:13.133760661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133776 env[1473]: time="2024-02-13T06:07:13.133771017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133820 env[1473]: time="2024-02-13T06:07:13.133782620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133820 env[1473]: time="2024-02-13T06:07:13.133793807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133820 env[1473]: time="2024-02-13T06:07:13.133805029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133865 env[1473]: time="2024-02-13T06:07:13.133819726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133865 env[1473]: time="2024-02-13T06:07:13.133832368Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 06:07:13.133924 env[1473]: time="2024-02-13T06:07:13.133914566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133946 env[1473]: time="2024-02-13T06:07:13.133928265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133946 env[1473]: time="2024-02-13T06:07:13.133939941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.133979 env[1473]: time="2024-02-13T06:07:13.133950338Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 06:07:13.133979 env[1473]: time="2024-02-13T06:07:13.133961702Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 06:07:13.133979 env[1473]: time="2024-02-13T06:07:13.133970909Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 06:07:13.134025 env[1473]: time="2024-02-13T06:07:13.133987033Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 06:07:13.134025 env[1473]: time="2024-02-13T06:07:13.134012862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 06:07:13.134217 env[1473]: time="2024-02-13T06:07:13.134176728Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.134301296Z" level=info msg="Connect containerd service" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.134511400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.134972018Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.135064447Z" level=info msg="Start subscribing containerd event" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.135099090Z" level=info msg="Start recovering state" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.135117775Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.135150457Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.135154708Z" level=info msg="Start event monitor" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.135168116Z" level=info msg="Start snapshots syncer" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.135173946Z" level=info msg="Start cni network conf syncer for default" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.135178392Z" level=info msg="Start streaming server" Feb 13 06:07:13.136096 env[1473]: time="2024-02-13T06:07:13.135183769Z" level=info msg="containerd successfully booted in 0.023668s" Feb 13 06:07:13.140420 systemd[1]: Started containerd.service. Feb 13 06:07:13.148887 tar[1466]: ./ptp Feb 13 06:07:13.149142 systemd[1]: Started locksmithd.service. Feb 13 06:07:13.155396 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 06:07:13.155492 systemd[1]: Reached target system-config.target. Feb 13 06:07:13.163394 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 06:07:13.163482 systemd[1]: Reached target user-config.target. 
Feb 13 06:07:13.172816 tar[1466]: ./vlan Feb 13 06:07:13.195678 tar[1466]: ./host-device Feb 13 06:07:13.204891 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 06:07:13.217867 tar[1466]: ./tuning Feb 13 06:07:13.237512 tar[1466]: ./vrf Feb 13 06:07:13.258170 tar[1466]: ./sbr Feb 13 06:07:13.278381 tar[1466]: ./tap Feb 13 06:07:13.301640 tar[1466]: ./dhcp Feb 13 06:07:13.310321 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 06:07:13.357487 tar[1468]: linux-amd64/LICENSE Feb 13 06:07:13.357559 tar[1468]: linux-amd64/README.md Feb 13 06:07:13.359884 tar[1466]: ./static Feb 13 06:07:13.360037 systemd[1]: Finished prepare-helm.service. Feb 13 06:07:13.375850 tar[1466]: ./firewall Feb 13 06:07:13.376631 systemd[1]: Finished prepare-critools.service. Feb 13 06:07:13.400251 tar[1466]: ./macvlan Feb 13 06:07:13.414310 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Feb 13 06:07:13.441762 extend-filesystems[1448]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Feb 13 06:07:13.441762 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 06:07:13.441762 extend-filesystems[1448]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Feb 13 06:07:13.480325 extend-filesystems[1433]: Resized filesystem in /dev/sdb9 Feb 13 06:07:13.488359 tar[1466]: ./dummy Feb 13 06:07:13.488359 tar[1466]: ./bridge Feb 13 06:07:13.442171 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 06:07:13.442284 systemd[1]: Finished extend-filesystems.service. Feb 13 06:07:13.495228 tar[1466]: ./ipvlan Feb 13 06:07:13.517248 tar[1466]: ./portmap Feb 13 06:07:13.537949 tar[1466]: ./host-local Feb 13 06:07:13.561949 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 06:07:13.604776 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 06:07:13.616270 systemd[1]: Finished sshd-keygen.service. 
Feb 13 06:07:13.625214 systemd[1]: Starting issuegen.service... Feb 13 06:07:13.633586 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 06:07:13.633684 systemd[1]: Finished issuegen.service. Feb 13 06:07:13.642238 systemd[1]: Starting systemd-user-sessions.service... Feb 13 06:07:13.651677 systemd[1]: Finished systemd-user-sessions.service. Feb 13 06:07:13.662184 systemd[1]: Started getty@tty1.service. Feb 13 06:07:13.671268 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 06:07:13.679598 systemd[1]: Reached target getty.target. Feb 13 06:07:18.700381 login[1537]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 13 06:07:18.701685 login[1538]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 06:07:18.708441 systemd[1]: Created slice user-500.slice. Feb 13 06:07:18.708942 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 06:07:18.709931 systemd-logind[1461]: New session 2 of user core. Feb 13 06:07:18.713997 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 06:07:18.714658 systemd[1]: Starting user@500.service... Feb 13 06:07:18.716477 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:07:18.825765 coreos-metadata[1424]: Feb 13 06:07:18.825 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 06:07:18.826020 coreos-metadata[1427]: Feb 13 06:07:18.825 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 06:07:18.854029 systemd[1540]: Queued start job for default target default.target. Feb 13 06:07:18.854272 systemd[1540]: Reached target paths.target. Feb 13 06:07:18.854288 systemd[1540]: Reached target sockets.target. 
Feb 13 06:07:18.854297 systemd[1540]: Reached target timers.target. Feb 13 06:07:18.854304 systemd[1540]: Reached target basic.target. Feb 13 06:07:18.854323 systemd[1540]: Reached target default.target. Feb 13 06:07:18.854337 systemd[1540]: Startup finished in 134ms. Feb 13 06:07:18.854365 systemd[1]: Started user@500.service. Feb 13 06:07:18.854890 systemd[1]: Started session-2.scope. Feb 13 06:07:19.504297 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 13 06:07:19.511280 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 13 06:07:19.701047 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 06:07:19.712019 systemd-logind[1461]: New session 1 of user core. Feb 13 06:07:19.714372 systemd[1]: Started session-1.scope. Feb 13 06:07:19.826363 coreos-metadata[1424]: Feb 13 06:07:19.826 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 06:07:19.826764 coreos-metadata[1427]: Feb 13 06:07:19.826 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 06:07:19.859569 systemd[1]: Created slice system-sshd.slice. Feb 13 06:07:19.860071 systemd[1]: Started sshd@0-145.40.90.207:22-43.153.36.182:33376.service. Feb 13 06:07:19.874497 coreos-metadata[1424]: Feb 13 06:07:19.874 INFO Fetch successful Feb 13 06:07:19.876499 coreos-metadata[1427]: Feb 13 06:07:19.876 INFO Fetch successful Feb 13 06:07:19.897252 unknown[1424]: wrote ssh authorized keys file for user: core Feb 13 06:07:19.898683 systemd[1]: Finished coreos-metadata.service. Feb 13 06:07:19.899465 systemd[1]: Started packet-phone-home.service. Feb 13 06:07:19.906037 curl[1566]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 06:07:19.906176 curl[1566]: Dload Upload Total Spent Left Speed Feb 13 06:07:19.909964 update-ssh-keys[1565]: Updated "/home/core/.ssh/authorized_keys" Feb 13 06:07:19.911102 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Feb 13 06:07:19.912035 systemd[1]: Reached target multi-user.target.
Feb 13 06:07:19.915217 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 13 06:07:19.934743 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 13 06:07:19.935139 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 13 06:07:19.935668 systemd[1]: Startup finished in 1.850s (kernel) + 20.018s (initrd) + 14.085s (userspace) = 35.954s.
Feb 13 06:07:20.073867 systemd[1]: Started sshd@1-145.40.90.207:22-139.178.68.195:55722.service.
Feb 13 06:07:20.124912 curl[1566]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Feb 13 06:07:20.125390 systemd[1]: packet-phone-home.service: Deactivated successfully.
Feb 13 06:07:20.127787 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 55722 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:07:20.128495 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:07:20.130841 systemd-logind[1461]: New session 3 of user core.
Feb 13 06:07:20.131234 systemd[1]: Started session-3.scope.
Feb 13 06:07:20.178765 systemd[1]: Started sshd@2-145.40.90.207:22-139.178.68.195:55736.service.
Feb 13 06:07:20.190604 systemd-timesyncd[1417]: Contacted time server 72.46.53.234:123 (0.flatcar.pool.ntp.org).
Feb 13 06:07:20.190653 systemd-timesyncd[1417]: Initial clock synchronization to Tue 2024-02-13 06:07:20.545578 UTC.
Feb 13 06:07:20.207937 sshd[1574]: Accepted publickey for core from 139.178.68.195 port 55736 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:07:20.208604 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:07:20.210928 systemd-logind[1461]: New session 4 of user core.
Feb 13 06:07:20.211406 systemd[1]: Started session-4.scope.
Feb 13 06:07:20.248750 sshd[1561]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.153.36.182 user=root
Feb 13 06:07:20.261399 sshd[1574]: pam_unix(sshd:session): session closed for user core
Feb 13 06:07:20.263379 systemd[1]: sshd@2-145.40.90.207:22-139.178.68.195:55736.service: Deactivated successfully.
Feb 13 06:07:20.263822 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 06:07:20.264255 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit.
Feb 13 06:07:20.265049 systemd[1]: Started sshd@3-145.40.90.207:22-139.178.68.195:55748.service.
Feb 13 06:07:20.265731 systemd-logind[1461]: Removed session 4.
Feb 13 06:07:20.336981 sshd[1580]: Accepted publickey for core from 139.178.68.195 port 55748 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:07:20.339224 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:07:20.346793 systemd-logind[1461]: New session 5 of user core.
Feb 13 06:07:20.348384 systemd[1]: Started session-5.scope.
Feb 13 06:07:20.409998 sshd[1580]: pam_unix(sshd:session): session closed for user core
Feb 13 06:07:20.411733 systemd[1]: sshd@3-145.40.90.207:22-139.178.68.195:55748.service: Deactivated successfully.
Feb 13 06:07:20.412044 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 06:07:20.412311 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit.
Feb 13 06:07:20.412819 systemd[1]: Started sshd@4-145.40.90.207:22-139.178.68.195:55762.service.
Feb 13 06:07:20.413184 systemd-logind[1461]: Removed session 5.
Feb 13 06:07:20.441549 sshd[1587]: Accepted publickey for core from 139.178.68.195 port 55762 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:07:20.442333 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:07:20.444965 systemd-logind[1461]: New session 6 of user core.
Feb 13 06:07:20.445534 systemd[1]: Started session-6.scope.
Feb 13 06:07:20.509725 sshd[1587]: pam_unix(sshd:session): session closed for user core
Feb 13 06:07:20.516245 systemd[1]: sshd@4-145.40.90.207:22-139.178.68.195:55762.service: Deactivated successfully.
Feb 13 06:07:20.517797 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 06:07:20.519382 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit.
Feb 13 06:07:20.522018 systemd[1]: Started sshd@5-145.40.90.207:22-139.178.68.195:55772.service.
Feb 13 06:07:20.524311 systemd-logind[1461]: Removed session 6.
Feb 13 06:07:20.575245 sshd[1593]: Accepted publickey for core from 139.178.68.195 port 55772 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:07:20.575857 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:07:20.578202 systemd-logind[1461]: New session 7 of user core.
Feb 13 06:07:20.578673 systemd[1]: Started session-7.scope.
Feb 13 06:07:20.639966 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 06:07:20.640252 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 13 06:07:22.108144 systemd[1]: Started sshd@6-145.40.90.207:22-103.86.198.162:47260.service.
Feb 13 06:07:22.592804 sshd[1561]: Failed password for root from 43.153.36.182 port 33376 ssh2
Feb 13 06:07:22.875398 sshd[1602]: Connection closed by 103.86.198.162 port 47260 [preauth]
Feb 13 06:07:22.877210 systemd[1]: sshd@6-145.40.90.207:22-103.86.198.162:47260.service: Deactivated successfully.
Feb 13 06:07:24.541060 sshd[1561]: Received disconnect from 43.153.36.182 port 33376:11: Bye Bye [preauth]
Feb 13 06:07:24.541060 sshd[1561]: Disconnected from authenticating user root 43.153.36.182 port 33376 [preauth]
Feb 13 06:07:24.543739 systemd[1]: sshd@0-145.40.90.207:22-43.153.36.182:33376.service: Deactivated successfully.
Feb 13 06:07:24.744273 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 13 06:07:24.748672 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 13 06:07:24.748878 systemd[1]: Reached target network-online.target.
Feb 13 06:07:24.749627 systemd[1]: Starting docker.service...
Feb 13 06:07:24.770577 env[1621]: time="2024-02-13T06:07:24.770517471Z" level=info msg="Starting up"
Feb 13 06:07:24.771236 env[1621]: time="2024-02-13T06:07:24.771197037Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 13 06:07:24.771236 env[1621]: time="2024-02-13T06:07:24.771204994Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 13 06:07:24.771236 env[1621]: time="2024-02-13T06:07:24.771215342Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 13 06:07:24.771236 env[1621]: time="2024-02-13T06:07:24.771220897Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 13 06:07:24.772210 env[1621]: time="2024-02-13T06:07:24.772200983Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 13 06:07:24.772210 env[1621]: time="2024-02-13T06:07:24.772208921Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 13 06:07:24.772265 env[1621]: time="2024-02-13T06:07:24.772216161Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 13 06:07:24.772265 env[1621]: time="2024-02-13T06:07:24.772221066Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 13 06:07:24.786661 env[1621]: time="2024-02-13T06:07:24.786620452Z" level=info msg="Loading containers: start."
Feb 13 06:07:24.894374 kernel: Initializing XFRM netlink socket
Feb 13 06:07:24.950520 env[1621]: time="2024-02-13T06:07:24.950498193Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 13 06:07:24.997165 systemd-networkd[1319]: docker0: Link UP
Feb 13 06:07:25.001927 env[1621]: time="2024-02-13T06:07:25.001910944Z" level=info msg="Loading containers: done."
Feb 13 06:07:25.007707 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck560063440-merged.mount: Deactivated successfully.
Feb 13 06:07:25.007880 env[1621]: time="2024-02-13T06:07:25.007794272Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 06:07:25.007945 env[1621]: time="2024-02-13T06:07:25.007899996Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 13 06:07:25.007996 env[1621]: time="2024-02-13T06:07:25.007953313Z" level=info msg="Daemon has completed initialization"
Feb 13 06:07:25.015684 systemd[1]: Started docker.service.
Feb 13 06:07:25.019460 env[1621]: time="2024-02-13T06:07:25.019409703Z" level=info msg="API listen on /run/docker.sock"
Feb 13 06:07:25.035829 systemd[1]: Reloading.
Feb 13 06:07:25.093416 /usr/lib/systemd/system-generators/torcx-generator[1775]: time="2024-02-13T06:07:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 13 06:07:25.093432 /usr/lib/systemd/system-generators/torcx-generator[1775]: time="2024-02-13T06:07:25Z" level=info msg="torcx already run"
Feb 13 06:07:25.145207 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 13 06:07:25.145215 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 13 06:07:25.157727 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 06:07:25.211640 systemd[1]: Started kubelet.service.
Feb 13 06:07:25.234986 kubelet[1832]: E0213 06:07:25.234921 1832 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 06:07:25.236109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 06:07:25.236179 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 06:07:25.919585 env[1473]: time="2024-02-13T06:07:25.919443097Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\""
Feb 13 06:07:26.693121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582099782.mount: Deactivated successfully.
Feb 13 06:07:27.970593 env[1473]: time="2024-02-13T06:07:27.970522761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:27.971202 env[1473]: time="2024-02-13T06:07:27.971159218Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:27.972658 env[1473]: time="2024-02-13T06:07:27.972624406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:27.973601 env[1473]: time="2024-02-13T06:07:27.973555188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:27.974435 env[1473]: time="2024-02-13T06:07:27.974400526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\""
Feb 13 06:07:27.980077 env[1473]: time="2024-02-13T06:07:27.980061632Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\""
Feb 13 06:07:29.595624 env[1473]: time="2024-02-13T06:07:29.595563078Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:29.596199 env[1473]: time="2024-02-13T06:07:29.596139283Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:29.597093 env[1473]: time="2024-02-13T06:07:29.597046505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:29.598134 env[1473]: time="2024-02-13T06:07:29.598094469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:29.598580 env[1473]: time="2024-02-13T06:07:29.598537649Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\""
Feb 13 06:07:29.604569 env[1473]: time="2024-02-13T06:07:29.604541541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb 13 06:07:30.152564 systemd[1]: Started sshd@7-145.40.90.207:22-139.59.81.65:45030.service.
Feb 13 06:07:30.719169 env[1473]: time="2024-02-13T06:07:30.719092367Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:30.720337 env[1473]: time="2024-02-13T06:07:30.720274539Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:30.722632 env[1473]: time="2024-02-13T06:07:30.722585018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:30.724454 env[1473]: time="2024-02-13T06:07:30.724404529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:30.725272 env[1473]: time="2024-02-13T06:07:30.725212489Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\""
Feb 13 06:07:30.735541 env[1473]: time="2024-02-13T06:07:30.735492507Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 13 06:07:31.541711 sshd[1892]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root
Feb 13 06:07:31.617984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406681832.mount: Deactivated successfully.
Feb 13 06:07:31.963406 env[1473]: time="2024-02-13T06:07:31.963363669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:31.963990 env[1473]: time="2024-02-13T06:07:31.963950118Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:31.965146 env[1473]: time="2024-02-13T06:07:31.965090519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:31.965926 env[1473]: time="2024-02-13T06:07:31.965877812Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:31.966522 env[1473]: time="2024-02-13T06:07:31.966487624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\""
Feb 13 06:07:31.972938 env[1473]: time="2024-02-13T06:07:31.972920708Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 06:07:32.443976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544434124.mount: Deactivated successfully.
Feb 13 06:07:32.456751 env[1473]: time="2024-02-13T06:07:32.456606911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:32.458998 env[1473]: time="2024-02-13T06:07:32.458878755Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:32.462432 env[1473]: time="2024-02-13T06:07:32.462324656Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:32.465931 env[1473]: time="2024-02-13T06:07:32.465828186Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:32.467765 env[1473]: time="2024-02-13T06:07:32.467636956Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 06:07:32.482866 env[1473]: time="2024-02-13T06:07:32.482849157Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\""
Feb 13 06:07:33.042057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276158764.mount: Deactivated successfully.
Feb 13 06:07:33.394170 sshd[1892]: Failed password for root from 139.59.81.65 port 45030 ssh2
Feb 13 06:07:33.723691 systemd[1]: Started sshd@8-145.40.90.207:22-122.155.0.205:16663.service.
Feb 13 06:07:33.938088 sshd[1892]: Received disconnect from 139.59.81.65 port 45030:11: Bye Bye [preauth]
Feb 13 06:07:33.938088 sshd[1892]: Disconnected from authenticating user root 139.59.81.65 port 45030 [preauth]
Feb 13 06:07:33.939864 systemd[1]: sshd@7-145.40.90.207:22-139.59.81.65:45030.service: Deactivated successfully.
Feb 13 06:07:33.941134 systemd[1]: Started sshd@9-145.40.90.207:22-20.127.106.136:47374.service.
Feb 13 06:07:34.357620 sshd[1926]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root
Feb 13 06:07:34.894959 sshd[1922]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=122.155.0.205 user=root
Feb 13 06:07:35.460001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 06:07:35.460133 systemd[1]: Stopped kubelet.service.
Feb 13 06:07:35.460948 systemd[1]: Started kubelet.service.
Feb 13 06:07:35.486096 kubelet[1929]: E0213 06:07:35.486009 1929 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 06:07:35.489049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 06:07:35.489119 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 06:07:36.059170 env[1473]: time="2024-02-13T06:07:36.059124106Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:36.059725 env[1473]: time="2024-02-13T06:07:36.059673326Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:36.061070 env[1473]: time="2024-02-13T06:07:36.061014714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:36.061867 env[1473]: time="2024-02-13T06:07:36.061821737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:36.062782 env[1473]: time="2024-02-13T06:07:36.062745514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\""
Feb 13 06:07:36.068772 env[1473]: time="2024-02-13T06:07:36.068756673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 13 06:07:36.229085 systemd[1]: Started sshd@10-145.40.90.207:22-139.59.22.185:35042.service.
Feb 13 06:07:36.290078 sshd[1926]: Failed password for root from 20.127.106.136 port 47374 ssh2
Feb 13 06:07:36.551211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591444562.mount: Deactivated successfully.
Feb 13 06:07:36.560669 sshd[1926]: Received disconnect from 20.127.106.136 port 47374:11: Bye Bye [preauth]
Feb 13 06:07:36.560669 sshd[1926]: Disconnected from authenticating user root 20.127.106.136 port 47374 [preauth]
Feb 13 06:07:36.561203 systemd[1]: sshd@9-145.40.90.207:22-20.127.106.136:47374.service: Deactivated successfully.
Feb 13 06:07:36.819887 systemd[1]: Started sshd@11-145.40.90.207:22-154.222.225.117:45058.service.
Feb 13 06:07:36.826450 sshd[1922]: Failed password for root from 122.155.0.205 port 16663 ssh2
Feb 13 06:07:37.031803 env[1473]: time="2024-02-13T06:07:37.031775221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:37.032498 env[1473]: time="2024-02-13T06:07:37.032485373Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:37.033267 env[1473]: time="2024-02-13T06:07:37.033254148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:37.034043 env[1473]: time="2024-02-13T06:07:37.034030974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:07:37.034727 env[1473]: time="2024-02-13T06:07:37.034713347Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Feb 13 06:07:37.248862 sshd[1922]: Received disconnect from 122.155.0.205 port 16663:11: Bye Bye [preauth]
Feb 13 06:07:37.248862 sshd[1922]: Disconnected from authenticating user root 122.155.0.205 port 16663 [preauth]
Feb 13 06:07:37.249657 systemd[1]: sshd@8-145.40.90.207:22-122.155.0.205:16663.service: Deactivated successfully.
Feb 13 06:07:37.559225 sshd[1959]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.22.185 user=root
Feb 13 06:07:37.559274 sshd[1959]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 06:07:37.767980 sshd[1963]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root
Feb 13 06:07:38.535644 systemd[1]: Stopped kubelet.service.
Feb 13 06:07:38.547261 systemd[1]: Reloading.
Feb 13 06:07:38.607363 /usr/lib/systemd/system-generators/torcx-generator[2108]: time="2024-02-13T06:07:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 13 06:07:38.607387 /usr/lib/systemd/system-generators/torcx-generator[2108]: time="2024-02-13T06:07:38Z" level=info msg="torcx already run"
Feb 13 06:07:38.659238 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 13 06:07:38.659245 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 13 06:07:38.671443 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 06:07:38.726737 systemd[1]: Started kubelet.service.
Feb 13 06:07:38.749007 kubelet[2167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 06:07:38.749007 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 06:07:38.749007 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 06:07:38.749007 kubelet[2167]: I0213 06:07:38.748992 2167 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 06:07:38.990490 kubelet[2167]: I0213 06:07:38.990458 2167 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 13 06:07:38.990490 kubelet[2167]: I0213 06:07:38.990486 2167 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 06:07:38.990628 kubelet[2167]: I0213 06:07:38.990600 2167 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 13 06:07:38.992630 kubelet[2167]: I0213 06:07:38.992597 2167 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 06:07:38.994053 kubelet[2167]: E0213 06:07:38.994044 2167 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://145.40.90.207:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 145.40.90.207:6443: connect: connection refused
Feb 13 06:07:39.012830 kubelet[2167]: I0213 06:07:39.012794 2167 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 06:07:39.012924 kubelet[2167]: I0213 06:07:39.012889 2167 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 06:07:39.013009 kubelet[2167]: I0213 06:07:39.012970 2167 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 06:07:39.013009 kubelet[2167]: I0213 06:07:39.012982 2167 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 06:07:39.013009 kubelet[2167]: I0213 06:07:39.012988 2167 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 06:07:39.013117 kubelet[2167]: I0213 06:07:39.013039 2167 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 06:07:39.013117 kubelet[2167]: I0213 06:07:39.013079 2167 kubelet.go:393] "Attempting to sync node with API server"
Feb 13 06:07:39.013117 kubelet[2167]: I0213 06:07:39.013088 2167 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 06:07:39.013117 kubelet[2167]: I0213 06:07:39.013100 2167 kubelet.go:309] "Adding apiserver pod source"
Feb 13 06:07:39.013117 kubelet[2167]: I0213 06:07:39.013107 2167 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 06:07:39.013405 kubelet[2167]: W0213 06:07:39.013386 2167 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://145.40.90.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-25d9a0518b&limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused
Feb 13 06:07:39.013483 kubelet[2167]: E0213 06:07:39.013412 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://145.40.90.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-25d9a0518b&limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused
Feb 13 06:07:39.013483 kubelet[2167]: I0213 06:07:39.013451 2167 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 13 06:07:39.013553 kubelet[2167]: W0213 06:07:39.013457 2167 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://145.40.90.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused
Feb 13 06:07:39.013553 kubelet[2167]: E0213 06:07:39.013533 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://145.40.90.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused
Feb 13 06:07:39.013725 kubelet[2167]: W0213 06:07:39.013692 2167 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 06:07:39.014007 kubelet[2167]: I0213 06:07:39.014002 2167 server.go:1232] "Started kubelet"
Feb 13 06:07:39.014065 kubelet[2167]: I0213 06:07:39.014052 2167 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 06:07:39.014113 kubelet[2167]: I0213 06:07:39.014069 2167 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 13 06:07:39.014219 kubelet[2167]: I0213 06:07:39.014210 2167 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 06:07:39.014219 kubelet[2167]: E0213 06:07:39.014210 2167 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 13 06:07:39.014292 kubelet[2167]: E0213 06:07:39.014228 2167 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 06:07:39.014292 kubelet[2167]: E0213 06:07:39.014196 2167 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-25d9a0518b.17b3571b6d5b304e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-25d9a0518b", UID:"ci-3510.3.2-a-25d9a0518b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-25d9a0518b"}, FirstTimestamp:time.Date(2024, time.February, 13, 6, 7, 39, 13976142, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 6, 7, 39, 13976142, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-25d9a0518b"}': 'Post "https://145.40.90.207:6443/api/v1/namespaces/default/events": dial tcp 145.40.90.207:6443: connect: connection refused'(may retry after sleeping)
Feb 13 06:07:39.024046 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 13 06:07:39.024100 kubelet[2167]: I0213 06:07:39.024040 2167 server.go:462] "Adding debug handlers to kubelet server" Feb 13 06:07:39.024100 kubelet[2167]: I0213 06:07:39.024075 2167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 06:07:39.024212 kubelet[2167]: I0213 06:07:39.024204 2167 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 06:07:39.024246 kubelet[2167]: E0213 06:07:39.024219 2167 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-25d9a0518b\" not found" Feb 13 06:07:39.024246 kubelet[2167]: I0213 06:07:39.024231 2167 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 06:07:39.024312 kubelet[2167]: I0213 06:07:39.024283 2167 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 06:07:39.024711 kubelet[2167]: E0213 06:07:39.024701 2167 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-25d9a0518b?timeout=10s\": dial tcp 145.40.90.207:6443: connect: connection refused" interval="200ms" Feb 13 06:07:39.024829 kubelet[2167]: W0213 06:07:39.024796 2167 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://145.40.90.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 06:07:39.024923 kubelet[2167]: E0213 06:07:39.024910 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.90.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 06:07:39.037754 kubelet[2167]: I0213 06:07:39.037741 2167 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 06:07:39.037754 kubelet[2167]: I0213 
06:07:39.037752 2167 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 06:07:39.037863 kubelet[2167]: I0213 06:07:39.037762 2167 state_mem.go:36] "Initialized new in-memory state store" Feb 13 06:07:39.038071 kubelet[2167]: I0213 06:07:39.038062 2167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 06:07:39.038510 kubelet[2167]: I0213 06:07:39.038501 2167 policy_none.go:49] "None policy: Start" Feb 13 06:07:39.038510 kubelet[2167]: I0213 06:07:39.038510 2167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 06:07:39.038557 kubelet[2167]: I0213 06:07:39.038522 2167 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 06:07:39.038557 kubelet[2167]: I0213 06:07:39.038534 2167 kubelet.go:2303] "Starting kubelet main sync loop" Feb 13 06:07:39.038601 kubelet[2167]: E0213 06:07:39.038558 2167 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 06:07:39.038863 kubelet[2167]: W0213 06:07:39.038828 2167 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://145.40.90.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 06:07:39.038895 kubelet[2167]: E0213 06:07:39.038872 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.90.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 06:07:39.038895 kubelet[2167]: I0213 06:07:39.038881 2167 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 06:07:39.038895 kubelet[2167]: I0213 06:07:39.038893 2167 state_mem.go:35] "Initializing new in-memory state store" Feb 13 
06:07:39.041072 systemd[1]: Created slice kubepods.slice. Feb 13 06:07:39.043166 systemd[1]: Created slice kubepods-besteffort.slice. Feb 13 06:07:39.067122 systemd[1]: Created slice kubepods-burstable.slice. Feb 13 06:07:39.068018 kubelet[2167]: I0213 06:07:39.068006 2167 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 06:07:39.068171 kubelet[2167]: I0213 06:07:39.068133 2167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 06:07:39.068414 kubelet[2167]: E0213 06:07:39.068401 2167 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-25d9a0518b\" not found" Feb 13 06:07:39.128328 kubelet[2167]: I0213 06:07:39.128239 2167 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.129131 kubelet[2167]: E0213 06:07:39.129039 2167 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://145.40.90.207:6443/api/v1/nodes\": dial tcp 145.40.90.207:6443: connect: connection refused" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.139409 kubelet[2167]: I0213 06:07:39.139333 2167 topology_manager.go:215] "Topology Admit Handler" podUID="7521b36d30ba18742780652acf16d26f" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.142824 kubelet[2167]: I0213 06:07:39.142744 2167 topology_manager.go:215] "Topology Admit Handler" podUID="c4d5b4c536fe7fceb39292d6a85906d7" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.149176 kubelet[2167]: I0213 06:07:39.149103 2167 topology_manager.go:215] "Topology Admit Handler" podUID="36aa6311c39622229c21a2d8a426613c" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.162320 systemd[1]: Created slice kubepods-burstable-pod7521b36d30ba18742780652acf16d26f.slice. 
Feb 13 06:07:39.193782 systemd[1]: Created slice kubepods-burstable-podc4d5b4c536fe7fceb39292d6a85906d7.slice. Feb 13 06:07:39.202101 systemd[1]: Created slice kubepods-burstable-pod36aa6311c39622229c21a2d8a426613c.slice. Feb 13 06:07:39.225272 kubelet[2167]: I0213 06:07:39.225219 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.225554 kubelet[2167]: I0213 06:07:39.225337 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.225554 kubelet[2167]: I0213 06:07:39.225407 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4d5b4c536fe7fceb39292d6a85906d7-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-25d9a0518b\" (UID: \"c4d5b4c536fe7fceb39292d6a85906d7\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.225554 kubelet[2167]: I0213 06:07:39.225471 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4d5b4c536fe7fceb39292d6a85906d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-25d9a0518b\" (UID: \"c4d5b4c536fe7fceb39292d6a85906d7\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.225554 kubelet[2167]: I0213 06:07:39.225527 2167 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.225971 kubelet[2167]: I0213 06:07:39.225583 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.225971 kubelet[2167]: I0213 06:07:39.225664 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.225971 kubelet[2167]: I0213 06:07:39.225724 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7521b36d30ba18742780652acf16d26f-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-25d9a0518b\" (UID: \"7521b36d30ba18742780652acf16d26f\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.225971 kubelet[2167]: I0213 06:07:39.225777 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4d5b4c536fe7fceb39292d6a85906d7-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-25d9a0518b\" (UID: 
\"c4d5b4c536fe7fceb39292d6a85906d7\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.225971 kubelet[2167]: E0213 06:07:39.225871 2167 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-25d9a0518b?timeout=10s\": dial tcp 145.40.90.207:6443: connect: connection refused" interval="400ms" Feb 13 06:07:39.333218 kubelet[2167]: I0213 06:07:39.333009 2167 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.333857 kubelet[2167]: E0213 06:07:39.333780 2167 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://145.40.90.207:6443/api/v1/nodes\": dial tcp 145.40.90.207:6443: connect: connection refused" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.487415 env[1473]: time="2024-02-13T06:07:39.487323900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-25d9a0518b,Uid:7521b36d30ba18742780652acf16d26f,Namespace:kube-system,Attempt:0,}" Feb 13 06:07:39.499973 env[1473]: time="2024-02-13T06:07:39.499898912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-25d9a0518b,Uid:c4d5b4c536fe7fceb39292d6a85906d7,Namespace:kube-system,Attempt:0,}" Feb 13 06:07:39.507159 env[1473]: time="2024-02-13T06:07:39.507083176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-25d9a0518b,Uid:36aa6311c39622229c21a2d8a426613c,Namespace:kube-system,Attempt:0,}" Feb 13 06:07:39.626928 kubelet[2167]: E0213 06:07:39.626745 2167 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-25d9a0518b?timeout=10s\": dial tcp 145.40.90.207:6443: connect: connection refused" interval="800ms" Feb 13 06:07:39.738017 kubelet[2167]: I0213 
06:07:39.737959 2167 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.738762 kubelet[2167]: E0213 06:07:39.738681 2167 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://145.40.90.207:6443/api/v1/nodes\": dial tcp 145.40.90.207:6443: connect: connection refused" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:39.901124 sshd[1959]: Failed password for root from 139.59.22.185 port 35042 ssh2 Feb 13 06:07:39.932585 kubelet[2167]: W0213 06:07:39.932445 2167 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://145.40.90.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 06:07:39.933400 kubelet[2167]: E0213 06:07:39.932606 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.90.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 06:07:39.937235 kubelet[2167]: W0213 06:07:39.937129 2167 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://145.40.90.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 06:07:39.937460 kubelet[2167]: E0213 06:07:39.937261 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.90.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 06:07:40.042383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085339212.mount: Deactivated successfully. 
Feb 13 06:07:40.044217 env[1473]: time="2024-02-13T06:07:40.044170453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.045300 env[1473]: time="2024-02-13T06:07:40.045257629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.045913 env[1473]: time="2024-02-13T06:07:40.045873585Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.046661 env[1473]: time="2024-02-13T06:07:40.046620269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.047337 env[1473]: time="2024-02-13T06:07:40.047285078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.048647 env[1473]: time="2024-02-13T06:07:40.048605942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.050311 env[1473]: time="2024-02-13T06:07:40.050292195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.050748 env[1473]: time="2024-02-13T06:07:40.050706013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.052092 env[1473]: time="2024-02-13T06:07:40.052047823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.052559 env[1473]: time="2024-02-13T06:07:40.052520427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.052945 env[1473]: time="2024-02-13T06:07:40.052912447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.053467 env[1473]: time="2024-02-13T06:07:40.053433949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:07:40.057581 env[1473]: time="2024-02-13T06:07:40.057548079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 06:07:40.057581 env[1473]: time="2024-02-13T06:07:40.057568302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 06:07:40.057581 env[1473]: time="2024-02-13T06:07:40.057575133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 06:07:40.057705 env[1473]: time="2024-02-13T06:07:40.057643330Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2e6fa7f7c75a187cb3c0e44e1bee1e9a089ebe79eb59aa6e5a6618b63455999 pid=2217 runtime=io.containerd.runc.v2 Feb 13 06:07:40.058716 env[1473]: time="2024-02-13T06:07:40.058654654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 06:07:40.058716 env[1473]: time="2024-02-13T06:07:40.058679373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 06:07:40.058716 env[1473]: time="2024-02-13T06:07:40.058694767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 06:07:40.058840 env[1473]: time="2024-02-13T06:07:40.058782353Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbfc66085b9110eb2921381804838b33c014b3c0df2a530ebcc82a370919e32c pid=2233 runtime=io.containerd.runc.v2 Feb 13 06:07:40.059631 env[1473]: time="2024-02-13T06:07:40.059599272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 06:07:40.059631 env[1473]: time="2024-02-13T06:07:40.059623767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 06:07:40.059724 env[1473]: time="2024-02-13T06:07:40.059633739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 06:07:40.059758 env[1473]: time="2024-02-13T06:07:40.059736683Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b2dadb076d1f16988a2911e21bef8ebd4691497816fed897849aa4a90039f01 pid=2246 runtime=io.containerd.runc.v2 Feb 13 06:07:40.063556 systemd[1]: Started cri-containerd-c2e6fa7f7c75a187cb3c0e44e1bee1e9a089ebe79eb59aa6e5a6618b63455999.scope. Feb 13 06:07:40.065544 systemd[1]: Started cri-containerd-8b2dadb076d1f16988a2911e21bef8ebd4691497816fed897849aa4a90039f01.scope. Feb 13 06:07:40.066288 systemd[1]: Started cri-containerd-cbfc66085b9110eb2921381804838b33c014b3c0df2a530ebcc82a370919e32c.scope. Feb 13 06:07:40.088101 env[1473]: time="2024-02-13T06:07:40.088064425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-25d9a0518b,Uid:36aa6311c39622229c21a2d8a426613c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbfc66085b9110eb2921381804838b33c014b3c0df2a530ebcc82a370919e32c\"" Feb 13 06:07:40.088200 env[1473]: time="2024-02-13T06:07:40.088065176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-25d9a0518b,Uid:7521b36d30ba18742780652acf16d26f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2e6fa7f7c75a187cb3c0e44e1bee1e9a089ebe79eb59aa6e5a6618b63455999\"" Feb 13 06:07:40.089324 env[1473]: time="2024-02-13T06:07:40.089308225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-25d9a0518b,Uid:c4d5b4c536fe7fceb39292d6a85906d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b2dadb076d1f16988a2911e21bef8ebd4691497816fed897849aa4a90039f01\"" Feb 13 06:07:40.090136 env[1473]: time="2024-02-13T06:07:40.090119904Z" level=info msg="CreateContainer within sandbox \"c2e6fa7f7c75a187cb3c0e44e1bee1e9a089ebe79eb59aa6e5a6618b63455999\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 
06:07:40.090243 env[1473]: time="2024-02-13T06:07:40.090224898Z" level=info msg="CreateContainer within sandbox \"cbfc66085b9110eb2921381804838b33c014b3c0df2a530ebcc82a370919e32c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 06:07:40.090328 env[1473]: time="2024-02-13T06:07:40.090312685Z" level=info msg="CreateContainer within sandbox \"8b2dadb076d1f16988a2911e21bef8ebd4691497816fed897849aa4a90039f01\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 06:07:40.096682 env[1473]: time="2024-02-13T06:07:40.096636054Z" level=info msg="CreateContainer within sandbox \"c2e6fa7f7c75a187cb3c0e44e1bee1e9a089ebe79eb59aa6e5a6618b63455999\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"50ba89f0c8df517464f0cda43fda9869ac23948ade70bcd1f4388d71d759a997\"" Feb 13 06:07:40.096953 env[1473]: time="2024-02-13T06:07:40.096889830Z" level=info msg="StartContainer for \"50ba89f0c8df517464f0cda43fda9869ac23948ade70bcd1f4388d71d759a997\"" Feb 13 06:07:40.097772 env[1473]: time="2024-02-13T06:07:40.097727650Z" level=info msg="CreateContainer within sandbox \"cbfc66085b9110eb2921381804838b33c014b3c0df2a530ebcc82a370919e32c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a3461b59d8afc2c154a3cb024e238eb20ce51b5cefafd63132b7c1a44fe63999\"" Feb 13 06:07:40.097923 env[1473]: time="2024-02-13T06:07:40.097874484Z" level=info msg="StartContainer for \"a3461b59d8afc2c154a3cb024e238eb20ce51b5cefafd63132b7c1a44fe63999\"" Feb 13 06:07:40.099043 env[1473]: time="2024-02-13T06:07:40.099024290Z" level=info msg="CreateContainer within sandbox \"8b2dadb076d1f16988a2911e21bef8ebd4691497816fed897849aa4a90039f01\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"284e5bf5b69835c0c5c1a5074b94ac9be18ca8c4baa501c6c992a49c5e094d19\"" Feb 13 06:07:40.099203 env[1473]: time="2024-02-13T06:07:40.099189083Z" level=info msg="StartContainer for 
\"284e5bf5b69835c0c5c1a5074b94ac9be18ca8c4baa501c6c992a49c5e094d19\"" Feb 13 06:07:40.105581 systemd[1]: Started cri-containerd-50ba89f0c8df517464f0cda43fda9869ac23948ade70bcd1f4388d71d759a997.scope. Feb 13 06:07:40.106206 systemd[1]: Started cri-containerd-a3461b59d8afc2c154a3cb024e238eb20ce51b5cefafd63132b7c1a44fe63999.scope. Feb 13 06:07:40.107710 systemd[1]: Started cri-containerd-284e5bf5b69835c0c5c1a5074b94ac9be18ca8c4baa501c6c992a49c5e094d19.scope. Feb 13 06:07:40.111365 sshd[1963]: Failed password for root from 154.222.225.117 port 45058 ssh2 Feb 13 06:07:40.130047 env[1473]: time="2024-02-13T06:07:40.130015291Z" level=info msg="StartContainer for \"50ba89f0c8df517464f0cda43fda9869ac23948ade70bcd1f4388d71d759a997\" returns successfully" Feb 13 06:07:40.130171 env[1473]: time="2024-02-13T06:07:40.130155931Z" level=info msg="StartContainer for \"a3461b59d8afc2c154a3cb024e238eb20ce51b5cefafd63132b7c1a44fe63999\" returns successfully" Feb 13 06:07:40.130671 env[1473]: time="2024-02-13T06:07:40.130653954Z" level=info msg="StartContainer for \"284e5bf5b69835c0c5c1a5074b94ac9be18ca8c4baa501c6c992a49c5e094d19\" returns successfully" Feb 13 06:07:40.540201 kubelet[2167]: I0213 06:07:40.540185 2167 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:40.867169 kubelet[2167]: E0213 06:07:40.867121 2167 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-25d9a0518b\" not found" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:40.967383 kubelet[2167]: I0213 06:07:40.967309 2167 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:41.014515 kubelet[2167]: I0213 06:07:41.014464 2167 apiserver.go:52] "Watching apiserver" Feb 13 06:07:41.024624 kubelet[2167]: I0213 06:07:41.024606 2167 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 06:07:41.045717 kubelet[2167]: E0213 
06:07:41.045699 2167 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-25d9a0518b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:41.045805 kubelet[2167]: E0213 06:07:41.045784 2167 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:41.045839 kubelet[2167]: E0213 06:07:41.045816 2167 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-25d9a0518b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:41.971050 systemd[1]: Started sshd@12-145.40.90.207:22-165.154.0.66:48206.service. Feb 13 06:07:42.053711 kubelet[2167]: W0213 06:07:42.053605 2167 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 06:07:42.086941 sshd[1959]: Received disconnect from 139.59.22.185 port 35042:11: Bye Bye [preauth] Feb 13 06:07:42.086941 sshd[1959]: Disconnected from authenticating user root 139.59.22.185 port 35042 [preauth] Feb 13 06:07:42.089635 systemd[1]: sshd@10-145.40.90.207:22-139.59.22.185:35042.service: Deactivated successfully. Feb 13 06:07:42.216945 sshd[1963]: Received disconnect from 154.222.225.117 port 45058:11: Bye Bye [preauth] Feb 13 06:07:42.216945 sshd[1963]: Disconnected from authenticating user root 154.222.225.117 port 45058 [preauth] Feb 13 06:07:42.219325 systemd[1]: sshd@11-145.40.90.207:22-154.222.225.117:45058.service: Deactivated successfully. 
Feb 13 06:07:42.899590 sshd[2477]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.0.66 user=root Feb 13 06:07:43.896275 systemd[1]: Reloading. Feb 13 06:07:43.950724 /usr/lib/systemd/system-generators/torcx-generator[2505]: time="2024-02-13T06:07:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 06:07:43.950740 /usr/lib/systemd/system-generators/torcx-generator[2505]: time="2024-02-13T06:07:43Z" level=info msg="torcx already run" Feb 13 06:07:44.004014 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 06:07:44.004022 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 06:07:44.017526 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 06:07:44.082286 kubelet[2167]: I0213 06:07:44.082235 2167 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 06:07:44.082260 systemd[1]: Stopping kubelet.service... Feb 13 06:07:44.101676 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 06:07:44.101780 systemd[1]: Stopped kubelet.service. Feb 13 06:07:44.102662 systemd[1]: Started kubelet.service. Feb 13 06:07:44.125836 kubelet[2564]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 06:07:44.125836 kubelet[2564]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 06:07:44.125836 kubelet[2564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 06:07:44.125836 kubelet[2564]: I0213 06:07:44.125820 2564 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 06:07:44.128240 kubelet[2564]: I0213 06:07:44.128226 2564 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 13 06:07:44.128240 kubelet[2564]: I0213 06:07:44.128238 2564 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 06:07:44.128359 kubelet[2564]: I0213 06:07:44.128352 2564 server.go:895] "Client rotation is on, will bootstrap in background" Feb 13 06:07:44.129242 kubelet[2564]: I0213 06:07:44.129233 2564 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 06:07:44.129869 kubelet[2564]: I0213 06:07:44.129825 2564 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 06:07:44.163594 kubelet[2564]: I0213 06:07:44.163423 2564 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 06:07:44.163973 kubelet[2564]: I0213 06:07:44.163882 2564 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 06:07:44.164461 kubelet[2564]: I0213 06:07:44.164375 2564 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 06:07:44.164461 kubelet[2564]: I0213 06:07:44.164450 2564 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 06:07:44.165031 kubelet[2564]: I0213 06:07:44.164480 2564 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 06:07:44.165031 kubelet[2564]: I0213 
06:07:44.164567 2564 state_mem.go:36] "Initialized new in-memory state store" Feb 13 06:07:44.165031 kubelet[2564]: I0213 06:07:44.164748 2564 kubelet.go:393] "Attempting to sync node with API server" Feb 13 06:07:44.165031 kubelet[2564]: I0213 06:07:44.164787 2564 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 06:07:44.165031 kubelet[2564]: I0213 06:07:44.164845 2564 kubelet.go:309] "Adding apiserver pod source" Feb 13 06:07:44.165031 kubelet[2564]: I0213 06:07:44.164893 2564 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 06:07:44.166248 kubelet[2564]: I0213 06:07:44.166195 2564 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 06:07:44.167682 kubelet[2564]: I0213 06:07:44.167623 2564 server.go:1232] "Started kubelet" Feb 13 06:07:44.167917 kubelet[2564]: I0213 06:07:44.167886 2564 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 13 06:07:44.168096 kubelet[2564]: I0213 06:07:44.168013 2564 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 06:07:44.168600 kubelet[2564]: I0213 06:07:44.168506 2564 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 06:07:44.168866 kubelet[2564]: E0213 06:07:44.168784 2564 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 06:07:44.168866 kubelet[2564]: E0213 06:07:44.168859 2564 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 06:07:44.169568 kubelet[2564]: I0213 06:07:44.169559 2564 server.go:462] "Adding debug handlers to kubelet server" Feb 13 06:07:44.169701 kubelet[2564]: I0213 06:07:44.169693 2564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 06:07:44.170023 kubelet[2564]: I0213 06:07:44.169733 2564 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 06:07:44.170079 kubelet[2564]: I0213 06:07:44.170060 2564 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 06:07:44.170215 kubelet[2564]: I0213 06:07:44.170204 2564 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 06:07:44.174012 kubelet[2564]: I0213 06:07:44.173997 2564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 06:07:44.174526 kubelet[2564]: I0213 06:07:44.174514 2564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 06:07:44.174578 kubelet[2564]: I0213 06:07:44.174536 2564 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 06:07:44.174578 kubelet[2564]: I0213 06:07:44.174547 2564 kubelet.go:2303] "Starting kubelet main sync loop" Feb 13 06:07:44.174631 kubelet[2564]: E0213 06:07:44.174581 2564 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 06:07:44.189402 kubelet[2564]: I0213 06:07:44.189361 2564 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 06:07:44.189402 kubelet[2564]: I0213 06:07:44.189373 2564 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 06:07:44.189402 kubelet[2564]: I0213 06:07:44.189384 2564 state_mem.go:36] "Initialized new in-memory state store" Feb 13 06:07:44.189525 kubelet[2564]: I0213 06:07:44.189470 2564 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 06:07:44.189525 
kubelet[2564]: I0213 06:07:44.189482 2564 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 06:07:44.189525 kubelet[2564]: I0213 06:07:44.189486 2564 policy_none.go:49] "None policy: Start" Feb 13 06:07:44.189877 kubelet[2564]: I0213 06:07:44.189840 2564 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 06:07:44.189877 kubelet[2564]: I0213 06:07:44.189851 2564 state_mem.go:35] "Initializing new in-memory state store" Feb 13 06:07:44.189928 kubelet[2564]: I0213 06:07:44.189917 2564 state_mem.go:75] "Updated machine memory state" Feb 13 06:07:44.191732 kubelet[2564]: I0213 06:07:44.191678 2564 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 06:07:44.191783 kubelet[2564]: I0213 06:07:44.191778 2564 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 06:07:44.247528 sudo[2607]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 06:07:44.248128 sudo[2607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 13 06:07:44.264527 sshd[2477]: Failed password for root from 165.154.0.66 port 48206 ssh2 Feb 13 06:07:44.275638 kubelet[2564]: I0213 06:07:44.275545 2564 topology_manager.go:215] "Topology Admit Handler" podUID="c4d5b4c536fe7fceb39292d6a85906d7" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.275840 kubelet[2564]: I0213 06:07:44.275779 2564 topology_manager.go:215] "Topology Admit Handler" podUID="36aa6311c39622229c21a2d8a426613c" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.275964 kubelet[2564]: I0213 06:07:44.275906 2564 topology_manager.go:215] "Topology Admit Handler" podUID="7521b36d30ba18742780652acf16d26f" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.277901 kubelet[2564]: I0213 06:07:44.277846 2564 
kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.285805 kubelet[2564]: W0213 06:07:44.285746 2564 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 06:07:44.285805 kubelet[2564]: W0213 06:07:44.285793 2564 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 06:07:44.286556 kubelet[2564]: W0213 06:07:44.286511 2564 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 06:07:44.286703 kubelet[2564]: E0213 06:07:44.286615 2564 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-25d9a0518b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.289906 kubelet[2564]: I0213 06:07:44.289837 2564 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.290043 kubelet[2564]: I0213 06:07:44.289948 2564 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.371327 kubelet[2564]: I0213 06:07:44.371284 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.371327 kubelet[2564]: I0213 06:07:44.371313 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.371327 kubelet[2564]: I0213 06:07:44.371328 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7521b36d30ba18742780652acf16d26f-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-25d9a0518b\" (UID: \"7521b36d30ba18742780652acf16d26f\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.371477 kubelet[2564]: I0213 06:07:44.371339 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4d5b4c536fe7fceb39292d6a85906d7-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-25d9a0518b\" (UID: \"c4d5b4c536fe7fceb39292d6a85906d7\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.371477 kubelet[2564]: I0213 06:07:44.371352 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4d5b4c536fe7fceb39292d6a85906d7-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-25d9a0518b\" (UID: \"c4d5b4c536fe7fceb39292d6a85906d7\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.371477 kubelet[2564]: I0213 06:07:44.371388 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.371477 kubelet[2564]: I0213 06:07:44.371419 2564 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.371477 kubelet[2564]: I0213 06:07:44.371441 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36aa6311c39622229c21a2d8a426613c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-25d9a0518b\" (UID: \"36aa6311c39622229c21a2d8a426613c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.371571 kubelet[2564]: I0213 06:07:44.371454 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4d5b4c536fe7fceb39292d6a85906d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-25d9a0518b\" (UID: \"c4d5b4c536fe7fceb39292d6a85906d7\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:44.632407 sudo[2607]: pam_unix(sudo:session): session closed for user root Feb 13 06:07:45.165924 kubelet[2564]: I0213 06:07:45.165813 2564 apiserver.go:52] "Watching apiserver" Feb 13 06:07:45.191102 sshd[2477]: Received disconnect from 165.154.0.66 port 48206:11: Bye Bye [preauth] Feb 13 06:07:45.191102 sshd[2477]: Disconnected from authenticating user root 165.154.0.66 port 48206 [preauth] Feb 13 06:07:45.192624 kubelet[2564]: W0213 06:07:45.192548 2564 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 06:07:45.192817 kubelet[2564]: E0213 06:07:45.192702 2564 kubelet.go:1890] "Failed 
creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-25d9a0518b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" Feb 13 06:07:45.193717 systemd[1]: sshd@12-145.40.90.207:22-165.154.0.66:48206.service: Deactivated successfully. Feb 13 06:07:45.245717 kubelet[2564]: I0213 06:07:45.245660 2564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-25d9a0518b" podStartSLOduration=3.245620802 podCreationTimestamp="2024-02-13 06:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 06:07:45.239021795 +0000 UTC m=+1.134584570" watchObservedRunningTime="2024-02-13 06:07:45.245620802 +0000 UTC m=+1.141183568" Feb 13 06:07:45.245823 kubelet[2564]: I0213 06:07:45.245734 2564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-25d9a0518b" podStartSLOduration=1.24571572 podCreationTimestamp="2024-02-13 06:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 06:07:45.245467122 +0000 UTC m=+1.141029897" watchObservedRunningTime="2024-02-13 06:07:45.24571572 +0000 UTC m=+1.141278480" Feb 13 06:07:45.252603 kubelet[2564]: I0213 06:07:45.252591 2564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-25d9a0518b" podStartSLOduration=1.252570655 podCreationTimestamp="2024-02-13 06:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 06:07:45.252551376 +0000 UTC m=+1.148114138" watchObservedRunningTime="2024-02-13 06:07:45.252570655 +0000 UTC m=+1.148133413" Feb 13 06:07:45.270857 kubelet[2564]: I0213 06:07:45.270833 2564 desired_state_of_world_populator.go:159] 
"Finished populating initial desired state of world" Feb 13 06:07:45.427895 sudo[1596]: pam_unix(sudo:session): session closed for user root Feb 13 06:07:45.428935 sshd[1593]: pam_unix(sshd:session): session closed for user core Feb 13 06:07:45.430781 systemd[1]: sshd@5-145.40.90.207:22-139.178.68.195:55772.service: Deactivated successfully. Feb 13 06:07:45.431354 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 06:07:45.431480 systemd[1]: session-7.scope: Consumed 2.651s CPU time. Feb 13 06:07:45.431918 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Feb 13 06:07:45.432727 systemd-logind[1461]: Removed session 7. Feb 13 06:07:45.689425 systemd[1]: Started sshd@13-145.40.90.207:22-139.150.74.245:60272.service. Feb 13 06:07:46.586591 sshd[2705]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.150.74.245 user=root Feb 13 06:07:48.835030 sshd[2705]: Failed password for root from 139.150.74.245 port 60272 ssh2 Feb 13 06:07:51.006897 sshd[2705]: Received disconnect from 139.150.74.245 port 60272:11: Bye Bye [preauth] Feb 13 06:07:51.006897 sshd[2705]: Disconnected from authenticating user root 139.150.74.245 port 60272 [preauth] Feb 13 06:07:51.009412 systemd[1]: sshd@13-145.40.90.207:22-139.150.74.245:60272.service: Deactivated successfully. Feb 13 06:07:55.860947 systemd[1]: Started sshd@14-145.40.90.207:22-185.227.136.16:47922.service. Feb 13 06:07:56.895889 sshd[2711]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.227.136.16 user=root Feb 13 06:07:57.841338 kubelet[2564]: I0213 06:07:57.841298 2564 topology_manager.go:215] "Topology Admit Handler" podUID="6bbe3bd3-de94-47e5-95ac-af114b6c5a76" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-fjlfk" Feb 13 06:07:57.847814 systemd[1]: Created slice kubepods-besteffort-pod6bbe3bd3_de94_47e5_95ac_af114b6c5a76.slice. 
Feb 13 06:07:57.855495 kubelet[2564]: I0213 06:07:57.855472 2564 topology_manager.go:215] "Topology Admit Handler" podUID="3b1413b7-8020-477f-b68d-f41c17dc4a5a" podNamespace="kube-system" podName="kube-proxy-jbwwj" Feb 13 06:07:57.856342 kubelet[2564]: I0213 06:07:57.856322 2564 topology_manager.go:215] "Topology Admit Handler" podUID="c01960ea-196b-4b62-a22b-0bdac416c20d" podNamespace="kube-system" podName="cilium-4mk9s" Feb 13 06:07:57.856858 kubelet[2564]: I0213 06:07:57.856845 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-fjlfk\" (UID: \"6bbe3bd3-de94-47e5-95ac-af114b6c5a76\") " pod="kube-system/cilium-operator-6bc8ccdb58-fjlfk" Feb 13 06:07:57.856913 kubelet[2564]: I0213 06:07:57.856890 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s526\" (UniqueName: \"kubernetes.io/projected/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-kube-api-access-7s526\") pod \"cilium-operator-6bc8ccdb58-fjlfk\" (UID: \"6bbe3bd3-de94-47e5-95ac-af114b6c5a76\") " pod="kube-system/cilium-operator-6bc8ccdb58-fjlfk" Feb 13 06:07:57.860270 kubelet[2564]: I0213 06:07:57.860257 2564 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 06:07:57.860511 env[1473]: time="2024-02-13T06:07:57.860482996Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 06:07:57.860717 kubelet[2564]: I0213 06:07:57.860602 2564 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 06:07:57.865542 systemd[1]: Created slice kubepods-besteffort-pod3b1413b7_8020_477f_b68d_f41c17dc4a5a.slice. 
Feb 13 06:07:57.881959 systemd[1]: Created slice kubepods-burstable-podc01960ea_196b_4b62_a22b_0bdac416c20d.slice. Feb 13 06:07:57.957976 kubelet[2564]: I0213 06:07:57.957851 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-bpf-maps\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.957976 kubelet[2564]: I0213 06:07:57.957968 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-config-path\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.958439 kubelet[2564]: I0213 06:07:57.958097 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vjt\" (UniqueName: \"kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-kube-api-access-92vjt\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.958439 kubelet[2564]: I0213 06:07:57.958256 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-etc-cni-netd\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.958439 kubelet[2564]: I0213 06:07:57.958414 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-hubble-tls\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 
06:07:57.958761 kubelet[2564]: I0213 06:07:57.958592 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b1413b7-8020-477f-b68d-f41c17dc4a5a-kube-proxy\") pod \"kube-proxy-jbwwj\" (UID: \"3b1413b7-8020-477f-b68d-f41c17dc4a5a\") " pod="kube-system/kube-proxy-jbwwj" Feb 13 06:07:57.958761 kubelet[2564]: I0213 06:07:57.958712 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-run\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.958957 kubelet[2564]: I0213 06:07:57.958781 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-cgroup\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.958957 kubelet[2564]: I0213 06:07:57.958916 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c01960ea-196b-4b62-a22b-0bdac416c20d-clustermesh-secrets\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.959232 kubelet[2564]: I0213 06:07:57.959182 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-hostproc\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.959492 kubelet[2564]: I0213 06:07:57.959421 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-xtables-lock\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.959710 kubelet[2564]: I0213 06:07:57.959530 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r5d8\" (UniqueName: \"kubernetes.io/projected/3b1413b7-8020-477f-b68d-f41c17dc4a5a-kube-api-access-2r5d8\") pod \"kube-proxy-jbwwj\" (UID: \"3b1413b7-8020-477f-b68d-f41c17dc4a5a\") " pod="kube-system/kube-proxy-jbwwj" Feb 13 06:07:57.959710 kubelet[2564]: I0213 06:07:57.959638 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-lib-modules\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.960005 kubelet[2564]: I0213 06:07:57.959935 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b1413b7-8020-477f-b68d-f41c17dc4a5a-xtables-lock\") pod \"kube-proxy-jbwwj\" (UID: \"3b1413b7-8020-477f-b68d-f41c17dc4a5a\") " pod="kube-system/kube-proxy-jbwwj" Feb 13 06:07:57.960159 kubelet[2564]: I0213 06:07:57.960071 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cni-path\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.960159 kubelet[2564]: I0213 06:07:57.960143 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-host-proc-sys-net\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.960407 kubelet[2564]: I0213 06:07:57.960340 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b1413b7-8020-477f-b68d-f41c17dc4a5a-lib-modules\") pod \"kube-proxy-jbwwj\" (UID: \"3b1413b7-8020-477f-b68d-f41c17dc4a5a\") " pod="kube-system/kube-proxy-jbwwj" Feb 13 06:07:57.960513 kubelet[2564]: I0213 06:07:57.960463 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-host-proc-sys-kernel\") pod \"cilium-4mk9s\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " pod="kube-system/cilium-4mk9s" Feb 13 06:07:57.975693 kubelet[2564]: E0213 06:07:57.975603 2564 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 06:07:57.975693 kubelet[2564]: E0213 06:07:57.975665 2564 projected.go:198] Error preparing data for projected volume kube-api-access-7s526 for pod kube-system/cilium-operator-6bc8ccdb58-fjlfk: configmap "kube-root-ca.crt" not found Feb 13 06:07:57.976023 kubelet[2564]: E0213 06:07:57.975799 2564 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-kube-api-access-7s526 podName:6bbe3bd3-de94-47e5-95ac-af114b6c5a76 nodeName:}" failed. No retries permitted until 2024-02-13 06:07:58.475751188 +0000 UTC m=+14.371313991 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7s526" (UniqueName: "kubernetes.io/projected/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-kube-api-access-7s526") pod "cilium-operator-6bc8ccdb58-fjlfk" (UID: "6bbe3bd3-de94-47e5-95ac-af114b6c5a76") : configmap "kube-root-ca.crt" not found Feb 13 06:07:57.994523 update_engine[1463]: I0213 06:07:57.994422 1463 update_attempter.cc:509] Updating boot flags... Feb 13 06:07:58.075495 kubelet[2564]: E0213 06:07:58.075446 2564 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 06:07:58.075495 kubelet[2564]: E0213 06:07:58.075487 2564 projected.go:198] Error preparing data for projected volume kube-api-access-2r5d8 for pod kube-system/kube-proxy-jbwwj: configmap "kube-root-ca.crt" not found Feb 13 06:07:58.075847 kubelet[2564]: E0213 06:07:58.075570 2564 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b1413b7-8020-477f-b68d-f41c17dc4a5a-kube-api-access-2r5d8 podName:3b1413b7-8020-477f-b68d-f41c17dc4a5a nodeName:}" failed. No retries permitted until 2024-02-13 06:07:58.575539391 +0000 UTC m=+14.471102190 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2r5d8" (UniqueName: "kubernetes.io/projected/3b1413b7-8020-477f-b68d-f41c17dc4a5a-kube-api-access-2r5d8") pod "kube-proxy-jbwwj" (UID: "3b1413b7-8020-477f-b68d-f41c17dc4a5a") : configmap "kube-root-ca.crt" not found
Feb 13 06:07:58.076161 kubelet[2564]: E0213 06:07:58.076105 2564 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 06:07:58.076161 kubelet[2564]: E0213 06:07:58.076147 2564 projected.go:198] Error preparing data for projected volume kube-api-access-92vjt for pod kube-system/cilium-4mk9s: configmap "kube-root-ca.crt" not found
Feb 13 06:07:58.076373 kubelet[2564]: E0213 06:07:58.076250 2564 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-kube-api-access-92vjt podName:c01960ea-196b-4b62-a22b-0bdac416c20d nodeName:}" failed. No retries permitted until 2024-02-13 06:07:58.576211702 +0000 UTC m=+14.471774492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-92vjt" (UniqueName: "kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-kube-api-access-92vjt") pod "cilium-4mk9s" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d") : configmap "kube-root-ca.crt" not found
Feb 13 06:07:58.765116 env[1473]: time="2024-02-13T06:07:58.765023578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-fjlfk,Uid:6bbe3bd3-de94-47e5-95ac-af114b6c5a76,Namespace:kube-system,Attempt:0,}"
Feb 13 06:07:58.768353 env[1473]: time="2024-02-13T06:07:58.768264556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jbwwj,Uid:3b1413b7-8020-477f-b68d-f41c17dc4a5a,Namespace:kube-system,Attempt:0,}"
Feb 13 06:07:58.785596 env[1473]: time="2024-02-13T06:07:58.785526873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mk9s,Uid:c01960ea-196b-4b62-a22b-0bdac416c20d,Namespace:kube-system,Attempt:0,}"
Feb 13 06:07:59.014549 env[1473]: time="2024-02-13T06:07:59.014520946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 06:07:59.014549 env[1473]: time="2024-02-13T06:07:59.014540449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 06:07:59.014549 env[1473]: time="2024-02-13T06:07:59.014550299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 06:07:59.014815 env[1473]: time="2024-02-13T06:07:59.014618503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/07f1e2e024e0b19c6da6e8958eacfef5585e9be66aae8f76fbf38a9cfed1860a pid=2751 runtime=io.containerd.runc.v2
Feb 13 06:07:59.015192 env[1473]: time="2024-02-13T06:07:59.015142808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 06:07:59.015192 env[1473]: time="2024-02-13T06:07:59.015165469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 06:07:59.015192 env[1473]: time="2024-02-13T06:07:59.015175654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 06:07:59.015269 env[1473]: time="2024-02-13T06:07:59.015212511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 06:07:59.015269 env[1473]: time="2024-02-13T06:07:59.015236738Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240 pid=2762 runtime=io.containerd.runc.v2
Feb 13 06:07:59.015269 env[1473]: time="2024-02-13T06:07:59.015235288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 06:07:59.015269 env[1473]: time="2024-02-13T06:07:59.015246817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 06:07:59.015370 env[1473]: time="2024-02-13T06:07:59.015328112Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e pid=2763 runtime=io.containerd.runc.v2
Feb 13 06:07:59.021447 systemd[1]: Started cri-containerd-07f1e2e024e0b19c6da6e8958eacfef5585e9be66aae8f76fbf38a9cfed1860a.scope.
Feb 13 06:07:59.022125 systemd[1]: Started cri-containerd-296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240.scope.
Feb 13 06:07:59.022910 systemd[1]: Started cri-containerd-67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e.scope.
Feb 13 06:07:59.032515 env[1473]: time="2024-02-13T06:07:59.032488869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mk9s,Uid:c01960ea-196b-4b62-a22b-0bdac416c20d,Namespace:kube-system,Attempt:0,} returns sandbox id \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\""
Feb 13 06:07:59.033234 env[1473]: time="2024-02-13T06:07:59.033216649Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 06:07:59.033770 env[1473]: time="2024-02-13T06:07:59.033752107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jbwwj,Uid:3b1413b7-8020-477f-b68d-f41c17dc4a5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"07f1e2e024e0b19c6da6e8958eacfef5585e9be66aae8f76fbf38a9cfed1860a\""
Feb 13 06:07:59.034810 env[1473]: time="2024-02-13T06:07:59.034796379Z" level=info msg="CreateContainer within sandbox \"07f1e2e024e0b19c6da6e8958eacfef5585e9be66aae8f76fbf38a9cfed1860a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 06:07:59.040015 env[1473]: time="2024-02-13T06:07:59.039996896Z" level=info msg="CreateContainer within sandbox \"07f1e2e024e0b19c6da6e8958eacfef5585e9be66aae8f76fbf38a9cfed1860a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5b15cc725179009cd72e43be45230e0303e2a657c7042944770deec6cd0509f7\""
Feb 13 06:07:59.040279 env[1473]: time="2024-02-13T06:07:59.040260301Z" level=info msg="StartContainer for \"5b15cc725179009cd72e43be45230e0303e2a657c7042944770deec6cd0509f7\""
Feb 13 06:07:59.045245 env[1473]: time="2024-02-13T06:07:59.045218557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-fjlfk,Uid:6bbe3bd3-de94-47e5-95ac-af114b6c5a76,Namespace:kube-system,Attempt:0,} returns sandbox id \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\""
Feb 13 06:07:59.047923 systemd[1]: Started cri-containerd-5b15cc725179009cd72e43be45230e0303e2a657c7042944770deec6cd0509f7.scope.
Feb 13 06:07:59.061501 env[1473]: time="2024-02-13T06:07:59.061452970Z" level=info msg="StartContainer for \"5b15cc725179009cd72e43be45230e0303e2a657c7042944770deec6cd0509f7\" returns successfully"
Feb 13 06:07:59.184226 sshd[2711]: Failed password for root from 185.227.136.16 port 47922 ssh2
Feb 13 06:07:59.244130 kubelet[2564]: I0213 06:07:59.244072 2564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jbwwj" podStartSLOduration=2.243974906 podCreationTimestamp="2024-02-13 06:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 06:07:59.243556253 +0000 UTC m=+15.139119076" watchObservedRunningTime="2024-02-13 06:07:59.243974906 +0000 UTC m=+15.139537716"
Feb 13 06:08:01.344778 sshd[2711]: Received disconnect from 185.227.136.16 port 47922:11: Bye Bye [preauth]
Feb 13 06:08:01.344778 sshd[2711]: Disconnected from authenticating user root 185.227.136.16 port 47922 [preauth]
Feb 13 06:08:01.345465 systemd[1]: sshd@14-145.40.90.207:22-185.227.136.16:47922.service: Deactivated successfully.
Feb 13 06:08:02.944624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1270924142.mount: Deactivated successfully.
Feb 13 06:08:04.633839 env[1473]: time="2024-02-13T06:08:04.633798097Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:08:04.634543 env[1473]: time="2024-02-13T06:08:04.634531547Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:08:04.635333 env[1473]: time="2024-02-13T06:08:04.635323151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:08:04.635646 env[1473]: time="2024-02-13T06:08:04.635632232Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 06:08:04.636039 env[1473]: time="2024-02-13T06:08:04.635998858Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 06:08:04.636759 env[1473]: time="2024-02-13T06:08:04.636726569Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 06:08:04.641020 env[1473]: time="2024-02-13T06:08:04.641003151Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\""
Feb 13 06:08:04.641282 env[1473]: time="2024-02-13T06:08:04.641263369Z" level=info msg="StartContainer for \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\""
Feb 13 06:08:04.642032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535669963.mount: Deactivated successfully.
Feb 13 06:08:04.650570 systemd[1]: Started cri-containerd-fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309.scope.
Feb 13 06:08:04.661433 env[1473]: time="2024-02-13T06:08:04.661409941Z" level=info msg="StartContainer for \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\" returns successfully"
Feb 13 06:08:04.665962 systemd[1]: cri-containerd-fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309.scope: Deactivated successfully.
Feb 13 06:08:05.645258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309-rootfs.mount: Deactivated successfully.
Feb 13 06:08:05.787041 env[1473]: time="2024-02-13T06:08:05.786933631Z" level=info msg="shim disconnected" id=fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309
Feb 13 06:08:05.787992 env[1473]: time="2024-02-13T06:08:05.787041708Z" level=warning msg="cleaning up after shim disconnected" id=fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309 namespace=k8s.io
Feb 13 06:08:05.787992 env[1473]: time="2024-02-13T06:08:05.787072904Z" level=info msg="cleaning up dead shim"
Feb 13 06:08:05.794962 env[1473]: time="2024-02-13T06:08:05.794906793Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:08:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3073 runtime=io.containerd.runc.v2\n"
Feb 13 06:08:06.233240 systemd[1]: Started sshd@15-145.40.90.207:22-124.221.128.115:57526.service.
Feb 13 06:08:06.247468 env[1473]: time="2024-02-13T06:08:06.247442970Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 06:08:06.252501 env[1473]: time="2024-02-13T06:08:06.252457537Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\""
Feb 13 06:08:06.252762 env[1473]: time="2024-02-13T06:08:06.252748758Z" level=info msg="StartContainer for \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\""
Feb 13 06:08:06.253449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871142624.mount: Deactivated successfully.
Feb 13 06:08:06.261336 systemd[1]: Started cri-containerd-56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a.scope.
Feb 13 06:08:06.272851 env[1473]: time="2024-02-13T06:08:06.272801158Z" level=info msg="StartContainer for \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\" returns successfully"
Feb 13 06:08:06.279360 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 06:08:06.279497 systemd[1]: Stopped systemd-sysctl.service.
Feb 13 06:08:06.279634 systemd[1]: Stopping systemd-sysctl.service...
Feb 13 06:08:06.280498 systemd[1]: Starting systemd-sysctl.service...
Feb 13 06:08:06.280694 systemd[1]: cri-containerd-56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a.scope: Deactivated successfully.
Feb 13 06:08:06.284837 systemd[1]: Finished systemd-sysctl.service.
Feb 13 06:08:06.344356 env[1473]: time="2024-02-13T06:08:06.344231199Z" level=info msg="shim disconnected" id=56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a
Feb 13 06:08:06.344737 env[1473]: time="2024-02-13T06:08:06.344355436Z" level=warning msg="cleaning up after shim disconnected" id=56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a namespace=k8s.io
Feb 13 06:08:06.344737 env[1473]: time="2024-02-13T06:08:06.344403984Z" level=info msg="cleaning up dead shim"
Feb 13 06:08:06.361929 env[1473]: time="2024-02-13T06:08:06.361791068Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:08:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3139 runtime=io.containerd.runc.v2\n"
Feb 13 06:08:06.640728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a-rootfs.mount: Deactivated successfully.
Feb 13 06:08:06.886065 env[1473]: time="2024-02-13T06:08:06.886044797Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:08:06.886659 env[1473]: time="2024-02-13T06:08:06.886649848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:08:06.887233 env[1473]: time="2024-02-13T06:08:06.887221589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 06:08:06.887616 env[1473]: time="2024-02-13T06:08:06.887604453Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 06:08:06.888613 env[1473]: time="2024-02-13T06:08:06.888557834Z" level=info msg="CreateContainer within sandbox \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 06:08:06.893932 env[1473]: time="2024-02-13T06:08:06.893879746Z" level=info msg="CreateContainer within sandbox \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\""
Feb 13 06:08:06.894217 env[1473]: time="2024-02-13T06:08:06.894205647Z" level=info msg="StartContainer for \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\""
Feb 13 06:08:06.902980 systemd[1]: Started cri-containerd-b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85.scope.
Feb 13 06:08:06.914948 env[1473]: time="2024-02-13T06:08:06.914922968Z" level=info msg="StartContainer for \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\" returns successfully"
Feb 13 06:08:07.254544 env[1473]: time="2024-02-13T06:08:07.254496663Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 06:08:07.262865 env[1473]: time="2024-02-13T06:08:07.262805977Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\""
Feb 13 06:08:07.263194 env[1473]: time="2024-02-13T06:08:07.263178415Z" level=info msg="StartContainer for \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\""
Feb 13 06:08:07.266384 kubelet[2564]: I0213 06:08:07.266355 2564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-fjlfk" podStartSLOduration=2.424361281 podCreationTimestamp="2024-02-13 06:07:57 +0000 UTC" firstStartedPulling="2024-02-13 06:07:59.045778825 +0000 UTC m=+14.941341582" lastFinishedPulling="2024-02-13 06:08:06.887727046 +0000 UTC m=+22.783289801" observedRunningTime="2024-02-13 06:08:07.265896533 +0000 UTC m=+23.161459294" watchObservedRunningTime="2024-02-13 06:08:07.2663095 +0000 UTC m=+23.161872253"
Feb 13 06:08:07.274102 systemd[1]: Started cri-containerd-fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58.scope.
Feb 13 06:08:07.288007 env[1473]: time="2024-02-13T06:08:07.287958948Z" level=info msg="StartContainer for \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\" returns successfully"
Feb 13 06:08:07.289751 systemd[1]: cri-containerd-fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58.scope: Deactivated successfully.
Feb 13 06:08:07.451764 env[1473]: time="2024-02-13T06:08:07.451700778Z" level=info msg="shim disconnected" id=fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58
Feb 13 06:08:07.451764 env[1473]: time="2024-02-13T06:08:07.451732040Z" level=warning msg="cleaning up after shim disconnected" id=fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58 namespace=k8s.io
Feb 13 06:08:07.451764 env[1473]: time="2024-02-13T06:08:07.451738829Z" level=info msg="cleaning up dead shim"
Feb 13 06:08:07.455358 env[1473]: time="2024-02-13T06:08:07.455296552Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:08:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3244 runtime=io.containerd.runc.v2\n"
Feb 13 06:08:08.262161 env[1473]: time="2024-02-13T06:08:08.262032420Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 06:08:08.274862 env[1473]: time="2024-02-13T06:08:08.274813283Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\""
Feb 13 06:08:08.275164 env[1473]: time="2024-02-13T06:08:08.275153042Z" level=info msg="StartContainer for \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\""
Feb 13 06:08:08.275444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264295542.mount: Deactivated successfully.
Feb 13 06:08:08.283625 systemd[1]: Started cri-containerd-3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7.scope.
Feb 13 06:08:08.294300 env[1473]: time="2024-02-13T06:08:08.294239849Z" level=info msg="StartContainer for \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\" returns successfully"
Feb 13 06:08:08.294670 systemd[1]: cri-containerd-3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7.scope: Deactivated successfully.
Feb 13 06:08:08.304167 env[1473]: time="2024-02-13T06:08:08.304108976Z" level=info msg="shim disconnected" id=3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7
Feb 13 06:08:08.304167 env[1473]: time="2024-02-13T06:08:08.304138151Z" level=warning msg="cleaning up after shim disconnected" id=3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7 namespace=k8s.io
Feb 13 06:08:08.304167 env[1473]: time="2024-02-13T06:08:08.304146093Z" level=info msg="cleaning up dead shim"
Feb 13 06:08:08.307976 env[1473]: time="2024-02-13T06:08:08.307930832Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:08:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3298 runtime=io.containerd.runc.v2\n"
Feb 13 06:08:08.645030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7-rootfs.mount: Deactivated successfully.
Feb 13 06:08:08.932920 sshd[3086]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root
Feb 13 06:08:09.273464 env[1473]: time="2024-02-13T06:08:09.273366587Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 06:08:09.294455 env[1473]: time="2024-02-13T06:08:09.294326988Z" level=info msg="CreateContainer within sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\""
Feb 13 06:08:09.295342 env[1473]: time="2024-02-13T06:08:09.295246659Z" level=info msg="StartContainer for \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\""
Feb 13 06:08:09.317344 systemd[1]: Started cri-containerd-b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f.scope.
Feb 13 06:08:09.347289 env[1473]: time="2024-02-13T06:08:09.347235789Z" level=info msg="StartContainer for \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\" returns successfully"
Feb 13 06:08:09.431342 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 06:08:09.492920 kubelet[2564]: I0213 06:08:09.492907 2564 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 13 06:08:09.503945 kubelet[2564]: I0213 06:08:09.503928 2564 topology_manager.go:215] "Topology Admit Handler" podUID="53bbb88f-a288-4369-85dc-6d3dcbd92cbe" podNamespace="kube-system" podName="coredns-5dd5756b68-v5gmr"
Feb 13 06:08:09.504849 kubelet[2564]: I0213 06:08:09.504836 2564 topology_manager.go:215] "Topology Admit Handler" podUID="53942332-d076-4f15-b7ac-727f17b6f50b" podNamespace="kube-system" podName="coredns-5dd5756b68-qq9bt"
Feb 13 06:08:09.506940 systemd[1]: Created slice kubepods-burstable-pod53bbb88f_a288_4369_85dc_6d3dcbd92cbe.slice.
Feb 13 06:08:09.509033 systemd[1]: Created slice kubepods-burstable-pod53942332_d076_4f15_b7ac_727f17b6f50b.slice.
Feb 13 06:08:09.571286 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 06:08:09.638666 kubelet[2564]: I0213 06:08:09.638619 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53942332-d076-4f15-b7ac-727f17b6f50b-config-volume\") pod \"coredns-5dd5756b68-qq9bt\" (UID: \"53942332-d076-4f15-b7ac-727f17b6f50b\") " pod="kube-system/coredns-5dd5756b68-qq9bt"
Feb 13 06:08:09.638666 kubelet[2564]: I0213 06:08:09.638645 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljt4\" (UniqueName: \"kubernetes.io/projected/53bbb88f-a288-4369-85dc-6d3dcbd92cbe-kube-api-access-dljt4\") pod \"coredns-5dd5756b68-v5gmr\" (UID: \"53bbb88f-a288-4369-85dc-6d3dcbd92cbe\") " pod="kube-system/coredns-5dd5756b68-v5gmr"
Feb 13 06:08:09.638666 kubelet[2564]: I0213 06:08:09.638660 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53bbb88f-a288-4369-85dc-6d3dcbd92cbe-config-volume\") pod \"coredns-5dd5756b68-v5gmr\" (UID: \"53bbb88f-a288-4369-85dc-6d3dcbd92cbe\") " pod="kube-system/coredns-5dd5756b68-v5gmr"
Feb 13 06:08:09.638797 kubelet[2564]: I0213 06:08:09.638678 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bclp9\" (UniqueName: \"kubernetes.io/projected/53942332-d076-4f15-b7ac-727f17b6f50b-kube-api-access-bclp9\") pod \"coredns-5dd5756b68-qq9bt\" (UID: \"53942332-d076-4f15-b7ac-727f17b6f50b\") " pod="kube-system/coredns-5dd5756b68-qq9bt"
Feb 13 06:08:09.809922 env[1473]: time="2024-02-13T06:08:09.809785590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-v5gmr,Uid:53bbb88f-a288-4369-85dc-6d3dcbd92cbe,Namespace:kube-system,Attempt:0,}"
Feb 13 06:08:09.811995 env[1473]: time="2024-02-13T06:08:09.811879561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qq9bt,Uid:53942332-d076-4f15-b7ac-727f17b6f50b,Namespace:kube-system,Attempt:0,}"
Feb 13 06:08:10.296120 kubelet[2564]: I0213 06:08:10.296102 2564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4mk9s" podStartSLOduration=7.693234075 podCreationTimestamp="2024-02-13 06:07:57 +0000 UTC" firstStartedPulling="2024-02-13 06:07:59.033009133 +0000 UTC m=+14.928571887" lastFinishedPulling="2024-02-13 06:08:04.63585262 +0000 UTC m=+20.531415376" observedRunningTime="2024-02-13 06:08:10.295673371 +0000 UTC m=+26.191236127" watchObservedRunningTime="2024-02-13 06:08:10.296077564 +0000 UTC m=+26.191640316"
Feb 13 06:08:11.174066 systemd-networkd[1319]: cilium_host: Link UP
Feb 13 06:08:11.174145 systemd-networkd[1319]: cilium_net: Link UP
Feb 13 06:08:11.181282 systemd-networkd[1319]: cilium_net: Gained carrier
Feb 13 06:08:11.188394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 13 06:08:11.188426 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 13 06:08:11.188489 systemd-networkd[1319]: cilium_host: Gained carrier
Feb 13 06:08:11.200339 sshd[3086]: Failed password for root from 124.221.128.115 port 57526 ssh2
Feb 13 06:08:11.234058 systemd-networkd[1319]: cilium_vxlan: Link UP
Feb 13 06:08:11.234061 systemd-networkd[1319]: cilium_vxlan: Gained carrier
Feb 13 06:08:11.345328 systemd-networkd[1319]: cilium_host: Gained IPv6LL
Feb 13 06:08:11.364299 kernel: NET: Registered PF_ALG protocol family
Feb 13 06:08:11.892851 systemd-networkd[1319]: lxc_health: Link UP
Feb 13 06:08:11.919153 systemd-networkd[1319]: lxc_health: Gained carrier
Feb 13 06:08:11.919284 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 13 06:08:12.137395 systemd-networkd[1319]: cilium_net: Gained IPv6LL
Feb 13 06:08:12.355235 systemd-networkd[1319]: lxc21a380cd1e03: Link UP
Feb 13 06:08:12.371289 kernel: eth0: renamed from tmp45ff2
Feb 13 06:08:12.400343 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 13 06:08:12.400474 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc21a380cd1e03: link becomes ready
Feb 13 06:08:12.414869 systemd-networkd[1319]: lxc21a380cd1e03: Gained carrier
Feb 13 06:08:12.419352 kernel: eth0: renamed from tmpda8b3
Feb 13 06:08:12.438213 systemd-networkd[1319]: tmpda8b3: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 06:08:12.438349 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcea44728d7818: link becomes ready
Feb 13 06:08:12.438301 systemd-networkd[1319]: tmpda8b3: Cannot enable IPv6, ignoring: No such file or directory
Feb 13 06:08:12.438387 systemd-networkd[1319]: tmpda8b3: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory
Feb 13 06:08:12.438395 systemd-networkd[1319]: tmpda8b3: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory
Feb 13 06:08:12.438401 systemd-networkd[1319]: tmpda8b3: Cannot set IPv6 proxy NDP, ignoring: No such file or directory
Feb 13 06:08:12.438424 systemd-networkd[1319]: tmpda8b3: Cannot enable promote_secondaries for interface, ignoring: No such file or directory
Feb 13 06:08:12.438609 systemd-networkd[1319]: lxcea44728d7818: Link UP
Feb 13 06:08:12.439135 systemd-networkd[1319]: lxcea44728d7818: Gained carrier
Feb 13 06:08:13.097442 systemd-networkd[1319]: cilium_vxlan: Gained IPv6LL
Feb 13 06:08:13.378700 sshd[3086]: Received disconnect from 124.221.128.115 port 57526:11: Bye Bye [preauth]
Feb 13 06:08:13.378700 sshd[3086]: Disconnected from authenticating user root 124.221.128.115 port 57526 [preauth]
Feb 13 06:08:13.379327 systemd[1]: sshd@15-145.40.90.207:22-124.221.128.115:57526.service: Deactivated successfully.
Feb 13 06:08:13.737432 systemd-networkd[1319]: lxc_health: Gained IPv6LL
Feb 13 06:08:13.866425 systemd-networkd[1319]: lxcea44728d7818: Gained IPv6LL
Feb 13 06:08:14.441452 systemd-networkd[1319]: lxc21a380cd1e03: Gained IPv6LL
Feb 13 06:08:14.731675 env[1473]: time="2024-02-13T06:08:14.731636229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 06:08:14.731675 env[1473]: time="2024-02-13T06:08:14.731658245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 06:08:14.731675 env[1473]: time="2024-02-13T06:08:14.731665351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 06:08:14.731989 env[1473]: time="2024-02-13T06:08:14.731732100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45ff229f9bbce5bc5e76295a3d04b637758a41a0b20cd482e8628ad58fc9cfd0 pid=3988 runtime=io.containerd.runc.v2
Feb 13 06:08:14.731989 env[1473]: time="2024-02-13T06:08:14.731864555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 06:08:14.731989 env[1473]: time="2024-02-13T06:08:14.731881350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 06:08:14.731989 env[1473]: time="2024-02-13T06:08:14.731888184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 06:08:14.731989 env[1473]: time="2024-02-13T06:08:14.731941835Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da8b3d81efeb41b71a261c1d9ce0ee3d0cb63a6a1142ca99267c0eae2123acc2 pid=3990 runtime=io.containerd.runc.v2
Feb 13 06:08:14.739556 systemd[1]: Started cri-containerd-45ff229f9bbce5bc5e76295a3d04b637758a41a0b20cd482e8628ad58fc9cfd0.scope.
Feb 13 06:08:14.740280 systemd[1]: Started cri-containerd-da8b3d81efeb41b71a261c1d9ce0ee3d0cb63a6a1142ca99267c0eae2123acc2.scope.
Feb 13 06:08:14.760726 env[1473]: time="2024-02-13T06:08:14.760698069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qq9bt,Uid:53942332-d076-4f15-b7ac-727f17b6f50b,Namespace:kube-system,Attempt:0,} returns sandbox id \"da8b3d81efeb41b71a261c1d9ce0ee3d0cb63a6a1142ca99267c0eae2123acc2\""
Feb 13 06:08:14.760726 env[1473]: time="2024-02-13T06:08:14.760707868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-v5gmr,Uid:53bbb88f-a288-4369-85dc-6d3dcbd92cbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ff229f9bbce5bc5e76295a3d04b637758a41a0b20cd482e8628ad58fc9cfd0\""
Feb 13 06:08:14.761873 env[1473]: time="2024-02-13T06:08:14.761859930Z" level=info msg="CreateContainer within sandbox \"da8b3d81efeb41b71a261c1d9ce0ee3d0cb63a6a1142ca99267c0eae2123acc2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 06:08:14.761873 env[1473]: time="2024-02-13T06:08:14.761859012Z" level=info msg="CreateContainer within sandbox \"45ff229f9bbce5bc5e76295a3d04b637758a41a0b20cd482e8628ad58fc9cfd0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 06:08:14.766478 env[1473]: time="2024-02-13T06:08:14.766435224Z" level=info msg="CreateContainer within sandbox \"45ff229f9bbce5bc5e76295a3d04b637758a41a0b20cd482e8628ad58fc9cfd0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5929e4cba3c85a5e235e24722d2bf1b99eb9510f45f61b4127491b0cd5de4af\""
Feb 13 06:08:14.766719 env[1473]: time="2024-02-13T06:08:14.766678473Z" level=info msg="StartContainer for \"a5929e4cba3c85a5e235e24722d2bf1b99eb9510f45f61b4127491b0cd5de4af\""
Feb 13 06:08:14.767034 env[1473]: time="2024-02-13T06:08:14.766991131Z" level=info msg="CreateContainer within sandbox \"da8b3d81efeb41b71a261c1d9ce0ee3d0cb63a6a1142ca99267c0eae2123acc2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38ee967026fb97debb7765deb82c9f0a5cc2a34d5aa197b24762a1d53272d442\""
Feb 13 06:08:14.767156 env[1473]: time="2024-02-13T06:08:14.767141414Z" level=info msg="StartContainer for \"38ee967026fb97debb7765deb82c9f0a5cc2a34d5aa197b24762a1d53272d442\""
Feb 13 06:08:14.804066 systemd[1]: Started cri-containerd-a5929e4cba3c85a5e235e24722d2bf1b99eb9510f45f61b4127491b0cd5de4af.scope.
Feb 13 06:08:14.811817 systemd[1]: Started cri-containerd-38ee967026fb97debb7765deb82c9f0a5cc2a34d5aa197b24762a1d53272d442.scope.
Feb 13 06:08:14.836602 env[1473]: time="2024-02-13T06:08:14.836562281Z" level=info msg="StartContainer for \"a5929e4cba3c85a5e235e24722d2bf1b99eb9510f45f61b4127491b0cd5de4af\" returns successfully"
Feb 13 06:08:14.837914 env[1473]: time="2024-02-13T06:08:14.837865670Z" level=info msg="StartContainer for \"38ee967026fb97debb7765deb82c9f0a5cc2a34d5aa197b24762a1d53272d442\" returns successfully"
Feb 13 06:08:15.307621 kubelet[2564]: I0213 06:08:15.307505 2564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-v5gmr" podStartSLOduration=18.307388408 podCreationTimestamp="2024-02-13 06:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 06:08:15.306626909 +0000 UTC m=+31.202189743" watchObservedRunningTime="2024-02-13 06:08:15.307388408 +0000 UTC m=+31.202951217"
Feb 13 06:08:15.329357 kubelet[2564]: I0213 06:08:15.329271 2564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qq9bt" podStartSLOduration=18.329150966 podCreationTimestamp="2024-02-13 06:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 06:08:15.327924098 +0000 UTC m=+31.223486936" watchObservedRunningTime="2024-02-13 06:08:15.329150966 +0000 UTC m=+31.224713772"
Feb 13 06:08:22.987069 systemd[1]: Started sshd@16-145.40.90.207:22-103.86.198.162:56659.service.
Feb 13 06:08:24.330773 sshd[4158]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.86.198.162 user=root
Feb 13 06:08:26.127628 sshd[4158]: Failed password for root from 103.86.198.162 port 56659 ssh2
Feb 13 06:08:26.713618 sshd[4158]: Received disconnect from 103.86.198.162 port 56659:11: Bye Bye [preauth]
Feb 13 06:08:26.713618 sshd[4158]: Disconnected from authenticating user root 103.86.198.162 port 56659 [preauth]
Feb 13 06:08:26.716174 systemd[1]: sshd@16-145.40.90.207:22-103.86.198.162:56659.service: Deactivated successfully.
Feb 13 06:08:34.901704 systemd[1]: Started sshd@17-145.40.90.207:22-122.155.0.205:43089.service.
Feb 13 06:08:36.074052 sshd[4170]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=122.155.0.205 user=root
Feb 13 06:08:36.074299 sshd[4170]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 06:08:37.519489 sshd[4170]: Failed password for root from 122.155.0.205 port 43089 ssh2
Feb 13 06:08:37.794172 systemd[1]: Started sshd@18-145.40.90.207:22-139.59.22.185:48576.service.
Feb 13 06:08:38.420835 sshd[4170]: Received disconnect from 122.155.0.205 port 43089:11: Bye Bye [preauth]
Feb 13 06:08:38.420835 sshd[4170]: Disconnected from authenticating user root 122.155.0.205 port 43089 [preauth]
Feb 13 06:08:38.423375 systemd[1]: sshd@17-145.40.90.207:22-122.155.0.205:43089.service: Deactivated successfully.
Feb 13 06:08:39.188473 sshd[4173]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.22.185 user=root
Feb 13 06:08:41.045429 sshd[4173]: Failed password for root from 139.59.22.185 port 48576 ssh2
Feb 13 06:08:41.579754 sshd[4173]: Received disconnect from 139.59.22.185 port 48576:11: Bye Bye [preauth]
Feb 13 06:08:41.579754 sshd[4173]: Disconnected from authenticating user root 139.59.22.185 port 48576 [preauth]
Feb 13 06:08:41.582232 systemd[1]: sshd@18-145.40.90.207:22-139.59.22.185:48576.service: Deactivated successfully.
Feb 13 06:08:58.016434 systemd[1]: Started sshd@19-145.40.90.207:22-139.150.74.245:52020.service.
Feb 13 06:08:58.875963 sshd[4184]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.150.74.245 user=root
Feb 13 06:09:00.694273 systemd[1]: Started sshd@20-145.40.90.207:22-139.59.81.65:49212.service.
Feb 13 06:09:01.008562 sshd[4184]: Failed password for root from 139.150.74.245 port 52020 ssh2
Feb 13 06:09:02.149825 sshd[4189]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root
Feb 13 06:09:03.294388 sshd[4184]: Received disconnect from 139.150.74.245 port 52020:11: Bye Bye [preauth]
Feb 13 06:09:03.294388 sshd[4184]: Disconnected from authenticating user root 139.150.74.245 port 52020 [preauth]
Feb 13 06:09:03.296859 systemd[1]: sshd@19-145.40.90.207:22-139.150.74.245:52020.service: Deactivated successfully.
Feb 13 06:09:03.830496 sshd[4189]: Failed password for root from 139.59.81.65 port 49212 ssh2
Feb 13 06:09:04.293707 systemd[1]: Started sshd@21-145.40.90.207:22-20.127.106.136:41622.service.
Feb 13 06:09:04.545231 sshd[4189]: Received disconnect from 139.59.81.65 port 49212:11: Bye Bye [preauth]
Feb 13 06:09:04.545231 sshd[4189]: Disconnected from authenticating user root 139.59.81.65 port 49212 [preauth]
Feb 13 06:09:04.546130 systemd[1]: sshd@20-145.40.90.207:22-139.59.81.65:49212.service: Deactivated successfully.
Feb 13 06:09:04.748876 sshd[4193]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root
Feb 13 06:09:04.999706 systemd[1]: Started sshd@22-145.40.90.207:22-185.227.136.16:44886.service.
Feb 13 06:09:06.041306 sshd[4197]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.227.136.16 user=root
Feb 13 06:09:06.369679 sshd[4193]: Failed password for root from 20.127.106.136 port 41622 ssh2
Feb 13 06:09:06.763896 systemd[1]: Started sshd@23-145.40.90.207:22-154.222.225.117:39024.service.
Feb 13 06:09:06.949376 sshd[4193]: Received disconnect from 20.127.106.136 port 41622:11: Bye Bye [preauth]
Feb 13 06:09:06.949376 sshd[4193]: Disconnected from authenticating user root 20.127.106.136 port 41622 [preauth]
Feb 13 06:09:06.951929 systemd[1]: sshd@21-145.40.90.207:22-20.127.106.136:41622.service: Deactivated successfully.
Feb 13 06:09:07.706601 sshd[4200]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root
Feb 13 06:09:07.937539 sshd[4197]: Failed password for root from 185.227.136.16 port 44886 ssh2
Feb 13 06:09:08.386613 sshd[4197]: Received disconnect from 185.227.136.16 port 44886:11: Bye Bye [preauth]
Feb 13 06:09:08.386613 sshd[4197]: Disconnected from authenticating user root 185.227.136.16 port 44886 [preauth]
Feb 13 06:09:08.389108 systemd[1]: sshd@22-145.40.90.207:22-185.227.136.16:44886.service: Deactivated successfully.
Feb 13 06:09:09.739347 sshd[4200]: Failed password for root from 154.222.225.117 port 39024 ssh2
Feb 13 06:09:10.006448 sshd[4200]: Received disconnect from 154.222.225.117 port 39024:11: Bye Bye [preauth]
Feb 13 06:09:10.006448 sshd[4200]: Disconnected from authenticating user root 154.222.225.117 port 39024 [preauth]
Feb 13 06:09:10.008878 systemd[1]: sshd@23-145.40.90.207:22-154.222.225.117:39024.service: Deactivated successfully.
Feb 13 06:09:25.727873 systemd[1]: Started sshd@24-145.40.90.207:22-43.153.36.182:54078.service.
Feb 13 06:09:29.762456 sshd[4207]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.153.36.182 user=root
Feb 13 06:09:29.983862 systemd[1]: Started sshd@25-145.40.90.207:22-103.86.198.162:37828.service.
Feb 13 06:09:31.373374 sshd[4212]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.86.198.162 user=root
Feb 13 06:09:32.482562 sshd[4207]: Failed password for root from 43.153.36.182 port 54078 ssh2
Feb 13 06:09:33.370546 sshd[4212]: Failed password for root from 103.86.198.162 port 37828 ssh2
Feb 13 06:09:33.751692 sshd[4212]: Received disconnect from 103.86.198.162 port 37828:11: Bye Bye [preauth]
Feb 13 06:09:33.751692 sshd[4212]: Disconnected from authenticating user root 103.86.198.162 port 37828 [preauth]
Feb 13 06:09:33.754144 systemd[1]: sshd@25-145.40.90.207:22-103.86.198.162:37828.service: Deactivated successfully.
Feb 13 06:09:34.036428 sshd[4207]: Received disconnect from 43.153.36.182 port 54078:11: Bye Bye [preauth]
Feb 13 06:09:34.036428 sshd[4207]: Disconnected from authenticating user root 43.153.36.182 port 54078 [preauth]
Feb 13 06:09:34.038986 systemd[1]: sshd@24-145.40.90.207:22-43.153.36.182:54078.service: Deactivated successfully.
Feb 13 06:09:36.374500 systemd[1]: Started sshd@26-145.40.90.207:22-122.155.0.205:29002.service.
Feb 13 06:09:37.546877 sshd[4218]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=122.155.0.205 user=root
Feb 13 06:09:39.032318 sshd[4218]: Failed password for root from 122.155.0.205 port 29002 ssh2
Feb 13 06:09:39.894158 sshd[4218]: Received disconnect from 122.155.0.205 port 29002:11: Bye Bye [preauth]
Feb 13 06:09:39.894158 sshd[4218]: Disconnected from authenticating user root 122.155.0.205 port 29002 [preauth]
Feb 13 06:09:39.896839 systemd[1]: sshd@26-145.40.90.207:22-122.155.0.205:29002.service: Deactivated successfully.
Feb 13 06:09:45.943218 systemd[1]: Started sshd@27-145.40.90.207:22-139.59.22.185:42790.service.
Feb 13 06:09:47.262522 sshd[4224]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.22.185 user=root
Feb 13 06:09:48.008101 systemd[1]: Started sshd@28-145.40.90.207:22-165.154.0.66:53928.service.
Feb 13 06:09:49.455483 sshd[4224]: Failed password for root from 139.59.22.185 port 42790 ssh2
Feb 13 06:09:51.779125 sshd[4224]: Received disconnect from 139.59.22.185 port 42790:11: Bye Bye [preauth]
Feb 13 06:09:51.779125 sshd[4224]: Disconnected from authenticating user root 139.59.22.185 port 42790 [preauth]
Feb 13 06:09:51.781650 systemd[1]: sshd@27-145.40.90.207:22-139.59.22.185:42790.service: Deactivated successfully.
Feb 13 06:09:56.516333 sshd[4227]: Connection closed by 165.154.0.66 port 53928 [preauth]
Feb 13 06:09:56.516816 systemd[1]: sshd@28-145.40.90.207:22-165.154.0.66:53928.service: Deactivated successfully.
Feb 13 06:10:07.475961 systemd[1]: Started sshd@29-145.40.90.207:22-139.150.74.245:43544.service.
Feb 13 06:10:08.282959 sshd[4234]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.150.74.245 user=root
Feb 13 06:10:09.011933 systemd[1]: Started sshd@30-145.40.90.207:22-185.227.136.16:39114.service.
Feb 13 06:10:10.024541 sshd[4234]: Failed password for root from 139.150.74.245 port 43544 ssh2
Feb 13 06:10:10.062012 sshd[4237]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.227.136.16 user=root
Feb 13 06:10:10.550630 sshd[4234]: Received disconnect from 139.150.74.245 port 43544:11: Bye Bye [preauth]
Feb 13 06:10:10.550630 sshd[4234]: Disconnected from authenticating user root 139.150.74.245 port 43544 [preauth]
Feb 13 06:10:10.551316 systemd[1]: sshd@29-145.40.90.207:22-139.150.74.245:43544.service: Deactivated successfully.
Feb 13 06:10:12.410633 sshd[4237]: Failed password for root from 185.227.136.16 port 39114 ssh2
Feb 13 06:10:16.436204 sshd[4237]: Received disconnect from 185.227.136.16 port 39114:11: Bye Bye [preauth]
Feb 13 06:10:16.436204 sshd[4237]: Disconnected from authenticating user root 185.227.136.16 port 39114 [preauth]
Feb 13 06:10:16.438756 systemd[1]: sshd@30-145.40.90.207:22-185.227.136.16:39114.service: Deactivated successfully.
Feb 13 06:10:28.379449 systemd[1]: Started sshd@31-145.40.90.207:22-139.59.81.65:41274.service.
Feb 13 06:10:29.743593 sshd[4242]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root
Feb 13 06:10:31.700887 sshd[4242]: Failed password for root from 139.59.81.65 port 41274 ssh2
Feb 13 06:10:32.124836 sshd[4242]: Received disconnect from 139.59.81.65 port 41274:11: Bye Bye [preauth]
Feb 13 06:10:32.124836 sshd[4242]: Disconnected from authenticating user root 139.59.81.65 port 41274 [preauth]
Feb 13 06:10:32.127356 systemd[1]: sshd@31-145.40.90.207:22-139.59.81.65:41274.service: Deactivated successfully.
Feb 13 06:10:33.029100 systemd[1]: Started sshd@32-145.40.90.207:22-20.127.106.136:35884.service.
Feb 13 06:10:33.176112 systemd[1]: Started sshd@33-145.40.90.207:22-103.86.198.162:47228.service.
Feb 13 06:10:33.469268 sshd[4248]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root
Feb 13 06:10:34.550854 sshd[4251]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.86.198.162 user=root
Feb 13 06:10:35.642408 sshd[4248]: Failed password for root from 20.127.106.136 port 35884 ssh2
Feb 13 06:10:36.528189 sshd[4251]: Failed password for root from 103.86.198.162 port 47228 ssh2
Feb 13 06:10:36.938442 sshd[4251]: Received disconnect from 103.86.198.162 port 47228:11: Bye Bye [preauth]
Feb 13 06:10:36.938442 sshd[4251]: Disconnected from authenticating user root 103.86.198.162 port 47228 [preauth]
Feb 13 06:10:36.940928 systemd[1]: sshd@33-145.40.90.207:22-103.86.198.162:47228.service: Deactivated successfully.
Feb 13 06:10:37.803629 systemd[1]: Started sshd@34-145.40.90.207:22-154.222.225.117:32994.service.
Feb 13 06:10:37.809503 sshd[4248]: Received disconnect from 20.127.106.136 port 35884:11: Bye Bye [preauth]
Feb 13 06:10:37.809503 sshd[4248]: Disconnected from authenticating user root 20.127.106.136 port 35884 [preauth]
Feb 13 06:10:37.809999 systemd[1]: sshd@32-145.40.90.207:22-20.127.106.136:35884.service: Deactivated successfully.
Feb 13 06:10:38.674245 sshd[4255]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root
Feb 13 06:10:40.866541 sshd[4255]: Failed password for root from 154.222.225.117 port 32994 ssh2
Feb 13 06:10:43.101122 sshd[4255]: Received disconnect from 154.222.225.117 port 32994:11: Bye Bye [preauth]
Feb 13 06:10:43.101122 sshd[4255]: Disconnected from authenticating user root 154.222.225.117 port 32994 [preauth]
Feb 13 06:10:43.103643 systemd[1]: sshd@34-145.40.90.207:22-154.222.225.117:32994.service: Deactivated successfully.
Feb 13 06:10:47.707985 systemd[1]: Started sshd@35-145.40.90.207:22-139.59.22.185:55292.service.
Feb 13 06:10:49.067682 sshd[4263]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.22.185 user=root
Feb 13 06:10:51.105079 sshd[4263]: Failed password for root from 139.59.22.185 port 55292 ssh2
Feb 13 06:10:51.452735 sshd[4263]: Received disconnect from 139.59.22.185 port 55292:11: Bye Bye [preauth]
Feb 13 06:10:51.452735 sshd[4263]: Disconnected from authenticating user root 139.59.22.185 port 55292 [preauth]
Feb 13 06:10:51.455224 systemd[1]: sshd@35-145.40.90.207:22-139.59.22.185:55292.service: Deactivated successfully.
Feb 13 06:11:16.114217 systemd[1]: Started sshd@36-145.40.90.207:22-185.227.136.16:34170.service.
Feb 13 06:11:17.746864 sshd[4269]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.227.136.16 user=root
Feb 13 06:11:19.611868 systemd[1]: Started sshd@37-145.40.90.207:22-139.150.74.245:35190.service.
Feb 13 06:11:19.960072 sshd[4269]: Failed password for root from 185.227.136.16 port 34170 ssh2
Feb 13 06:11:20.427524 sshd[4272]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.150.74.245 user=root
Feb 13 06:11:22.192448 sshd[4269]: Received disconnect from 185.227.136.16 port 34170:11: Bye Bye [preauth]
Feb 13 06:11:22.192448 sshd[4269]: Disconnected from authenticating user root 185.227.136.16 port 34170 [preauth]
Feb 13 06:11:22.194943 systemd[1]: sshd@36-145.40.90.207:22-185.227.136.16:34170.service: Deactivated successfully.
Feb 13 06:11:22.385084 sshd[4272]: Failed password for root from 139.150.74.245 port 35190 ssh2
Feb 13 06:11:22.697027 sshd[4272]: Received disconnect from 139.150.74.245 port 35190:11: Bye Bye [preauth]
Feb 13 06:11:22.697027 sshd[4272]: Disconnected from authenticating user root 139.150.74.245 port 35190 [preauth]
Feb 13 06:11:22.699548 systemd[1]: sshd@37-145.40.90.207:22-139.150.74.245:35190.service: Deactivated successfully.
Feb 13 06:11:34.472389 systemd[1]: Started sshd@38-145.40.90.207:22-43.153.36.182:46532.service.
Feb 13 06:11:35.605043 sshd[4279]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.153.36.182 user=root
Feb 13 06:11:37.958482 sshd[4279]: Failed password for root from 43.153.36.182 port 46532 ssh2
Feb 13 06:11:39.885637 sshd[4279]: Received disconnect from 43.153.36.182 port 46532:11: Bye Bye [preauth]
Feb 13 06:11:39.885637 sshd[4279]: Disconnected from authenticating user root 43.153.36.182 port 46532 [preauth]
Feb 13 06:11:39.888179 systemd[1]: sshd@38-145.40.90.207:22-43.153.36.182:46532.service: Deactivated successfully.
Feb 13 06:11:39.938024 systemd[1]: Started sshd@39-145.40.90.207:22-103.86.198.162:56630.service.
Feb 13 06:11:41.309775 sshd[4283]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.86.198.162 user=root
Feb 13 06:11:43.818889 sshd[4283]: Failed password for root from 103.86.198.162 port 56630 ssh2
Feb 13 06:11:45.834733 sshd[4283]: Received disconnect from 103.86.198.162 port 56630:11: Bye Bye [preauth]
Feb 13 06:11:45.834733 sshd[4283]: Disconnected from authenticating user root 103.86.198.162 port 56630 [preauth]
Feb 13 06:11:45.835413 systemd[1]: sshd@39-145.40.90.207:22-103.86.198.162:56630.service: Deactivated successfully.
Feb 13 06:11:46.571287 systemd[1]: Started sshd@40-145.40.90.207:22-124.221.128.115:55664.service.
Feb 13 06:11:47.886115 sshd[4289]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root
Feb 13 06:11:50.219575 sshd[4289]: Failed password for root from 124.221.128.115 port 55664 ssh2
Feb 13 06:11:52.311798 sshd[4289]: Received disconnect from 124.221.128.115 port 55664:11: Bye Bye [preauth]
Feb 13 06:11:52.311798 sshd[4289]: Disconnected from authenticating user root 124.221.128.115 port 55664 [preauth]
Feb 13 06:11:52.314364 systemd[1]: sshd@40-145.40.90.207:22-124.221.128.115:55664.service: Deactivated successfully.
Feb 13 06:11:53.622835 systemd[1]: Started sshd@41-145.40.90.207:22-165.154.0.66:33964.service.
Feb 13 06:11:56.347025 sshd[4293]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.0.66 user=root
Feb 13 06:11:58.249051 sshd[4293]: Failed password for root from 165.154.0.66 port 33964 ssh2
Feb 13 06:11:58.631810 sshd[4293]: Received disconnect from 165.154.0.66 port 33964:11: Bye Bye [preauth]
Feb 13 06:11:58.631810 sshd[4293]: Disconnected from authenticating user root 165.154.0.66 port 33964 [preauth]
Feb 13 06:11:58.634254 systemd[1]: sshd@41-145.40.90.207:22-165.154.0.66:33964.service: Deactivated successfully.
Feb 13 06:11:58.657435 systemd[1]: Started sshd@42-145.40.90.207:22-139.59.81.65:41160.service.
Feb 13 06:12:00.002188 sshd[4297]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root
Feb 13 06:12:01.452574 sshd[4297]: Failed password for root from 139.59.81.65 port 41160 ssh2
Feb 13 06:12:01.581741 systemd[1]: Started sshd@43-145.40.90.207:22-20.127.106.136:58368.service.
Feb 13 06:12:02.031411 sshd[4302]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root
Feb 13 06:12:02.383132 sshd[4297]: Received disconnect from 139.59.81.65 port 41160:11: Bye Bye [preauth]
Feb 13 06:12:02.383132 sshd[4297]: Disconnected from authenticating user root 139.59.81.65 port 41160 [preauth]
Feb 13 06:12:02.385628 systemd[1]: sshd@42-145.40.90.207:22-139.59.81.65:41160.service: Deactivated successfully.
Feb 13 06:12:04.088465 sshd[4302]: Failed password for root from 20.127.106.136 port 58368 ssh2
Feb 13 06:12:04.232510 sshd[4302]: Received disconnect from 20.127.106.136 port 58368:11: Bye Bye [preauth]
Feb 13 06:12:04.232510 sshd[4302]: Disconnected from authenticating user root 20.127.106.136 port 58368 [preauth]
Feb 13 06:12:04.234247 systemd[1]: sshd@43-145.40.90.207:22-20.127.106.136:58368.service: Deactivated successfully.
Feb 13 06:12:06.597921 systemd[1]: Started sshd@44-145.40.90.207:22-154.222.225.117:55194.service.
Feb 13 06:12:07.461390 sshd[4307]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root
Feb 13 06:12:09.539342 sshd[4307]: Failed password for root from 154.222.225.117 port 55194 ssh2
Feb 13 06:12:09.746332 sshd[4307]: Received disconnect from 154.222.225.117 port 55194:11: Bye Bye [preauth]
Feb 13 06:12:09.746332 sshd[4307]: Disconnected from authenticating user root 154.222.225.117 port 55194 [preauth]
Feb 13 06:12:09.748936 systemd[1]: sshd@44-145.40.90.207:22-154.222.225.117:55194.service: Deactivated successfully.
Feb 13 06:12:18.339169 systemd[1]: Started sshd@45-145.40.90.207:22-124.221.128.115:34976.service.
Feb 13 06:12:19.335613 sshd[4312]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root
Feb 13 06:12:21.062123 sshd[4312]: Failed password for root from 124.221.128.115 port 34976 ssh2
Feb 13 06:12:21.639416 sshd[4312]: Received disconnect from 124.221.128.115 port 34976:11: Bye Bye [preauth]
Feb 13 06:12:21.639416 sshd[4312]: Disconnected from authenticating user root 124.221.128.115 port 34976 [preauth]
Feb 13 06:12:21.642007 systemd[1]: sshd@45-145.40.90.207:22-124.221.128.115:34976.service: Deactivated successfully.
Feb 13 06:12:27.246156 systemd[1]: Started sshd@46-145.40.90.207:22-185.227.136.16:43214.service.
Feb 13 06:12:28.215704 sshd[4319]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.227.136.16 user=root
Feb 13 06:12:30.844657 sshd[4319]: Failed password for root from 185.227.136.16 port 43214 ssh2
Feb 13 06:12:32.261247 systemd[1]: Started sshd@47-145.40.90.207:22-139.150.74.245:55034.service.
Feb 13 06:12:32.689714 sshd[4319]: Received disconnect from 185.227.136.16 port 43214:11: Bye Bye [preauth]
Feb 13 06:12:32.689714 sshd[4319]: Disconnected from authenticating user root 185.227.136.16 port 43214 [preauth]
Feb 13 06:12:32.692311 systemd[1]: sshd@46-145.40.90.207:22-185.227.136.16:43214.service: Deactivated successfully.
Feb 13 06:12:33.076195 sshd[4324]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.150.74.245 user=root
Feb 13 06:12:35.389797 sshd[4324]: Failed password for root from 139.150.74.245 port 55034 ssh2
Feb 13 06:12:37.483313 sshd[4324]: Received disconnect from 139.150.74.245 port 55034:11: Bye Bye [preauth]
Feb 13 06:12:37.483313 sshd[4324]: Disconnected from authenticating user root 139.150.74.245 port 55034 [preauth]
Feb 13 06:12:37.485895 systemd[1]: sshd@47-145.40.90.207:22-139.150.74.245:55034.service: Deactivated successfully.
Feb 13 06:12:49.119991 systemd[1]: Started sshd@48-145.40.90.207:22-103.86.198.162:37798.service.
Feb 13 06:12:50.482984 sshd[4331]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.86.198.162 user=root
Feb 13 06:12:52.465153 sshd[4331]: Failed password for root from 103.86.198.162 port 37798 ssh2
Feb 13 06:12:52.868315 sshd[4331]: Received disconnect from 103.86.198.162 port 37798:11: Bye Bye [preauth]
Feb 13 06:12:52.868315 sshd[4331]: Disconnected from authenticating user root 103.86.198.162 port 37798 [preauth]
Feb 13 06:12:52.870862 systemd[1]: sshd@48-145.40.90.207:22-103.86.198.162:37798.service: Deactivated successfully.
Feb 13 06:12:58.472741 systemd[1]: Started sshd@49-145.40.90.207:22-124.221.128.115:42536.service.
Feb 13 06:12:59.248387 sshd[4335]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root
Feb 13 06:13:01.797833 sshd[4335]: Failed password for root from 124.221.128.115 port 42536 ssh2
Feb 13 06:13:03.655704 sshd[4335]: Received disconnect from 124.221.128.115 port 42536:11: Bye Bye [preauth]
Feb 13 06:13:03.655704 sshd[4335]: Disconnected from authenticating user root 124.221.128.115 port 42536 [preauth]
Feb 13 06:13:03.658217 systemd[1]: sshd@49-145.40.90.207:22-124.221.128.115:42536.service: Deactivated successfully.
Feb 13 06:13:30.720269 systemd[1]: Started sshd@50-145.40.90.207:22-139.59.81.65:58630.service.
Feb 13 06:13:30.881268 systemd[1]: Started sshd@51-145.40.90.207:22-20.127.106.136:52622.service.
Feb 13 06:13:31.036951 systemd[1]: Started sshd@52-145.40.90.207:22-124.221.128.115:50088.service.
Feb 13 06:13:31.318397 sshd[4346]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root
Feb 13 06:13:31.945561 sshd[4349]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root
Feb 13 06:13:32.058061 sshd[4343]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root
Feb 13 06:13:33.260774 sshd[4346]: Failed password for root from 20.127.106.136 port 52622 ssh2
Feb 13 06:13:33.523505 sshd[4346]: Received disconnect from 20.127.106.136 port 52622:11: Bye Bye [preauth]
Feb 13 06:13:33.523505 sshd[4346]: Disconnected from authenticating user root 20.127.106.136 port 52622 [preauth]
Feb 13 06:13:33.525889 systemd[1]: sshd@51-145.40.90.207:22-20.127.106.136:52622.service: Deactivated successfully.
Feb 13 06:13:33.804677 sshd[4343]: Failed password for root from 139.59.81.65 port 58630 ssh2
Feb 13 06:13:33.887625 sshd[4349]: Failed password for root from 124.221.128.115 port 50088 ssh2
Feb 13 06:13:34.228690 sshd[4349]: Received disconnect from 124.221.128.115 port 50088:11: Bye Bye [preauth]
Feb 13 06:13:34.228690 sshd[4349]: Disconnected from authenticating user root 124.221.128.115 port 50088 [preauth]
Feb 13 06:13:34.231232 systemd[1]: sshd@52-145.40.90.207:22-124.221.128.115:50088.service: Deactivated successfully.
Feb 13 06:13:34.439066 sshd[4343]: Received disconnect from 139.59.81.65 port 58630:11: Bye Bye [preauth]
Feb 13 06:13:34.439066 sshd[4343]: Disconnected from authenticating user root 139.59.81.65 port 58630 [preauth]
Feb 13 06:13:34.441673 systemd[1]: sshd@50-145.40.90.207:22-139.59.81.65:58630.service: Deactivated successfully.
Feb 13 06:13:35.169273 systemd[1]: Started sshd@53-145.40.90.207:22-8.217.2.214:53650.service.
Feb 13 06:13:37.143641 sshd[4356]: kex_exchange_identification: banner line contains invalid characters
Feb 13 06:13:37.144445 sshd[4356]: banner exchange: Connection from 8.217.2.214 port 53650: invalid format
Feb 13 06:13:37.145198 systemd[1]: sshd@53-145.40.90.207:22-8.217.2.214:53650.service: Deactivated successfully.
Feb 13 06:13:38.379885 systemd[1]: Started sshd@54-145.40.90.207:22-185.227.136.16:51106.service.
Feb 13 06:13:39.324518 systemd[1]: Started sshd@55-145.40.90.207:22-154.222.225.117:49162.service.
Feb 13 06:13:40.215503 sshd[4362]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root
Feb 13 06:13:40.223890 sshd[4359]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.227.136.16 user=root
Feb 13 06:13:41.075333 systemd[1]: Started sshd@56-145.40.90.207:22-43.153.36.182:38990.service.
Feb 13 06:13:42.393588 sshd[4362]: Failed password for root from 154.222.225.117 port 49162 ssh2
Feb 13 06:13:42.401937 sshd[4359]: Failed password for root from 185.227.136.16 port 51106 ssh2
Feb 13 06:13:43.445427 sshd[4365]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.153.36.182 user=root
Feb 13 06:13:44.646382 sshd[4362]: Received disconnect from 154.222.225.117 port 49162:11: Bye Bye [preauth]
Feb 13 06:13:44.646382 sshd[4362]: Disconnected from authenticating user root 154.222.225.117 port 49162 [preauth]
Feb 13 06:13:44.648844 systemd[1]: sshd@55-145.40.90.207:22-154.222.225.117:49162.service: Deactivated successfully.
Feb 13 06:13:44.669889 sshd[4359]: Received disconnect from 185.227.136.16 port 51106:11: Bye Bye [preauth]
Feb 13 06:13:44.669889 sshd[4359]: Disconnected from authenticating user root 185.227.136.16 port 51106 [preauth]
Feb 13 06:13:44.672074 systemd[1]: sshd@54-145.40.90.207:22-185.227.136.16:51106.service: Deactivated successfully.
Feb 13 06:13:44.934272 systemd[1]: Started sshd@57-145.40.90.207:22-8.217.2.214:38678.service.
Feb 13 06:13:45.367546 sshd[4365]: Failed password for root from 43.153.36.182 port 38990 ssh2
Feb 13 06:13:45.589877 sshd[4365]: Received disconnect from 43.153.36.182 port 38990:11: Bye Bye [preauth]
Feb 13 06:13:45.589877 sshd[4365]: Disconnected from authenticating user root 43.153.36.182 port 38990 [preauth]
Feb 13 06:13:45.592411 systemd[1]: sshd@56-145.40.90.207:22-43.153.36.182:38990.service: Deactivated successfully.
Feb 13 06:13:46.135127 systemd[1]: Started sshd@58-145.40.90.207:22-139.150.74.245:46658.service.
Feb 13 06:13:46.985764 sshd[4376]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.150.74.245 user=root
Feb 13 06:13:48.622056 sshd[4372]: Invalid user NL5xUDpV2xRa from 8.217.2.214 port 38678
Feb 13 06:13:48.623791 sshd[4372]: userauth_pubkey: parse publickey packet: incomplete message [preauth]
Feb 13 06:13:48.628827 systemd[1]: sshd@57-145.40.90.207:22-8.217.2.214:38678.service: Deactivated successfully.
Feb 13 06:13:49.655487 sshd[4376]: Failed password for root from 139.150.74.245 port 46658 ssh2
Feb 13 06:13:51.401712 sshd[4376]: Received disconnect from 139.150.74.245 port 46658:11: Bye Bye [preauth]
Feb 13 06:13:51.401712 sshd[4376]: Disconnected from authenticating user root 139.150.74.245 port 46658 [preauth]
Feb 13 06:13:51.404222 systemd[1]: sshd@58-145.40.90.207:22-139.150.74.245:46658.service: Deactivated successfully.
Feb 13 06:13:59.399905 systemd[1]: Started sshd@59-145.40.90.207:22-165.154.0.66:41146.service.
Feb 13 06:14:01.295945 sshd[4384]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.0.66 user=root
Feb 13 06:14:03.689485 sshd[4384]: Failed password for root from 165.154.0.66 port 41146 ssh2
Feb 13 06:14:04.084927 systemd[1]: Started sshd@60-145.40.90.207:22-124.221.128.115:57640.service.
Feb 13 06:14:04.944578 sshd[4387]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root
Feb 13 06:14:05.732047 sshd[4384]: Received disconnect from 165.154.0.66 port 41146:11: Bye Bye [preauth]
Feb 13 06:14:05.732047 sshd[4384]: Disconnected from authenticating user root 165.154.0.66 port 41146 [preauth]
Feb 13 06:14:05.734653 systemd[1]: sshd@59-145.40.90.207:22-165.154.0.66:41146.service: Deactivated successfully.
Feb 13 06:14:07.418542 sshd[4387]: Failed password for root from 124.221.128.115 port 57640 ssh2
Feb 13 06:14:08.280206 systemd[1]: Started sshd@61-145.40.90.207:22-85.209.11.27:32448.service.
Feb 13 06:14:09.377667 sshd[4387]: Received disconnect from 124.221.128.115 port 57640:11: Bye Bye [preauth]
Feb 13 06:14:09.377667 sshd[4387]: Disconnected from authenticating user root 124.221.128.115 port 57640 [preauth]
Feb 13 06:14:09.380182 systemd[1]: sshd@60-145.40.90.207:22-124.221.128.115:57640.service: Deactivated successfully.
Feb 13 06:14:09.645882 sshd[4392]: Invalid user RPM from 85.209.11.27 port 32448
Feb 13 06:14:10.016683 sshd[4392]: pam_faillock(sshd:auth): User unknown
Feb 13 06:14:10.017837 sshd[4392]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 06:14:10.017929 sshd[4392]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=85.209.11.27
Feb 13 06:14:10.019031 sshd[4392]: pam_faillock(sshd:auth): User unknown
Feb 13 06:14:12.317266 sshd[4392]: Failed password for invalid user RPM from 85.209.11.27 port 32448 ssh2
Feb 13 06:14:14.762578 sshd[4392]: Connection closed by invalid user RPM 85.209.11.27 port 32448 [preauth]
Feb 13 06:14:14.765059 systemd[1]: sshd@61-145.40.90.207:22-85.209.11.27:32448.service: Deactivated successfully.
Feb 13 06:14:34.803403 systemd[1]: Started sshd@62-145.40.90.207:22-124.221.128.115:36958.service.
Feb 13 06:14:35.730412 sshd[4400]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root
Feb 13 06:14:38.459664 sshd[4400]: Failed password for root from 124.221.128.115 port 36958 ssh2
Feb 13 06:14:40.175655 sshd[4400]: Received disconnect from 124.221.128.115 port 36958:11: Bye Bye [preauth]
Feb 13 06:14:40.175655 sshd[4400]: Disconnected from authenticating user root 124.221.128.115 port 36958 [preauth]
Feb 13 06:14:40.178107 systemd[1]: sshd@62-145.40.90.207:22-124.221.128.115:36958.service: Deactivated successfully.
Feb 13 06:14:55.045516 systemd[1]: Started sshd@63-145.40.90.207:22-194.165.16.10:65416.service.
Feb 13 06:14:55.054489 sshd[4406]: kex_exchange_identification: banner line contains invalid characters
Feb 13 06:14:55.054489 sshd[4406]: banner exchange: Connection from 194.165.16.10 port 65416: invalid format
Feb 13 06:14:55.054713 systemd[1]: sshd@63-145.40.90.207:22-194.165.16.10:65416.service: Deactivated successfully.
Feb 13 06:14:58.821095 systemd[1]: Started sshd@64-145.40.90.207:22-20.127.106.136:46872.service.
Feb 13 06:14:59.282974 sshd[4409]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root
Feb 13 06:15:01.973264 sshd[4409]: Failed password for root from 20.127.106.136 port 46872 ssh2
Feb 13 06:15:03.075444 systemd[1]: Started sshd@65-145.40.90.207:22-139.59.81.65:52244.service.
Feb 13 06:15:03.627737 sshd[4409]: Received disconnect from 20.127.106.136 port 46872:11: Bye Bye [preauth]
Feb 13 06:15:03.627737 sshd[4409]: Disconnected from authenticating user root 20.127.106.136 port 46872 [preauth]
Feb 13 06:15:03.630209 systemd[1]: sshd@64-145.40.90.207:22-20.127.106.136:46872.service: Deactivated successfully.
Feb 13 06:15:04.494384 sshd[4414]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root Feb 13 06:15:06.204403 sshd[4414]: Failed password for root from 139.59.81.65 port 52244 ssh2 Feb 13 06:15:06.899800 sshd[4414]: Received disconnect from 139.59.81.65 port 52244:11: Bye Bye [preauth] Feb 13 06:15:06.899800 sshd[4414]: Disconnected from authenticating user root 139.59.81.65 port 52244 [preauth] Feb 13 06:15:06.902354 systemd[1]: sshd@65-145.40.90.207:22-139.59.81.65:52244.service: Deactivated successfully. Feb 13 06:15:07.184176 systemd[1]: Started sshd@66-145.40.90.207:22-124.221.128.115:44506.service. Feb 13 06:15:08.026254 sshd[4419]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:15:10.284527 sshd[4419]: Failed password for root from 124.221.128.115 port 44506 ssh2 Feb 13 06:15:12.386039 systemd[1]: Started sshd@67-145.40.90.207:22-154.222.225.117:43126.service. Feb 13 06:15:12.440611 sshd[4419]: Received disconnect from 124.221.128.115 port 44506:11: Bye Bye [preauth] Feb 13 06:15:12.440611 sshd[4419]: Disconnected from authenticating user root 124.221.128.115 port 44506 [preauth] Feb 13 06:15:12.442096 systemd[1]: sshd@66-145.40.90.207:22-124.221.128.115:44506.service: Deactivated successfully. Feb 13 06:15:13.336221 sshd[4422]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root Feb 13 06:15:15.283272 sshd[4422]: Failed password for root from 154.222.225.117 port 43126 ssh2 Feb 13 06:15:15.638952 sshd[4422]: Received disconnect from 154.222.225.117 port 43126:11: Bye Bye [preauth] Feb 13 06:15:15.638952 sshd[4422]: Disconnected from authenticating user root 154.222.225.117 port 43126 [preauth] Feb 13 06:15:15.641440 systemd[1]: sshd@67-145.40.90.207:22-154.222.225.117:43126.service: Deactivated successfully. 
Feb 13 06:15:36.514444 systemd[1]: Started sshd@68-145.40.90.207:22-124.221.128.115:52048.service. Feb 13 06:15:37.419999 sshd[4431]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:15:39.326547 sshd[4431]: Failed password for root from 124.221.128.115 port 52048 ssh2 Feb 13 06:15:39.720130 sshd[4431]: Received disconnect from 124.221.128.115 port 52048:11: Bye Bye [preauth] Feb 13 06:15:39.720130 sshd[4431]: Disconnected from authenticating user root 124.221.128.115 port 52048 [preauth] Feb 13 06:15:39.722726 systemd[1]: sshd@68-145.40.90.207:22-124.221.128.115:52048.service: Deactivated successfully. Feb 13 06:15:40.911583 systemd[1]: Started sshd@69-145.40.90.207:22-43.153.36.182:59672.service. Feb 13 06:15:44.697245 sshd[4436]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.153.36.182 user=root Feb 13 06:15:47.231577 sshd[4436]: Failed password for root from 43.153.36.182 port 59672 ssh2 Feb 13 06:15:48.979948 sshd[4436]: Received disconnect from 43.153.36.182 port 59672:11: Bye Bye [preauth] Feb 13 06:15:48.979948 sshd[4436]: Disconnected from authenticating user root 43.153.36.182 port 59672 [preauth] Feb 13 06:15:48.982512 systemd[1]: sshd@69-145.40.90.207:22-43.153.36.182:59672.service: Deactivated successfully. Feb 13 06:16:05.652416 systemd[1]: Started sshd@70-145.40.90.207:22-165.154.0.66:59130.service. Feb 13 06:16:05.927395 systemd[1]: Started sshd@71-145.40.90.207:22-124.221.128.115:59596.service. 
Feb 13 06:16:06.537641 sshd[4446]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.0.66 user=root Feb 13 06:16:06.760185 sshd[4449]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:16:07.757688 sshd[4446]: Failed password for root from 165.154.0.66 port 59130 ssh2 Feb 13 06:16:07.980086 sshd[4449]: Failed password for root from 124.221.128.115 port 59596 ssh2 Feb 13 06:16:08.827184 sshd[4446]: Received disconnect from 165.154.0.66 port 59130:11: Bye Bye [preauth] Feb 13 06:16:08.827184 sshd[4446]: Disconnected from authenticating user root 165.154.0.66 port 59130 [preauth] Feb 13 06:16:08.827764 systemd[1]: sshd@70-145.40.90.207:22-165.154.0.66:59130.service: Deactivated successfully. Feb 13 06:16:09.036049 sshd[4449]: Received disconnect from 124.221.128.115 port 59596:11: Bye Bye [preauth] Feb 13 06:16:09.036049 sshd[4449]: Disconnected from authenticating user root 124.221.128.115 port 59596 [preauth] Feb 13 06:16:09.038605 systemd[1]: sshd@71-145.40.90.207:22-124.221.128.115:59596.service: Deactivated successfully. Feb 13 06:16:25.000793 systemd[1]: Started sshd@72-145.40.90.207:22-20.127.106.136:41128.service. Feb 13 06:16:25.479448 sshd[4455]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root Feb 13 06:16:26.975157 sshd[4455]: Failed password for root from 20.127.106.136 port 41128 ssh2 Feb 13 06:16:27.684715 sshd[4455]: Received disconnect from 20.127.106.136 port 41128:11: Bye Bye [preauth] Feb 13 06:16:27.684715 sshd[4455]: Disconnected from authenticating user root 20.127.106.136 port 41128 [preauth] Feb 13 06:16:27.687232 systemd[1]: sshd@72-145.40.90.207:22-20.127.106.136:41128.service: Deactivated successfully. Feb 13 06:16:31.466646 systemd[1]: Started sshd@73-145.40.90.207:22-139.59.81.65:51238.service. 
Feb 13 06:16:32.833349 sshd[4461]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root Feb 13 06:16:34.624604 sshd[4461]: Failed password for root from 139.59.81.65 port 51238 ssh2 Feb 13 06:16:34.924201 systemd[1]: Started sshd@74-145.40.90.207:22-124.221.128.115:38908.service. Feb 13 06:16:35.221807 sshd[4461]: Received disconnect from 139.59.81.65 port 51238:11: Bye Bye [preauth] Feb 13 06:16:35.221807 sshd[4461]: Disconnected from authenticating user root 139.59.81.65 port 51238 [preauth] Feb 13 06:16:35.222974 systemd[1]: sshd@73-145.40.90.207:22-139.59.81.65:51238.service: Deactivated successfully. Feb 13 06:16:35.901142 sshd[4464]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:16:38.104389 sshd[4464]: Failed password for root from 124.221.128.115 port 38908 ssh2 Feb 13 06:16:40.342671 sshd[4464]: Received disconnect from 124.221.128.115 port 38908:11: Bye Bye [preauth] Feb 13 06:16:40.342671 sshd[4464]: Disconnected from authenticating user root 124.221.128.115 port 38908 [preauth] Feb 13 06:16:40.345299 systemd[1]: sshd@74-145.40.90.207:22-124.221.128.115:38908.service: Deactivated successfully. Feb 13 06:16:43.147189 systemd[1]: Started sshd@75-145.40.90.207:22-154.222.225.117:37094.service. Feb 13 06:16:44.047012 sshd[4469]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root Feb 13 06:16:45.818555 sshd[4469]: Failed password for root from 154.222.225.117 port 37094 ssh2 Feb 13 06:16:46.347243 sshd[4469]: Received disconnect from 154.222.225.117 port 37094:11: Bye Bye [preauth] Feb 13 06:16:46.347243 sshd[4469]: Disconnected from authenticating user root 154.222.225.117 port 37094 [preauth] Feb 13 06:16:46.349833 systemd[1]: sshd@75-145.40.90.207:22-154.222.225.117:37094.service: Deactivated successfully. 
Feb 13 06:17:03.556749 systemd[1]: Started sshd@76-145.40.90.207:22-124.221.128.115:46454.service. Feb 13 06:17:05.239968 sshd[4477]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:17:07.226546 sshd[4477]: Failed password for root from 124.221.128.115 port 46454 ssh2 Feb 13 06:17:07.536935 sshd[4477]: Received disconnect from 124.221.128.115 port 46454:11: Bye Bye [preauth] Feb 13 06:17:07.536935 sshd[4477]: Disconnected from authenticating user root 124.221.128.115 port 46454 [preauth] Feb 13 06:17:07.539434 systemd[1]: sshd@76-145.40.90.207:22-124.221.128.115:46454.service: Deactivated successfully. Feb 13 06:17:22.065532 update_engine[1463]: I0213 06:17:22.065457 1463 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 06:17:22.065532 update_engine[1463]: I0213 06:17:22.065546 1463 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 06:17:22.068511 update_engine[1463]: I0213 06:17:22.067464 1463 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 06:17:22.068726 update_engine[1463]: I0213 06:17:22.068532 1463 omaha_request_params.cc:62] Current group set to lts Feb 13 06:17:22.068981 update_engine[1463]: I0213 06:17:22.068936 1463 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 06:17:22.068981 update_engine[1463]: I0213 06:17:22.068961 1463 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 06:17:22.069366 update_engine[1463]: I0213 06:17:22.069005 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 06:17:22.069366 update_engine[1463]: I0213 06:17:22.069094 1463 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 06:17:22.069366 update_engine[1463]: I0213 06:17:22.069311 1463 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 06:17:22.069366 update_engine[1463]: I0213 06:17:22.069340 1463 omaha_request_action.cc:271] Request: Feb 13 06:17:22.069366 update_engine[1463]: Feb 13 06:17:22.069366 update_engine[1463]: Feb 13 06:17:22.069366 update_engine[1463]: Feb 13 06:17:22.069366 update_engine[1463]: Feb 13 06:17:22.069366 update_engine[1463]: Feb 13 06:17:22.069366 update_engine[1463]: Feb 13 06:17:22.069366 update_engine[1463]: Feb 13 06:17:22.069366 update_engine[1463]: Feb 13 06:17:22.069366 update_engine[1463]: I0213 06:17:22.069356 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 06:17:22.071292 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 06:17:22.073068 update_engine[1463]: I0213 06:17:22.073060 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 06:17:22.073128 update_engine[1463]: E0213 06:17:22.073119 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 06:17:22.073164 update_engine[1463]: I0213 06:17:22.073158 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 06:17:31.992330 update_engine[1463]: I0213 06:17:31.992216 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 06:17:31.993456 update_engine[1463]: I0213 06:17:31.992802 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 06:17:31.993456 update_engine[1463]: E0213 06:17:31.993068 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not 
resolve host: disabled Feb 13 06:17:31.993456 update_engine[1463]: I0213 06:17:31.993348 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 06:17:35.711539 systemd[1]: Started sshd@77-145.40.90.207:22-124.221.128.115:54000.service. Feb 13 06:17:36.590730 sshd[4487]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:17:37.425829 systemd[1]: Started sshd@78-145.40.90.207:22-43.153.36.182:52100.service. Feb 13 06:17:38.121676 sshd[4490]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.153.36.182 user=root Feb 13 06:17:38.502487 sshd[4487]: Failed password for root from 124.221.128.115 port 54000 ssh2 Feb 13 06:17:38.886973 sshd[4487]: Received disconnect from 124.221.128.115 port 54000:11: Bye Bye [preauth] Feb 13 06:17:38.886973 sshd[4487]: Disconnected from authenticating user root 124.221.128.115 port 54000 [preauth] Feb 13 06:17:38.889341 systemd[1]: sshd@77-145.40.90.207:22-124.221.128.115:54000.service: Deactivated successfully. Feb 13 06:17:39.973829 sshd[4490]: Failed password for root from 43.153.36.182 port 52100 ssh2 Feb 13 06:17:40.270050 sshd[4490]: Received disconnect from 43.153.36.182 port 52100:11: Bye Bye [preauth] Feb 13 06:17:40.270050 sshd[4490]: Disconnected from authenticating user root 43.153.36.182 port 52100 [preauth] Feb 13 06:17:40.272484 systemd[1]: sshd@78-145.40.90.207:22-43.153.36.182:52100.service: Deactivated successfully. 
Feb 13 06:17:41.991649 update_engine[1463]: I0213 06:17:41.991523 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 06:17:41.992458 update_engine[1463]: I0213 06:17:41.991982 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 06:17:41.992458 update_engine[1463]: E0213 06:17:41.992179 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 06:17:41.992458 update_engine[1463]: I0213 06:17:41.992391 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 06:17:42.501693 systemd[1]: Started sshd@79-145.40.90.207:22-218.92.0.45:35450.service. Feb 13 06:17:51.717884 systemd[1]: Started sshd@80-145.40.90.207:22-20.127.106.136:35370.service. Feb 13 06:17:51.991714 update_engine[1463]: I0213 06:17:51.991488 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 06:17:51.992610 update_engine[1463]: I0213 06:17:51.991928 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 06:17:51.992610 update_engine[1463]: E0213 06:17:51.992139 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 06:17:51.992610 update_engine[1463]: I0213 06:17:51.992339 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 06:17:51.992610 update_engine[1463]: I0213 06:17:51.992359 1463 omaha_request_action.cc:621] Omaha request response: Feb 13 06:17:51.992610 update_engine[1463]: E0213 06:17:51.992499 1463 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 13 06:17:51.992610 update_engine[1463]: I0213 06:17:51.992525 1463 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Feb 13 06:17:51.992610 update_engine[1463]: I0213 06:17:51.992535 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 06:17:51.992610 update_engine[1463]: I0213 06:17:51.992543 1463 update_attempter.cc:306] Processing Done. Feb 13 06:17:51.992610 update_engine[1463]: E0213 06:17:51.992568 1463 update_attempter.cc:619] Update failed. Feb 13 06:17:51.992610 update_engine[1463]: I0213 06:17:51.992576 1463 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 06:17:51.992610 update_engine[1463]: I0213 06:17:51.992585 1463 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 06:17:51.992610 update_engine[1463]: I0213 06:17:51.992595 1463 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.992746 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.992796 1463 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.992806 1463 omaha_request_action.cc:271] Request: Feb 13 06:17:51.993779 update_engine[1463]: Feb 13 06:17:51.993779 update_engine[1463]: Feb 13 06:17:51.993779 update_engine[1463]: Feb 13 06:17:51.993779 update_engine[1463]: Feb 13 06:17:51.993779 update_engine[1463]: Feb 13 06:17:51.993779 update_engine[1463]: Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.992816 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.993101 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 06:17:51.993779 update_engine[1463]: E0213 06:17:51.993294 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 06:17:51.993779 update_engine[1463]: 
I0213 06:17:51.993444 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.993460 1463 omaha_request_action.cc:621] Omaha request response: Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.993471 1463 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.993479 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.993485 1463 update_attempter.cc:306] Processing Done. Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.993494 1463 update_attempter.cc:310] Error event sent. Feb 13 06:17:51.993779 update_engine[1463]: I0213 06:17:51.993513 1463 update_check_scheduler.cc:74] Next update check in 42m43s Feb 13 06:17:51.995450 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 06:17:51.995450 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 06:17:52.182959 sshd[4499]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root Feb 13 06:17:54.290659 sshd[4499]: Failed password for root from 20.127.106.136 port 35370 ssh2 Feb 13 06:17:54.387449 sshd[4499]: Received disconnect from 20.127.106.136 port 35370:11: Bye Bye [preauth] Feb 13 06:17:54.387449 sshd[4499]: Disconnected from authenticating user root 20.127.106.136 port 35370 [preauth] Feb 13 06:17:54.389978 systemd[1]: sshd@80-145.40.90.207:22-20.127.106.136:35370.service: Deactivated successfully. Feb 13 06:17:59.549820 systemd[1]: Started sshd@81-145.40.90.207:22-139.59.81.65:37686.service. 
Feb 13 06:18:00.988597 sshd[4505]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root Feb 13 06:18:02.859467 sshd[4505]: Failed password for root from 139.59.81.65 port 37686 ssh2 Feb 13 06:18:03.397558 sshd[4505]: Received disconnect from 139.59.81.65 port 37686:11: Bye Bye [preauth] Feb 13 06:18:03.397558 sshd[4505]: Disconnected from authenticating user root 139.59.81.65 port 37686 [preauth] Feb 13 06:18:03.400038 systemd[1]: sshd@81-145.40.90.207:22-139.59.81.65:37686.service: Deactivated successfully. Feb 13 06:18:05.365670 systemd[1]: Started sshd@82-145.40.90.207:22-124.221.128.115:33316.service. Feb 13 06:18:06.177807 sshd[4509]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:18:07.537916 sshd[4509]: Failed password for root from 124.221.128.115 port 33316 ssh2 Feb 13 06:18:08.451706 sshd[4509]: Received disconnect from 124.221.128.115 port 33316:11: Bye Bye [preauth] Feb 13 06:18:08.451706 sshd[4509]: Disconnected from authenticating user root 124.221.128.115 port 33316 [preauth] Feb 13 06:18:08.454147 systemd[1]: sshd@82-145.40.90.207:22-124.221.128.115:33316.service: Deactivated successfully. Feb 13 06:18:12.456018 systemd[1]: Started sshd@83-145.40.90.207:22-165.154.0.66:49258.service. Feb 13 06:18:14.248519 sshd[4513]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.0.66 user=root Feb 13 06:18:16.040530 systemd[1]: Started sshd@84-145.40.90.207:22-154.222.225.117:59292.service. 
Feb 13 06:18:16.376405 sshd[4513]: Failed password for root from 165.154.0.66 port 49258 ssh2 Feb 13 06:18:16.539528 sshd[4513]: Received disconnect from 165.154.0.66 port 49258:11: Bye Bye [preauth] Feb 13 06:18:16.539528 sshd[4513]: Disconnected from authenticating user root 165.154.0.66 port 49258 [preauth] Feb 13 06:18:16.541996 systemd[1]: sshd@83-145.40.90.207:22-165.154.0.66:49258.service: Deactivated successfully. Feb 13 06:18:16.943698 sshd[4516]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root Feb 13 06:18:19.011442 sshd[4516]: Failed password for root from 154.222.225.117 port 59292 ssh2 Feb 13 06:18:19.241171 sshd[4516]: Received disconnect from 154.222.225.117 port 59292:11: Bye Bye [preauth] Feb 13 06:18:19.241171 sshd[4516]: Disconnected from authenticating user root 154.222.225.117 port 59292 [preauth] Feb 13 06:18:19.243656 systemd[1]: sshd@84-145.40.90.207:22-154.222.225.117:59292.service: Deactivated successfully. Feb 13 06:18:34.770854 systemd[1]: Started sshd@85-145.40.90.207:22-124.221.128.115:40862.service. Feb 13 06:18:35.668090 sshd[4523]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:18:38.011705 sshd[4523]: Failed password for root from 124.221.128.115 port 40862 ssh2 Feb 13 06:18:40.090051 sshd[4523]: Received disconnect from 124.221.128.115 port 40862:11: Bye Bye [preauth] Feb 13 06:18:40.090051 sshd[4523]: Disconnected from authenticating user root 124.221.128.115 port 40862 [preauth] Feb 13 06:18:40.092522 systemd[1]: sshd@85-145.40.90.207:22-124.221.128.115:40862.service: Deactivated successfully. Feb 13 06:19:04.705413 systemd[1]: Started sshd@86-145.40.90.207:22-124.221.128.115:48410.service. 
Feb 13 06:19:05.603668 sshd[4531]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:19:08.066706 sshd[4531]: Failed password for root from 124.221.128.115 port 48410 ssh2 Feb 13 06:19:10.042508 sshd[4531]: Received disconnect from 124.221.128.115 port 48410:11: Bye Bye [preauth] Feb 13 06:19:10.042508 sshd[4531]: Disconnected from authenticating user root 124.221.128.115 port 48410 [preauth] Feb 13 06:19:10.044847 systemd[1]: sshd@86-145.40.90.207:22-124.221.128.115:48410.service: Deactivated successfully. Feb 13 06:19:20.601386 systemd[1]: Started sshd@87-145.40.90.207:22-20.127.106.136:57862.service. Feb 13 06:19:21.064149 sshd[4535]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root Feb 13 06:19:23.056413 sshd[4535]: Failed password for root from 20.127.106.136 port 57862 ssh2 Feb 13 06:19:23.268876 sshd[4535]: Received disconnect from 20.127.106.136 port 57862:11: Bye Bye [preauth] Feb 13 06:19:23.268876 sshd[4535]: Disconnected from authenticating user root 20.127.106.136 port 57862 [preauth] Feb 13 06:19:23.271576 systemd[1]: sshd@87-145.40.90.207:22-20.127.106.136:57862.service: Deactivated successfully. Feb 13 06:19:29.572913 systemd[1]: Started sshd@88-145.40.90.207:22-139.59.81.65:39448.service. Feb 13 06:19:30.935229 sshd[4542]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root Feb 13 06:19:32.495527 sshd[4542]: Failed password for root from 139.59.81.65 port 39448 ssh2 Feb 13 06:19:33.320907 sshd[4542]: Received disconnect from 139.59.81.65 port 39448:11: Bye Bye [preauth] Feb 13 06:19:33.320907 sshd[4542]: Disconnected from authenticating user root 139.59.81.65 port 39448 [preauth] Feb 13 06:19:33.323446 systemd[1]: sshd@88-145.40.90.207:22-139.59.81.65:39448.service: Deactivated successfully. 
Feb 13 06:19:37.225476 systemd[1]: Started sshd@89-145.40.90.207:22-124.221.128.115:55958.service. Feb 13 06:19:38.246189 sshd[4546]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:19:40.237485 sshd[4546]: Failed password for root from 124.221.128.115 port 55958 ssh2 Feb 13 06:19:40.567837 sshd[4546]: Received disconnect from 124.221.128.115 port 55958:11: Bye Bye [preauth] Feb 13 06:19:40.567837 sshd[4546]: Disconnected from authenticating user root 124.221.128.115 port 55958 [preauth] Feb 13 06:19:40.570210 systemd[1]: sshd@89-145.40.90.207:22-124.221.128.115:55958.service: Deactivated successfully. Feb 13 06:19:42.506835 sshd[4495]: Timeout before authentication for 218.92.0.45 port 35450 Feb 13 06:19:42.508326 systemd[1]: sshd@79-145.40.90.207:22-218.92.0.45:35450.service: Deactivated successfully. Feb 13 06:19:47.087097 systemd[1]: Started sshd@90-145.40.90.207:22-154.222.225.117:53260.service. Feb 13 06:19:47.966664 sshd[4555]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root Feb 13 06:19:50.525540 sshd[4555]: Failed password for root from 154.222.225.117 port 53260 ssh2 Feb 13 06:19:52.394305 sshd[4555]: Received disconnect from 154.222.225.117 port 53260:11: Bye Bye [preauth] Feb 13 06:19:52.394305 sshd[4555]: Disconnected from authenticating user root 154.222.225.117 port 53260 [preauth] Feb 13 06:19:52.396824 systemd[1]: sshd@90-145.40.90.207:22-154.222.225.117:53260.service: Deactivated successfully. Feb 13 06:19:56.621303 systemd[1]: Started sshd@91-145.40.90.207:22-43.153.36.182:44572.service. Feb 13 06:20:04.622148 sshd[4559]: Connection closed by 43.153.36.182 port 44572 [preauth] Feb 13 06:20:04.624082 systemd[1]: sshd@91-145.40.90.207:22-43.153.36.182:44572.service: Deactivated successfully. Feb 13 06:20:07.304799 systemd[1]: Started sshd@92-145.40.90.207:22-124.221.128.115:35278.service. 
Feb 13 06:20:08.235853 sshd[4565]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:20:10.012621 sshd[4565]: Failed password for root from 124.221.128.115 port 35278 ssh2 Feb 13 06:20:10.537747 sshd[4565]: Received disconnect from 124.221.128.115 port 35278:11: Bye Bye [preauth] Feb 13 06:20:10.537747 sshd[4565]: Disconnected from authenticating user root 124.221.128.115 port 35278 [preauth] Feb 13 06:20:10.540290 systemd[1]: sshd@92-145.40.90.207:22-124.221.128.115:35278.service: Deactivated successfully. Feb 13 06:20:17.453087 systemd[1]: Started sshd@93-145.40.90.207:22-165.154.0.66:39400.service. Feb 13 06:20:20.002195 sshd[4569]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.0.66 user=root Feb 13 06:20:21.758919 sshd[4569]: Failed password for root from 165.154.0.66 port 39400 ssh2 Feb 13 06:20:22.285829 sshd[4569]: Received disconnect from 165.154.0.66 port 39400:11: Bye Bye [preauth] Feb 13 06:20:22.285829 sshd[4569]: Disconnected from authenticating user root 165.154.0.66 port 39400 [preauth] Feb 13 06:20:22.288419 systemd[1]: sshd@93-145.40.90.207:22-165.154.0.66:39400.service: Deactivated successfully. Feb 13 06:20:36.982928 systemd[1]: Started sshd@94-145.40.90.207:22-124.221.128.115:42818.service. Feb 13 06:20:38.044132 sshd[4575]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:20:40.272856 sshd[4575]: Failed password for root from 124.221.128.115 port 42818 ssh2 Feb 13 06:20:42.513542 sshd[4575]: Received disconnect from 124.221.128.115 port 42818:11: Bye Bye [preauth] Feb 13 06:20:42.513542 sshd[4575]: Disconnected from authenticating user root 124.221.128.115 port 42818 [preauth] Feb 13 06:20:42.516160 systemd[1]: sshd@94-145.40.90.207:22-124.221.128.115:42818.service: Deactivated successfully. 
Feb 13 06:20:50.210045 systemd[1]: Started sshd@95-145.40.90.207:22-20.127.106.136:52120.service. Feb 13 06:20:50.654852 sshd[4581]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=20.127.106.136 user=root Feb 13 06:20:52.530522 sshd[4581]: Failed password for root from 20.127.106.136 port 52120 ssh2 Feb 13 06:20:52.867459 sshd[4581]: Received disconnect from 20.127.106.136 port 52120:11: Bye Bye [preauth] Feb 13 06:20:52.867459 sshd[4581]: Disconnected from authenticating user root 20.127.106.136 port 52120 [preauth] Feb 13 06:20:52.869895 systemd[1]: sshd@95-145.40.90.207:22-20.127.106.136:52120.service: Deactivated successfully. Feb 13 06:20:57.824555 systemd[1]: Started sshd@96-145.40.90.207:22-139.59.81.65:38484.service. Feb 13 06:20:59.180785 sshd[4585]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.81.65 user=root Feb 13 06:21:01.960669 sshd[4585]: Failed password for root from 139.59.81.65 port 38484 ssh2 Feb 13 06:21:03.705636 sshd[4585]: Received disconnect from 139.59.81.65 port 38484:11: Bye Bye [preauth] Feb 13 06:21:03.705636 sshd[4585]: Disconnected from authenticating user root 139.59.81.65 port 38484 [preauth] Feb 13 06:21:03.706369 systemd[1]: sshd@96-145.40.90.207:22-139.59.81.65:38484.service: Deactivated successfully. Feb 13 06:21:09.512473 systemd[1]: Started sshd@97-145.40.90.207:22-124.221.128.115:50370.service. 
Feb 13 06:21:10.482223 sshd[4591]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:21:12.103394 sshd[4591]: Failed password for root from 124.221.128.115 port 50370 ssh2 Feb 13 06:21:12.780000 sshd[4591]: Received disconnect from 124.221.128.115 port 50370:11: Bye Bye [preauth] Feb 13 06:21:12.780000 sshd[4591]: Disconnected from authenticating user root 124.221.128.115 port 50370 [preauth] Feb 13 06:21:12.782479 systemd[1]: sshd@97-145.40.90.207:22-124.221.128.115:50370.service: Deactivated successfully. Feb 13 06:21:19.525641 systemd[1]: Started sshd@98-145.40.90.207:22-154.222.225.117:47230.service. Feb 13 06:21:20.464499 sshd[4595]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.222.225.117 user=root Feb 13 06:21:22.125720 sshd[4595]: Failed password for root from 154.222.225.117 port 47230 ssh2 Feb 13 06:21:22.762326 sshd[4595]: Received disconnect from 154.222.225.117 port 47230:11: Bye Bye [preauth] Feb 13 06:21:22.762326 sshd[4595]: Disconnected from authenticating user root 154.222.225.117 port 47230 [preauth] Feb 13 06:21:22.764892 systemd[1]: sshd@98-145.40.90.207:22-154.222.225.117:47230.service: Deactivated successfully. Feb 13 06:21:27.312644 systemd[1]: Started sshd@99-145.40.90.207:22-139.178.68.195:48480.service. Feb 13 06:21:27.342427 sshd[4600]: Accepted publickey for core from 139.178.68.195 port 48480 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:27.343335 sshd[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:27.346510 systemd-logind[1461]: New session 8 of user core. Feb 13 06:21:27.347169 systemd[1]: Started session-8.scope. Feb 13 06:21:27.482232 sshd[4600]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:27.483697 systemd[1]: sshd@99-145.40.90.207:22-139.178.68.195:48480.service: Deactivated successfully. 
Feb 13 06:21:27.484136 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 06:21:27.484494 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. Feb 13 06:21:27.484889 systemd-logind[1461]: Removed session 8. Feb 13 06:21:32.492123 systemd[1]: Started sshd@100-145.40.90.207:22-139.178.68.195:48496.service. Feb 13 06:21:32.520949 sshd[4630]: Accepted publickey for core from 139.178.68.195 port 48496 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:32.521800 sshd[4630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:32.524513 systemd-logind[1461]: New session 9 of user core. Feb 13 06:21:32.525089 systemd[1]: Started session-9.scope. Feb 13 06:21:32.612270 sshd[4630]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:32.613666 systemd[1]: sshd@100-145.40.90.207:22-139.178.68.195:48496.service: Deactivated successfully. Feb 13 06:21:32.614085 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 06:21:32.614484 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. Feb 13 06:21:32.614984 systemd-logind[1461]: Removed session 9. Feb 13 06:21:37.621790 systemd[1]: Started sshd@101-145.40.90.207:22-139.178.68.195:43028.service. Feb 13 06:21:37.650151 sshd[4657]: Accepted publickey for core from 139.178.68.195 port 43028 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:37.650999 sshd[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:37.653866 systemd-logind[1461]: New session 10 of user core. Feb 13 06:21:37.654456 systemd[1]: Started session-10.scope. Feb 13 06:21:37.743736 sshd[4657]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:37.745075 systemd[1]: sshd@101-145.40.90.207:22-139.178.68.195:43028.service: Deactivated successfully. Feb 13 06:21:37.745498 systemd[1]: session-10.scope: Deactivated successfully. 
Feb 13 06:21:37.745894 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. Feb 13 06:21:37.746317 systemd-logind[1461]: Removed session 10. Feb 13 06:21:41.114277 systemd[1]: Started sshd@102-145.40.90.207:22-124.221.128.115:57914.service. Feb 13 06:21:42.005407 sshd[4682]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:21:42.753235 systemd[1]: Started sshd@103-145.40.90.207:22-139.178.68.195:43044.service. Feb 13 06:21:42.782752 sshd[4685]: Accepted publickey for core from 139.178.68.195 port 43044 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:42.783615 sshd[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:42.786506 systemd-logind[1461]: New session 11 of user core. Feb 13 06:21:42.787124 systemd[1]: Started session-11.scope. Feb 13 06:21:42.875676 sshd[4685]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:42.877514 systemd[1]: sshd@103-145.40.90.207:22-139.178.68.195:43044.service: Deactivated successfully. Feb 13 06:21:42.877855 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 06:21:42.878197 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. Feb 13 06:21:42.878768 systemd[1]: Started sshd@104-145.40.90.207:22-139.178.68.195:43052.service. Feb 13 06:21:42.879138 systemd-logind[1461]: Removed session 11. Feb 13 06:21:42.907412 sshd[4711]: Accepted publickey for core from 139.178.68.195 port 43052 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:42.908108 sshd[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:42.910598 systemd-logind[1461]: New session 12 of user core. Feb 13 06:21:42.911057 systemd[1]: Started session-12.scope. 
Feb 13 06:21:43.300260 sshd[4711]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:43.302021 systemd[1]: sshd@104-145.40.90.207:22-139.178.68.195:43052.service: Deactivated successfully. Feb 13 06:21:43.302379 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 06:21:43.302706 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit. Feb 13 06:21:43.303339 systemd[1]: Started sshd@105-145.40.90.207:22-139.178.68.195:43062.service. Feb 13 06:21:43.303773 systemd-logind[1461]: Removed session 12. Feb 13 06:21:43.332412 sshd[4735]: Accepted publickey for core from 139.178.68.195 port 43062 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:43.333174 sshd[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:43.335596 systemd-logind[1461]: New session 13 of user core. Feb 13 06:21:43.336090 systemd[1]: Started session-13.scope. Feb 13 06:21:43.448638 sshd[4735]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:43.449998 systemd[1]: sshd@105-145.40.90.207:22-139.178.68.195:43062.service: Deactivated successfully. Feb 13 06:21:43.450456 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 06:21:43.450852 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit. Feb 13 06:21:43.451229 systemd-logind[1461]: Removed session 13. Feb 13 06:21:44.254103 systemd[1]: Starting systemd-tmpfiles-clean.service... Feb 13 06:21:44.260321 systemd-tmpfiles[4762]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 06:21:44.260549 systemd-tmpfiles[4762]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 06:21:44.261225 systemd-tmpfiles[4762]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 06:21:44.271969 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. 
Feb 13 06:21:44.272058 systemd[1]: Finished systemd-tmpfiles-clean.service. Feb 13 06:21:44.273146 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Feb 13 06:21:44.353589 sshd[4682]: Failed password for root from 124.221.128.115 port 57914 ssh2 Feb 13 06:21:46.432784 sshd[4682]: Received disconnect from 124.221.128.115 port 57914:11: Bye Bye [preauth] Feb 13 06:21:46.432784 sshd[4682]: Disconnected from authenticating user root 124.221.128.115 port 57914 [preauth] Feb 13 06:21:46.435342 systemd[1]: sshd@102-145.40.90.207:22-124.221.128.115:57914.service: Deactivated successfully. Feb 13 06:21:48.457844 systemd[1]: Started sshd@106-145.40.90.207:22-139.178.68.195:57294.service. Feb 13 06:21:48.487998 sshd[4767]: Accepted publickey for core from 139.178.68.195 port 57294 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:48.488936 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:48.492080 systemd-logind[1461]: New session 14 of user core. Feb 13 06:21:48.492714 systemd[1]: Started session-14.scope. Feb 13 06:21:48.580495 sshd[4767]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:48.582012 systemd[1]: sshd@106-145.40.90.207:22-139.178.68.195:57294.service: Deactivated successfully. Feb 13 06:21:48.582458 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 06:21:48.582894 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit. Feb 13 06:21:48.583432 systemd-logind[1461]: Removed session 14. Feb 13 06:21:53.589957 systemd[1]: Started sshd@107-145.40.90.207:22-139.178.68.195:57298.service. 
Feb 13 06:21:53.619641 sshd[4793]: Accepted publickey for core from 139.178.68.195 port 57298 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:53.620465 sshd[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:53.623158 systemd-logind[1461]: New session 15 of user core. Feb 13 06:21:53.623715 systemd[1]: Started session-15.scope. Feb 13 06:21:53.711979 sshd[4793]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:53.713476 systemd[1]: sshd@107-145.40.90.207:22-139.178.68.195:57298.service: Deactivated successfully. Feb 13 06:21:53.713912 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 06:21:53.714220 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit. Feb 13 06:21:53.714735 systemd-logind[1461]: Removed session 15. Feb 13 06:21:58.721994 systemd[1]: Started sshd@108-145.40.90.207:22-139.178.68.195:47956.service. Feb 13 06:21:58.751668 sshd[4818]: Accepted publickey for core from 139.178.68.195 port 47956 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:58.752512 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:58.755618 systemd-logind[1461]: New session 16 of user core. Feb 13 06:21:58.756224 systemd[1]: Started session-16.scope. Feb 13 06:21:58.855381 sshd[4818]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:58.861037 systemd[1]: sshd@108-145.40.90.207:22-139.178.68.195:47956.service: Deactivated successfully. Feb 13 06:21:58.863023 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 06:21:58.864839 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit. Feb 13 06:21:58.866957 systemd-logind[1461]: Removed session 16. Feb 13 06:22:03.863781 systemd[1]: Started sshd@109-145.40.90.207:22-139.178.68.195:47962.service. 
Feb 13 06:22:03.893326 sshd[4846]: Accepted publickey for core from 139.178.68.195 port 47962 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:03.894151 sshd[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:03.897088 systemd-logind[1461]: New session 17 of user core. Feb 13 06:22:03.897739 systemd[1]: Started session-17.scope. Feb 13 06:22:03.988550 sshd[4846]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:03.990002 systemd[1]: sshd@109-145.40.90.207:22-139.178.68.195:47962.service: Deactivated successfully. Feb 13 06:22:03.990456 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 06:22:03.990849 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit. Feb 13 06:22:03.991257 systemd-logind[1461]: Removed session 17. Feb 13 06:22:08.997640 systemd[1]: Started sshd@110-145.40.90.207:22-139.178.68.195:42438.service. Feb 13 06:22:09.027000 sshd[4871]: Accepted publickey for core from 139.178.68.195 port 42438 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:09.027805 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:09.030750 systemd-logind[1461]: New session 18 of user core. Feb 13 06:22:09.031388 systemd[1]: Started session-18.scope. Feb 13 06:22:09.121038 sshd[4871]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:09.122519 systemd[1]: sshd@110-145.40.90.207:22-139.178.68.195:42438.service: Deactivated successfully. Feb 13 06:22:09.122976 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 06:22:09.123363 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit. Feb 13 06:22:09.123940 systemd-logind[1461]: Removed session 18. Feb 13 06:22:10.807404 systemd[1]: Started sshd@111-145.40.90.207:22-124.221.128.115:37228.service. 
Feb 13 06:22:11.704617 sshd[4896]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:22:13.354600 systemd[1]: Started sshd@112-145.40.90.207:22-43.153.36.182:37032.service. Feb 13 06:22:13.625910 sshd[4899]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.153.36.182 user=root Feb 13 06:22:14.032558 sshd[4896]: Failed password for root from 124.221.128.115 port 37228 ssh2 Feb 13 06:22:14.130916 systemd[1]: Started sshd@113-145.40.90.207:22-139.178.68.195:42444.service. Feb 13 06:22:14.160711 sshd[4902]: Accepted publickey for core from 139.178.68.195 port 42444 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:14.161543 sshd[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:14.164547 systemd-logind[1461]: New session 19 of user core. Feb 13 06:22:14.165239 systemd[1]: Started session-19.scope. Feb 13 06:22:14.255948 sshd[4902]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:14.257436 systemd[1]: sshd@113-145.40.90.207:22-139.178.68.195:42444.service: Deactivated successfully. Feb 13 06:22:14.257898 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 06:22:14.258218 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit. Feb 13 06:22:14.258874 systemd-logind[1461]: Removed session 19. Feb 13 06:22:15.561559 sshd[4899]: Failed password for root from 43.153.36.182 port 37032 ssh2 Feb 13 06:22:15.766022 sshd[4899]: Received disconnect from 43.153.36.182 port 37032:11: Bye Bye [preauth] Feb 13 06:22:15.766022 sshd[4899]: Disconnected from authenticating user root 43.153.36.182 port 37032 [preauth] Feb 13 06:22:15.768562 systemd[1]: sshd@112-145.40.90.207:22-43.153.36.182:37032.service: Deactivated successfully. 
Feb 13 06:22:16.126387 sshd[4896]: Received disconnect from 124.221.128.115 port 37228:11: Bye Bye [preauth] Feb 13 06:22:16.126387 sshd[4896]: Disconnected from authenticating user root 124.221.128.115 port 37228 [preauth] Feb 13 06:22:16.128955 systemd[1]: sshd@111-145.40.90.207:22-124.221.128.115:37228.service: Deactivated successfully. Feb 13 06:22:19.267296 systemd[1]: Started sshd@114-145.40.90.207:22-139.178.68.195:46172.service. Feb 13 06:22:19.300464 sshd[4929]: Accepted publickey for core from 139.178.68.195 port 46172 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:19.301253 sshd[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:19.303981 systemd-logind[1461]: New session 20 of user core. Feb 13 06:22:19.304519 systemd[1]: Started session-20.scope. Feb 13 06:22:19.393878 sshd[4929]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:19.395402 systemd[1]: sshd@114-145.40.90.207:22-139.178.68.195:46172.service: Deactivated successfully. Feb 13 06:22:19.395841 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 06:22:19.396193 systemd-logind[1461]: Session 20 logged out. Waiting for processes to exit. Feb 13 06:22:19.396725 systemd-logind[1461]: Removed session 20. Feb 13 06:22:21.192948 systemd[1]: Started sshd@115-145.40.90.207:22-165.154.0.66:52256.service. Feb 13 06:22:22.493238 sshd[4954]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.0.66 user=root Feb 13 06:22:24.403998 systemd[1]: Started sshd@116-145.40.90.207:22-139.178.68.195:46176.service. Feb 13 06:22:24.433702 sshd[4958]: Accepted publickey for core from 139.178.68.195 port 46176 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:24.434514 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:24.437464 systemd-logind[1461]: New session 21 of user core. 
Feb 13 06:22:24.438190 systemd[1]: Started session-21.scope. Feb 13 06:22:24.530294 sshd[4958]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:24.531985 systemd[1]: sshd@116-145.40.90.207:22-139.178.68.195:46176.service: Deactivated successfully. Feb 13 06:22:24.532481 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 06:22:24.532963 systemd-logind[1461]: Session 21 logged out. Waiting for processes to exit. Feb 13 06:22:24.533701 systemd-logind[1461]: Removed session 21. Feb 13 06:22:24.666369 sshd[4954]: Failed password for root from 165.154.0.66 port 52256 ssh2 Feb 13 06:22:26.917267 sshd[4954]: Received disconnect from 165.154.0.66 port 52256:11: Bye Bye [preauth] Feb 13 06:22:26.917267 sshd[4954]: Disconnected from authenticating user root 165.154.0.66 port 52256 [preauth] Feb 13 06:22:26.919839 systemd[1]: sshd@115-145.40.90.207:22-165.154.0.66:52256.service: Deactivated successfully. Feb 13 06:22:29.539866 systemd[1]: Started sshd@117-145.40.90.207:22-139.178.68.195:45234.service. Feb 13 06:22:29.569119 sshd[4986]: Accepted publickey for core from 139.178.68.195 port 45234 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:29.569963 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:29.573010 systemd-logind[1461]: New session 22 of user core. Feb 13 06:22:29.573668 systemd[1]: Started session-22.scope. Feb 13 06:22:29.664624 sshd[4986]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:29.666053 systemd[1]: sshd@117-145.40.90.207:22-139.178.68.195:45234.service: Deactivated successfully. Feb 13 06:22:29.666490 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 06:22:29.666884 systemd-logind[1461]: Session 22 logged out. Waiting for processes to exit. Feb 13 06:22:29.667368 systemd-logind[1461]: Removed session 22. Feb 13 06:22:34.674856 systemd[1]: Started sshd@118-145.40.90.207:22-139.178.68.195:45236.service. 
Feb 13 06:22:34.704528 sshd[5012]: Accepted publickey for core from 139.178.68.195 port 45236 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:34.705512 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:34.708583 systemd-logind[1461]: New session 23 of user core. Feb 13 06:22:34.709271 systemd[1]: Started session-23.scope. Feb 13 06:22:34.801361 sshd[5012]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:34.802916 systemd[1]: sshd@118-145.40.90.207:22-139.178.68.195:45236.service: Deactivated successfully. Feb 13 06:22:34.803381 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 06:22:34.803776 systemd-logind[1461]: Session 23 logged out. Waiting for processes to exit. Feb 13 06:22:34.804243 systemd-logind[1461]: Removed session 23. Feb 13 06:22:39.811478 systemd[1]: Started sshd@119-145.40.90.207:22-139.178.68.195:52898.service. Feb 13 06:22:39.847579 sshd[5040]: Accepted publickey for core from 139.178.68.195 port 52898 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:39.848227 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:39.850942 systemd-logind[1461]: New session 24 of user core. Feb 13 06:22:39.851580 systemd[1]: Started session-24.scope. Feb 13 06:22:39.938348 sshd[5040]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:39.939700 systemd[1]: sshd@119-145.40.90.207:22-139.178.68.195:52898.service: Deactivated successfully. Feb 13 06:22:39.940158 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 06:22:39.940575 systemd-logind[1461]: Session 24 logged out. Waiting for processes to exit. Feb 13 06:22:39.941145 systemd-logind[1461]: Removed session 24. Feb 13 06:22:40.722595 systemd[1]: Started sshd@120-145.40.90.207:22-124.221.128.115:44776.service. 
Feb 13 06:22:41.527024 sshd[5065]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:22:43.308688 sshd[5065]: Failed password for root from 124.221.128.115 port 44776 ssh2 Feb 13 06:22:43.799908 sshd[5065]: Received disconnect from 124.221.128.115 port 44776:11: Bye Bye [preauth] Feb 13 06:22:43.799908 sshd[5065]: Disconnected from authenticating user root 124.221.128.115 port 44776 [preauth] Feb 13 06:22:43.802435 systemd[1]: sshd@120-145.40.90.207:22-124.221.128.115:44776.service: Deactivated successfully. Feb 13 06:22:44.948602 systemd[1]: Started sshd@121-145.40.90.207:22-139.178.68.195:52914.service. Feb 13 06:22:44.980263 sshd[5071]: Accepted publickey for core from 139.178.68.195 port 52914 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:44.981285 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:44.984671 systemd-logind[1461]: New session 25 of user core. Feb 13 06:22:44.985449 systemd[1]: Started session-25.scope. Feb 13 06:22:45.076083 sshd[5071]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:45.077580 systemd[1]: sshd@121-145.40.90.207:22-139.178.68.195:52914.service: Deactivated successfully. Feb 13 06:22:45.078016 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 06:22:45.078366 systemd-logind[1461]: Session 25 logged out. Waiting for processes to exit. Feb 13 06:22:45.078826 systemd-logind[1461]: Removed session 25. Feb 13 06:22:50.085648 systemd[1]: Started sshd@122-145.40.90.207:22-139.178.68.195:39386.service. Feb 13 06:22:50.115120 sshd[5095]: Accepted publickey for core from 139.178.68.195 port 39386 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:50.115992 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:50.118943 systemd-logind[1461]: New session 26 of user core. 
Feb 13 06:22:50.119499 systemd[1]: Started session-26.scope. Feb 13 06:22:50.210655 sshd[5095]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:50.212031 systemd[1]: sshd@122-145.40.90.207:22-139.178.68.195:39386.service: Deactivated successfully. Feb 13 06:22:50.212458 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 06:22:50.212860 systemd-logind[1461]: Session 26 logged out. Waiting for processes to exit. Feb 13 06:22:50.213366 systemd-logind[1461]: Removed session 26. Feb 13 06:22:55.220028 systemd[1]: Started sshd@123-145.40.90.207:22-139.178.68.195:39400.service. Feb 13 06:22:55.249652 sshd[5120]: Accepted publickey for core from 139.178.68.195 port 39400 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:22:55.250499 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:22:55.253398 systemd-logind[1461]: New session 27 of user core. Feb 13 06:22:55.254074 systemd[1]: Started session-27.scope. Feb 13 06:22:55.338876 sshd[5120]: pam_unix(sshd:session): session closed for user core Feb 13 06:22:55.340368 systemd[1]: sshd@123-145.40.90.207:22-139.178.68.195:39400.service: Deactivated successfully. Feb 13 06:22:55.340791 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 06:22:55.341125 systemd-logind[1461]: Session 27 logged out. Waiting for processes to exit. Feb 13 06:22:55.341664 systemd-logind[1461]: Removed session 27. Feb 13 06:22:58.672005 systemd[1]: Started sshd@124-145.40.90.207:22-141.98.11.90:32100.service. Feb 13 06:23:00.349184 systemd[1]: Started sshd@125-145.40.90.207:22-139.178.68.195:51568.service. Feb 13 06:23:00.379721 sshd[5150]: Accepted publickey for core from 139.178.68.195 port 51568 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:00.380401 sshd[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:00.382668 systemd-logind[1461]: New session 28 of user core. 
Feb 13 06:23:00.383184 systemd[1]: Started session-28.scope. Feb 13 06:23:00.468677 sshd[5150]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:00.470043 systemd[1]: sshd@125-145.40.90.207:22-139.178.68.195:51568.service: Deactivated successfully. Feb 13 06:23:00.470475 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 06:23:00.470884 systemd-logind[1461]: Session 28 logged out. Waiting for processes to exit. Feb 13 06:23:00.471438 systemd-logind[1461]: Removed session 28. Feb 13 06:23:00.533570 sshd[5145]: Invalid user teste from 141.98.11.90 port 32100 Feb 13 06:23:00.809336 sshd[5145]: pam_faillock(sshd:auth): User unknown Feb 13 06:23:00.810330 sshd[5145]: pam_unix(sshd:auth): check pass; user unknown Feb 13 06:23:00.810420 sshd[5145]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.90 Feb 13 06:23:00.811324 sshd[5145]: pam_faillock(sshd:auth): User unknown Feb 13 06:23:02.868459 sshd[5145]: Failed password for invalid user teste from 141.98.11.90 port 32100 ssh2 Feb 13 06:23:04.612529 sshd[5145]: Connection closed by invalid user teste 141.98.11.90 port 32100 [preauth] Feb 13 06:23:04.615077 systemd[1]: sshd@124-145.40.90.207:22-141.98.11.90:32100.service: Deactivated successfully. Feb 13 06:23:05.479367 systemd[1]: Started sshd@126-145.40.90.207:22-139.178.68.195:51578.service. Feb 13 06:23:05.509607 sshd[5176]: Accepted publickey for core from 139.178.68.195 port 51578 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:05.510619 sshd[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:05.514019 systemd-logind[1461]: New session 29 of user core. Feb 13 06:23:05.514778 systemd[1]: Started session-29.scope. Feb 13 06:23:05.603692 sshd[5176]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:05.605178 systemd[1]: sshd@126-145.40.90.207:22-139.178.68.195:51578.service: Deactivated successfully. 
Feb 13 06:23:05.605654 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 06:23:05.606051 systemd-logind[1461]: Session 29 logged out. Waiting for processes to exit. Feb 13 06:23:05.606598 systemd-logind[1461]: Removed session 29. Feb 13 06:23:10.024651 systemd[1]: Started sshd@127-145.40.90.207:22-124.221.128.115:52322.service. Feb 13 06:23:10.613991 systemd[1]: Started sshd@128-145.40.90.207:22-139.178.68.195:41236.service. Feb 13 06:23:10.643151 sshd[5204]: Accepted publickey for core from 139.178.68.195 port 41236 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:10.644010 sshd[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:10.646952 systemd-logind[1461]: New session 30 of user core. Feb 13 06:23:10.647563 systemd[1]: Started session-30.scope. Feb 13 06:23:10.735451 sshd[5204]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:10.737033 systemd[1]: sshd@128-145.40.90.207:22-139.178.68.195:41236.service: Deactivated successfully. Feb 13 06:23:10.737518 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 06:23:10.737953 systemd-logind[1461]: Session 30 logged out. Waiting for processes to exit. Feb 13 06:23:10.738460 systemd-logind[1461]: Removed session 30. Feb 13 06:23:10.941497 sshd[5201]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:23:12.703158 sshd[5201]: Failed password for root from 124.221.128.115 port 52322 ssh2 Feb 13 06:23:13.242482 sshd[5201]: Received disconnect from 124.221.128.115 port 52322:11: Bye Bye [preauth] Feb 13 06:23:13.242482 sshd[5201]: Disconnected from authenticating user root 124.221.128.115 port 52322 [preauth] Feb 13 06:23:13.245031 systemd[1]: sshd@127-145.40.90.207:22-124.221.128.115:52322.service: Deactivated successfully. Feb 13 06:23:15.745677 systemd[1]: Started sshd@129-145.40.90.207:22-139.178.68.195:41242.service. 
Feb 13 06:23:15.774575 sshd[5230]: Accepted publickey for core from 139.178.68.195 port 41242 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:15.775418 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:15.778169 systemd-logind[1461]: New session 31 of user core. Feb 13 06:23:15.778785 systemd[1]: Started session-31.scope. Feb 13 06:23:15.863000 sshd[5230]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:15.864632 systemd[1]: sshd@129-145.40.90.207:22-139.178.68.195:41242.service: Deactivated successfully. Feb 13 06:23:15.865098 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 06:23:15.865586 systemd-logind[1461]: Session 31 logged out. Waiting for processes to exit. Feb 13 06:23:15.866158 systemd-logind[1461]: Removed session 31. Feb 13 06:23:20.872535 systemd[1]: Started sshd@130-145.40.90.207:22-139.178.68.195:46910.service. Feb 13 06:23:20.901228 sshd[5255]: Accepted publickey for core from 139.178.68.195 port 46910 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:20.902091 sshd[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:20.904979 systemd-logind[1461]: New session 32 of user core. Feb 13 06:23:20.905627 systemd[1]: Started session-32.scope. Feb 13 06:23:20.994573 sshd[5255]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:20.996030 systemd[1]: sshd@130-145.40.90.207:22-139.178.68.195:46910.service: Deactivated successfully. Feb 13 06:23:20.996487 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 06:23:20.996882 systemd-logind[1461]: Session 32 logged out. Waiting for processes to exit. Feb 13 06:23:20.997273 systemd-logind[1461]: Removed session 32. Feb 13 06:23:26.005747 systemd[1]: Started sshd@131-145.40.90.207:22-139.178.68.195:46920.service. 
Feb 13 06:23:26.037960 sshd[5280]: Accepted publickey for core from 139.178.68.195 port 46920 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:26.041205 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:26.051794 systemd-logind[1461]: New session 33 of user core. Feb 13 06:23:26.054388 systemd[1]: Started session-33.scope. Feb 13 06:23:26.156649 sshd[5280]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:26.158037 systemd[1]: sshd@131-145.40.90.207:22-139.178.68.195:46920.service: Deactivated successfully. Feb 13 06:23:26.158524 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 06:23:26.158926 systemd-logind[1461]: Session 33 logged out. Waiting for processes to exit. Feb 13 06:23:26.159361 systemd-logind[1461]: Removed session 33. Feb 13 06:23:31.166273 systemd[1]: Started sshd@132-145.40.90.207:22-139.178.68.195:52794.service. Feb 13 06:23:31.194725 sshd[5308]: Accepted publickey for core from 139.178.68.195 port 52794 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:31.195572 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:31.198486 systemd-logind[1461]: New session 34 of user core. Feb 13 06:23:31.199432 systemd[1]: Started session-34.scope. Feb 13 06:23:31.289492 sshd[5308]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:31.290858 systemd[1]: sshd@132-145.40.90.207:22-139.178.68.195:52794.service: Deactivated successfully. Feb 13 06:23:31.291319 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 06:23:31.291693 systemd-logind[1461]: Session 34 logged out. Waiting for processes to exit. Feb 13 06:23:31.292105 systemd-logind[1461]: Removed session 34. Feb 13 06:23:36.299647 systemd[1]: Started sshd@133-145.40.90.207:22-139.178.68.195:53334.service. 
Feb 13 06:23:36.329396 sshd[5334]: Accepted publickey for core from 139.178.68.195 port 53334 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:36.330235 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:36.333118 systemd-logind[1461]: New session 35 of user core. Feb 13 06:23:36.334062 systemd[1]: Started session-35.scope. Feb 13 06:23:36.423679 sshd[5334]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:36.425022 systemd[1]: sshd@133-145.40.90.207:22-139.178.68.195:53334.service: Deactivated successfully. Feb 13 06:23:36.425487 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 06:23:36.425828 systemd-logind[1461]: Session 35 logged out. Waiting for processes to exit. Feb 13 06:23:36.426221 systemd-logind[1461]: Removed session 35. Feb 13 06:23:41.193939 systemd[1]: Started sshd@134-145.40.90.207:22-124.221.128.115:59868.service. Feb 13 06:23:41.437009 systemd[1]: Started sshd@135-145.40.90.207:22-139.178.68.195:53342.service. Feb 13 06:23:41.468859 sshd[5362]: Accepted publickey for core from 139.178.68.195 port 53342 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:41.469549 sshd[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:41.472140 systemd-logind[1461]: New session 36 of user core. Feb 13 06:23:41.472907 systemd[1]: Started session-36.scope. Feb 13 06:23:41.560460 sshd[5362]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:41.561913 systemd[1]: sshd@135-145.40.90.207:22-139.178.68.195:53342.service: Deactivated successfully. Feb 13 06:23:41.562381 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 06:23:41.562754 systemd-logind[1461]: Session 36 logged out. Waiting for processes to exit. Feb 13 06:23:41.563157 systemd-logind[1461]: Removed session 36. 
Feb 13 06:23:41.984520 sshd[5359]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:23:44.338110 sshd[5359]: Failed password for root from 124.221.128.115 port 59868 ssh2 Feb 13 06:23:46.394156 sshd[5359]: Received disconnect from 124.221.128.115 port 59868:11: Bye Bye [preauth] Feb 13 06:23:46.394156 sshd[5359]: Disconnected from authenticating user root 124.221.128.115 port 59868 [preauth] Feb 13 06:23:46.396654 systemd[1]: sshd@134-145.40.90.207:22-124.221.128.115:59868.service: Deactivated successfully. Feb 13 06:23:46.569835 systemd[1]: Started sshd@136-145.40.90.207:22-139.178.68.195:50852.service. Feb 13 06:23:46.599213 sshd[5391]: Accepted publickey for core from 139.178.68.195 port 50852 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:46.599990 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:46.602827 systemd-logind[1461]: New session 37 of user core. Feb 13 06:23:46.603453 systemd[1]: Started session-37.scope. Feb 13 06:23:46.690672 sshd[5391]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:46.692014 systemd[1]: sshd@136-145.40.90.207:22-139.178.68.195:50852.service: Deactivated successfully. Feb 13 06:23:46.692448 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 06:23:46.692806 systemd-logind[1461]: Session 37 logged out. Waiting for processes to exit. Feb 13 06:23:46.693243 systemd-logind[1461]: Removed session 37. Feb 13 06:23:51.695069 systemd[1]: Started sshd@137-145.40.90.207:22-139.178.68.195:50866.service. Feb 13 06:23:51.725680 sshd[5417]: Accepted publickey for core from 139.178.68.195 port 50866 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:51.726466 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:51.729201 systemd-logind[1461]: New session 38 of user core. 
Feb 13 06:23:51.729811 systemd[1]: Started session-38.scope. Feb 13 06:23:51.821018 sshd[5417]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:51.822533 systemd[1]: sshd@137-145.40.90.207:22-139.178.68.195:50866.service: Deactivated successfully. Feb 13 06:23:51.822962 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 06:23:51.823249 systemd-logind[1461]: Session 38 logged out. Waiting for processes to exit. Feb 13 06:23:51.823784 systemd-logind[1461]: Removed session 38. Feb 13 06:23:56.825114 systemd[1]: Started sshd@138-145.40.90.207:22-139.178.68.195:60202.service. Feb 13 06:23:56.854219 sshd[5443]: Accepted publickey for core from 139.178.68.195 port 60202 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:23:56.855076 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:23:56.858034 systemd-logind[1461]: New session 39 of user core. Feb 13 06:23:56.858620 systemd[1]: Started session-39.scope. Feb 13 06:23:56.947449 sshd[5443]: pam_unix(sshd:session): session closed for user core Feb 13 06:23:56.948987 systemd[1]: sshd@138-145.40.90.207:22-139.178.68.195:60202.service: Deactivated successfully. Feb 13 06:23:56.949444 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 06:23:56.949796 systemd-logind[1461]: Session 39 logged out. Waiting for processes to exit. Feb 13 06:23:56.950239 systemd-logind[1461]: Removed session 39. Feb 13 06:24:01.957154 systemd[1]: Started sshd@139-145.40.90.207:22-139.178.68.195:60210.service. Feb 13 06:24:01.986167 sshd[5471]: Accepted publickey for core from 139.178.68.195 port 60210 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:01.987040 sshd[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:01.989960 systemd-logind[1461]: New session 40 of user core. Feb 13 06:24:01.990564 systemd[1]: Started session-40.scope. 
Feb 13 06:24:02.080368 sshd[5471]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:02.081885 systemd[1]: sshd@139-145.40.90.207:22-139.178.68.195:60210.service: Deactivated successfully. Feb 13 06:24:02.082328 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 06:24:02.082719 systemd-logind[1461]: Session 40 logged out. Waiting for processes to exit. Feb 13 06:24:02.083230 systemd-logind[1461]: Removed session 40. Feb 13 06:24:07.089923 systemd[1]: Started sshd@140-145.40.90.207:22-139.178.68.195:41116.service. Feb 13 06:24:07.118956 sshd[5496]: Accepted publickey for core from 139.178.68.195 port 41116 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:07.119648 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:07.122188 systemd-logind[1461]: New session 41 of user core. Feb 13 06:24:07.122775 systemd[1]: Started session-41.scope. Feb 13 06:24:07.210400 sshd[5496]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:07.211922 systemd[1]: sshd@140-145.40.90.207:22-139.178.68.195:41116.service: Deactivated successfully. Feb 13 06:24:07.212374 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 06:24:07.212782 systemd-logind[1461]: Session 41 logged out. Waiting for processes to exit. Feb 13 06:24:07.213182 systemd-logind[1461]: Removed session 41. Feb 13 06:24:12.221938 systemd[1]: Started sshd@141-145.40.90.207:22-139.178.68.195:41124.service. Feb 13 06:24:12.254534 sshd[5522]: Accepted publickey for core from 139.178.68.195 port 41124 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:12.255293 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:12.257803 systemd-logind[1461]: New session 42 of user core. Feb 13 06:24:12.258325 systemd[1]: Started session-42.scope. 
Feb 13 06:24:12.348134 sshd[5522]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:12.349585 systemd[1]: sshd@141-145.40.90.207:22-139.178.68.195:41124.service: Deactivated successfully. Feb 13 06:24:12.350038 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 06:24:12.350481 systemd-logind[1461]: Session 42 logged out. Waiting for processes to exit. Feb 13 06:24:12.351021 systemd-logind[1461]: Removed session 42. Feb 13 06:24:13.785580 systemd[1]: Started sshd@142-145.40.90.207:22-124.221.128.115:39190.service. Feb 13 06:24:14.532744 sshd[5546]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:24:14.532971 sshd[5546]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 13 06:24:16.414744 sshd[5546]: Failed password for root from 124.221.128.115 port 39190 ssh2 Feb 13 06:24:16.794334 sshd[5546]: Received disconnect from 124.221.128.115 port 39190:11: Bye Bye [preauth] Feb 13 06:24:16.794334 sshd[5546]: Disconnected from authenticating user root 124.221.128.115 port 39190 [preauth] Feb 13 06:24:16.796887 systemd[1]: sshd@142-145.40.90.207:22-124.221.128.115:39190.service: Deactivated successfully. Feb 13 06:24:17.357318 systemd[1]: Started sshd@143-145.40.90.207:22-139.178.68.195:59618.service. Feb 13 06:24:17.386554 sshd[5550]: Accepted publickey for core from 139.178.68.195 port 59618 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:17.387420 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:17.390297 systemd-logind[1461]: New session 43 of user core. Feb 13 06:24:17.390956 systemd[1]: Started session-43.scope. Feb 13 06:24:17.478885 sshd[5550]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:17.480375 systemd[1]: sshd@143-145.40.90.207:22-139.178.68.195:59618.service: Deactivated successfully. 
Feb 13 06:24:17.480789 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 06:24:17.481117 systemd-logind[1461]: Session 43 logged out. Waiting for processes to exit. Feb 13 06:24:17.481707 systemd-logind[1461]: Removed session 43. Feb 13 06:24:22.488549 systemd[1]: Started sshd@144-145.40.90.207:22-139.178.68.195:59630.service. Feb 13 06:24:22.518361 sshd[5575]: Accepted publickey for core from 139.178.68.195 port 59630 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:22.519216 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:22.522093 systemd-logind[1461]: New session 44 of user core. Feb 13 06:24:22.522672 systemd[1]: Started session-44.scope. Feb 13 06:24:22.611674 sshd[5575]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:22.613069 systemd[1]: sshd@144-145.40.90.207:22-139.178.68.195:59630.service: Deactivated successfully. Feb 13 06:24:22.613501 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 06:24:22.613928 systemd-logind[1461]: Session 44 logged out. Waiting for processes to exit. Feb 13 06:24:22.614480 systemd-logind[1461]: Removed session 44. Feb 13 06:24:25.193892 systemd[1]: Started sshd@145-145.40.90.207:22-165.154.0.66:33694.service. Feb 13 06:24:27.621822 systemd[1]: Started sshd@146-145.40.90.207:22-139.178.68.195:55136.service. Feb 13 06:24:27.651752 sshd[5604]: Accepted publickey for core from 139.178.68.195 port 55136 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:27.652596 sshd[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:27.655614 systemd-logind[1461]: New session 45 of user core. Feb 13 06:24:27.656236 systemd[1]: Started session-45.scope. Feb 13 06:24:27.743432 sshd[5604]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:27.744915 systemd[1]: sshd@146-145.40.90.207:22-139.178.68.195:55136.service: Deactivated successfully. 
Feb 13 06:24:27.745435 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 06:24:27.745857 systemd-logind[1461]: Session 45 logged out. Waiting for processes to exit. Feb 13 06:24:27.746270 systemd-logind[1461]: Removed session 45. Feb 13 06:24:32.752529 systemd[1]: Started sshd@147-145.40.90.207:22-139.178.68.195:55140.service. Feb 13 06:24:32.817718 sshd[5631]: Accepted publickey for core from 139.178.68.195 port 55140 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:32.819034 sshd[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:32.823770 systemd-logind[1461]: New session 46 of user core. Feb 13 06:24:32.824784 systemd[1]: Started session-46.scope. Feb 13 06:24:32.917820 sshd[5631]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:32.919397 systemd[1]: sshd@147-145.40.90.207:22-139.178.68.195:55140.service: Deactivated successfully. Feb 13 06:24:32.919843 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 06:24:32.920164 systemd-logind[1461]: Session 46 logged out. Waiting for processes to exit. Feb 13 06:24:32.920840 systemd-logind[1461]: Removed session 46. Feb 13 06:24:33.196248 sshd[5601]: Connection closed by 165.154.0.66 port 33694 [preauth] Feb 13 06:24:33.196704 systemd[1]: sshd@145-145.40.90.207:22-165.154.0.66:33694.service: Deactivated successfully. Feb 13 06:24:37.927280 systemd[1]: Started sshd@148-145.40.90.207:22-139.178.68.195:51702.service. Feb 13 06:24:37.956158 sshd[5657]: Accepted publickey for core from 139.178.68.195 port 51702 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:37.957029 sshd[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:37.959961 systemd-logind[1461]: New session 47 of user core. Feb 13 06:24:37.960556 systemd[1]: Started session-47.scope. 
Feb 13 06:24:38.047577 sshd[5657]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:38.049013 systemd[1]: sshd@148-145.40.90.207:22-139.178.68.195:51702.service: Deactivated successfully. Feb 13 06:24:38.049436 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 06:24:38.049846 systemd-logind[1461]: Session 47 logged out. Waiting for processes to exit. Feb 13 06:24:38.050264 systemd-logind[1461]: Removed session 47. Feb 13 06:24:43.058806 systemd[1]: Started sshd@149-145.40.90.207:22-139.178.68.195:51710.service. Feb 13 06:24:43.091147 sshd[5682]: Accepted publickey for core from 139.178.68.195 port 51710 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:43.091886 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:43.094598 systemd-logind[1461]: New session 48 of user core. Feb 13 06:24:43.095082 systemd[1]: Started session-48.scope. Feb 13 06:24:43.186277 sshd[5682]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:43.188073 systemd[1]: sshd@149-145.40.90.207:22-139.178.68.195:51710.service: Deactivated successfully. Feb 13 06:24:43.188407 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 06:24:43.188800 systemd-logind[1461]: Session 48 logged out. Waiting for processes to exit. Feb 13 06:24:43.189366 systemd[1]: Started sshd@150-145.40.90.207:22-139.178.68.195:51722.service. Feb 13 06:24:43.189887 systemd-logind[1461]: Removed session 48. Feb 13 06:24:43.217657 sshd[5707]: Accepted publickey for core from 139.178.68.195 port 51722 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:43.218317 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:43.220464 systemd-logind[1461]: New session 49 of user core. Feb 13 06:24:43.220951 systemd[1]: Started session-49.scope. 
Feb 13 06:24:44.320628 sshd[5707]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:44.322442 systemd[1]: sshd@150-145.40.90.207:22-139.178.68.195:51722.service: Deactivated successfully. Feb 13 06:24:44.322755 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 06:24:44.323070 systemd-logind[1461]: Session 49 logged out. Waiting for processes to exit. Feb 13 06:24:44.323670 systemd[1]: Started sshd@151-145.40.90.207:22-139.178.68.195:51732.service. Feb 13 06:24:44.324120 systemd-logind[1461]: Removed session 49. Feb 13 06:24:44.354678 sshd[5731]: Accepted publickey for core from 139.178.68.195 port 51732 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:44.355610 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:44.358875 systemd-logind[1461]: New session 50 of user core. Feb 13 06:24:44.359766 systemd[1]: Started session-50.scope. Feb 13 06:24:45.006156 sshd[5731]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:45.008904 systemd[1]: sshd@151-145.40.90.207:22-139.178.68.195:51732.service: Deactivated successfully. Feb 13 06:24:45.009447 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 06:24:45.010049 systemd-logind[1461]: Session 50 logged out. Waiting for processes to exit. Feb 13 06:24:45.011185 systemd[1]: Started sshd@152-145.40.90.207:22-139.178.68.195:51744.service. Feb 13 06:24:45.012002 systemd-logind[1461]: Removed session 50. Feb 13 06:24:45.056899 sshd[5759]: Accepted publickey for core from 139.178.68.195 port 51744 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:45.058083 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:45.061303 systemd-logind[1461]: New session 51 of user core. Feb 13 06:24:45.062105 systemd[1]: Started session-51.scope. 
Feb 13 06:24:45.282448 sshd[5759]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:45.284711 systemd[1]: sshd@152-145.40.90.207:22-139.178.68.195:51744.service: Deactivated successfully. Feb 13 06:24:45.285229 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 06:24:45.285733 systemd-logind[1461]: Session 51 logged out. Waiting for processes to exit. Feb 13 06:24:45.286573 systemd[1]: Started sshd@153-145.40.90.207:22-139.178.68.195:51748.service. Feb 13 06:24:45.287050 systemd-logind[1461]: Removed session 51. Feb 13 06:24:45.316139 sshd[5785]: Accepted publickey for core from 139.178.68.195 port 51748 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:45.316940 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:45.319446 systemd-logind[1461]: New session 52 of user core. Feb 13 06:24:45.320031 systemd[1]: Started session-52.scope. Feb 13 06:24:45.464319 sshd[5785]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:45.465840 systemd[1]: sshd@153-145.40.90.207:22-139.178.68.195:51748.service: Deactivated successfully. Feb 13 06:24:45.466311 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 06:24:45.466751 systemd-logind[1461]: Session 52 logged out. Waiting for processes to exit. Feb 13 06:24:45.467219 systemd-logind[1461]: Removed session 52. Feb 13 06:24:48.374014 systemd[1]: Started sshd@154-145.40.90.207:22-124.221.128.115:46744.service. Feb 13 06:24:49.224766 sshd[5810]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:24:50.473108 systemd[1]: Started sshd@155-145.40.90.207:22-139.178.68.195:45924.service. 
Feb 13 06:24:50.503362 sshd[5813]: Accepted publickey for core from 139.178.68.195 port 45924 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:50.504414 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:50.507292 systemd-logind[1461]: New session 53 of user core. Feb 13 06:24:50.508038 systemd[1]: Started session-53.scope. Feb 13 06:24:50.594189 sshd[5813]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:50.595571 systemd[1]: sshd@155-145.40.90.207:22-139.178.68.195:45924.service: Deactivated successfully. Feb 13 06:24:50.595994 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 06:24:50.596348 systemd-logind[1461]: Session 53 logged out. Waiting for processes to exit. Feb 13 06:24:50.596804 systemd-logind[1461]: Removed session 53. Feb 13 06:24:50.910766 sshd[5810]: Failed password for root from 124.221.128.115 port 46744 ssh2 Feb 13 06:24:51.500168 sshd[5810]: Received disconnect from 124.221.128.115 port 46744:11: Bye Bye [preauth] Feb 13 06:24:51.500168 sshd[5810]: Disconnected from authenticating user root 124.221.128.115 port 46744 [preauth] Feb 13 06:24:51.502784 systemd[1]: sshd@154-145.40.90.207:22-124.221.128.115:46744.service: Deactivated successfully. Feb 13 06:24:55.603857 systemd[1]: Started sshd@156-145.40.90.207:22-139.178.68.195:45926.service. Feb 13 06:24:55.632597 sshd[5839]: Accepted publickey for core from 139.178.68.195 port 45926 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:24:55.633566 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:24:55.636338 systemd-logind[1461]: New session 54 of user core. Feb 13 06:24:55.637010 systemd[1]: Started session-54.scope. Feb 13 06:24:55.726800 sshd[5839]: pam_unix(sshd:session): session closed for user core Feb 13 06:24:55.728172 systemd[1]: sshd@156-145.40.90.207:22-139.178.68.195:45926.service: Deactivated successfully. 
Feb 13 06:24:55.728611 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 06:24:55.728983 systemd-logind[1461]: Session 54 logged out. Waiting for processes to exit. Feb 13 06:24:55.729492 systemd-logind[1461]: Removed session 54. Feb 13 06:25:00.736120 systemd[1]: Started sshd@157-145.40.90.207:22-139.178.68.195:58678.service. Feb 13 06:25:00.764666 sshd[5867]: Accepted publickey for core from 139.178.68.195 port 58678 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:00.765594 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:00.768588 systemd-logind[1461]: New session 55 of user core. Feb 13 06:25:00.769231 systemd[1]: Started session-55.scope. Feb 13 06:25:00.856717 sshd[5867]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:00.858089 systemd[1]: sshd@157-145.40.90.207:22-139.178.68.195:58678.service: Deactivated successfully. Feb 13 06:25:00.858540 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 06:25:00.858964 systemd-logind[1461]: Session 55 logged out. Waiting for processes to exit. Feb 13 06:25:00.859618 systemd-logind[1461]: Removed session 55. Feb 13 06:25:05.860102 systemd[1]: Started sshd@158-145.40.90.207:22-139.178.68.195:58690.service. Feb 13 06:25:05.891230 sshd[5894]: Accepted publickey for core from 139.178.68.195 port 58690 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:05.891989 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:05.895004 systemd-logind[1461]: New session 56 of user core. Feb 13 06:25:05.895571 systemd[1]: Started session-56.scope. Feb 13 06:25:05.981242 sshd[5894]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:05.982756 systemd[1]: sshd@158-145.40.90.207:22-139.178.68.195:58690.service: Deactivated successfully. Feb 13 06:25:05.983168 systemd[1]: session-56.scope: Deactivated successfully. 
Feb 13 06:25:05.983540 systemd-logind[1461]: Session 56 logged out. Waiting for processes to exit. Feb 13 06:25:05.984045 systemd-logind[1461]: Removed session 56. Feb 13 06:25:10.990796 systemd[1]: Started sshd@159-145.40.90.207:22-139.178.68.195:54944.service. Feb 13 06:25:11.019568 sshd[5919]: Accepted publickey for core from 139.178.68.195 port 54944 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:11.020434 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:11.023379 systemd-logind[1461]: New session 57 of user core. Feb 13 06:25:11.024007 systemd[1]: Started session-57.scope. Feb 13 06:25:11.109786 sshd[5919]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:11.111188 systemd[1]: sshd@159-145.40.90.207:22-139.178.68.195:54944.service: Deactivated successfully. Feb 13 06:25:11.111618 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 06:25:11.111984 systemd-logind[1461]: Session 57 logged out. Waiting for processes to exit. Feb 13 06:25:11.112467 systemd-logind[1461]: Removed session 57. Feb 13 06:25:16.118869 systemd[1]: Started sshd@160-145.40.90.207:22-139.178.68.195:53564.service. Feb 13 06:25:16.147682 sshd[5942]: Accepted publickey for core from 139.178.68.195 port 53564 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:16.148443 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:16.151048 systemd-logind[1461]: New session 58 of user core. Feb 13 06:25:16.151604 systemd[1]: Started session-58.scope. Feb 13 06:25:16.237704 sshd[5942]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:16.239061 systemd[1]: sshd@160-145.40.90.207:22-139.178.68.195:53564.service: Deactivated successfully. Feb 13 06:25:16.239502 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 06:25:16.239920 systemd-logind[1461]: Session 58 logged out. Waiting for processes to exit. 
Feb 13 06:25:16.240436 systemd-logind[1461]: Removed session 58. Feb 13 06:25:19.531438 systemd[1]: Started sshd@161-145.40.90.207:22-124.221.128.115:54292.service. Feb 13 06:25:20.418865 sshd[5966]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:25:21.247650 systemd[1]: Started sshd@162-145.40.90.207:22-139.178.68.195:53576.service. Feb 13 06:25:21.277090 sshd[5969]: Accepted publickey for core from 139.178.68.195 port 53576 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:21.277904 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:21.281011 systemd-logind[1461]: New session 59 of user core. Feb 13 06:25:21.281615 systemd[1]: Started session-59.scope. Feb 13 06:25:21.367088 sshd[5969]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:21.368498 systemd[1]: sshd@162-145.40.90.207:22-139.178.68.195:53576.service: Deactivated successfully. Feb 13 06:25:21.368937 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 06:25:21.369237 systemd-logind[1461]: Session 59 logged out. Waiting for processes to exit. Feb 13 06:25:21.369645 systemd-logind[1461]: Removed session 59. Feb 13 06:25:22.361175 sshd[5966]: Failed password for root from 124.221.128.115 port 54292 ssh2 Feb 13 06:25:22.706135 sshd[5966]: Received disconnect from 124.221.128.115 port 54292:11: Bye Bye [preauth] Feb 13 06:25:22.706135 sshd[5966]: Disconnected from authenticating user root 124.221.128.115 port 54292 [preauth] Feb 13 06:25:22.708626 systemd[1]: sshd@161-145.40.90.207:22-124.221.128.115:54292.service: Deactivated successfully. Feb 13 06:25:26.376718 systemd[1]: Started sshd@163-145.40.90.207:22-139.178.68.195:34096.service. 
Feb 13 06:25:26.405694 sshd[5994]: Accepted publickey for core from 139.178.68.195 port 34096 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:26.406539 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:26.409307 systemd-logind[1461]: New session 60 of user core. Feb 13 06:25:26.409985 systemd[1]: Started session-60.scope. Feb 13 06:25:26.497471 sshd[5994]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:26.498989 systemd[1]: sshd@163-145.40.90.207:22-139.178.68.195:34096.service: Deactivated successfully. Feb 13 06:25:26.499433 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 06:25:26.499785 systemd-logind[1461]: Session 60 logged out. Waiting for processes to exit. Feb 13 06:25:26.500241 systemd-logind[1461]: Removed session 60. Feb 13 06:25:31.507128 systemd[1]: Started sshd@164-145.40.90.207:22-139.178.68.195:34100.service. Feb 13 06:25:31.535967 sshd[6020]: Accepted publickey for core from 139.178.68.195 port 34100 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:31.536858 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:31.539906 systemd-logind[1461]: New session 61 of user core. Feb 13 06:25:31.540536 systemd[1]: Started session-61.scope. Feb 13 06:25:31.624532 sshd[6020]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:31.626019 systemd[1]: sshd@164-145.40.90.207:22-139.178.68.195:34100.service: Deactivated successfully. Feb 13 06:25:31.626491 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 06:25:31.626928 systemd-logind[1461]: Session 61 logged out. Waiting for processes to exit. Feb 13 06:25:31.627435 systemd-logind[1461]: Removed session 61. Feb 13 06:25:36.634211 systemd[1]: Started sshd@165-145.40.90.207:22-139.178.68.195:48600.service. 
Feb 13 06:25:36.663286 sshd[6044]: Accepted publickey for core from 139.178.68.195 port 48600 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:36.664177 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:36.667028 systemd-logind[1461]: New session 62 of user core. Feb 13 06:25:36.667605 systemd[1]: Started session-62.scope. Feb 13 06:25:36.754782 sshd[6044]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:36.756128 systemd[1]: sshd@165-145.40.90.207:22-139.178.68.195:48600.service: Deactivated successfully. Feb 13 06:25:36.756565 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 06:25:36.756959 systemd-logind[1461]: Session 62 logged out. Waiting for processes to exit. Feb 13 06:25:36.757488 systemd-logind[1461]: Removed session 62. Feb 13 06:25:41.763723 systemd[1]: Started sshd@166-145.40.90.207:22-139.178.68.195:48606.service. Feb 13 06:25:41.792662 sshd[6068]: Accepted publickey for core from 139.178.68.195 port 48606 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:41.793629 sshd[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:41.796614 systemd-logind[1461]: New session 63 of user core. Feb 13 06:25:41.797187 systemd[1]: Started session-63.scope. Feb 13 06:25:41.886002 sshd[6068]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:41.887581 systemd[1]: sshd@166-145.40.90.207:22-139.178.68.195:48606.service: Deactivated successfully. Feb 13 06:25:41.888012 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 06:25:41.888347 systemd-logind[1461]: Session 63 logged out. Waiting for processes to exit. Feb 13 06:25:41.888977 systemd-logind[1461]: Removed session 63. Feb 13 06:25:46.895035 systemd[1]: Started sshd@167-145.40.90.207:22-139.178.68.195:53336.service. 
Feb 13 06:25:46.923965 sshd[6094]: Accepted publickey for core from 139.178.68.195 port 53336 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:46.924816 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:46.927895 systemd-logind[1461]: New session 64 of user core. Feb 13 06:25:46.928472 systemd[1]: Started session-64.scope. Feb 13 06:25:47.019955 sshd[6094]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:47.021422 systemd[1]: sshd@167-145.40.90.207:22-139.178.68.195:53336.service: Deactivated successfully. Feb 13 06:25:47.021889 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 06:25:47.022206 systemd-logind[1461]: Session 64 logged out. Waiting for processes to exit. Feb 13 06:25:47.022837 systemd-logind[1461]: Removed session 64. Feb 13 06:25:52.031641 systemd[1]: Started sshd@168-145.40.90.207:22-139.178.68.195:53352.service. Feb 13 06:25:52.063676 sshd[6120]: Accepted publickey for core from 139.178.68.195 port 53352 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:52.064380 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:52.066849 systemd-logind[1461]: New session 65 of user core. Feb 13 06:25:52.067319 systemd[1]: Started session-65.scope. Feb 13 06:25:52.187085 sshd[6120]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:52.188453 systemd[1]: sshd@168-145.40.90.207:22-139.178.68.195:53352.service: Deactivated successfully. Feb 13 06:25:52.188909 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 06:25:52.189202 systemd-logind[1461]: Session 65 logged out. Waiting for processes to exit. Feb 13 06:25:52.189597 systemd-logind[1461]: Removed session 65. Feb 13 06:25:53.415559 systemd[1]: Started sshd@169-145.40.90.207:22-124.221.128.115:33614.service. 
Feb 13 06:25:54.321837 sshd[6146]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root Feb 13 06:25:56.600034 sshd[6146]: Failed password for root from 124.221.128.115 port 33614 ssh2 Feb 13 06:25:57.196232 systemd[1]: Started sshd@170-145.40.90.207:22-139.178.68.195:43650.service. Feb 13 06:25:57.225172 sshd[6149]: Accepted publickey for core from 139.178.68.195 port 43650 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:25:57.225989 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:25:57.228917 systemd-logind[1461]: New session 66 of user core. Feb 13 06:25:57.229501 systemd[1]: Started session-66.scope. Feb 13 06:25:57.319051 sshd[6149]: pam_unix(sshd:session): session closed for user core Feb 13 06:25:57.320529 systemd[1]: sshd@170-145.40.90.207:22-139.178.68.195:43650.service: Deactivated successfully. Feb 13 06:25:57.320967 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 06:25:57.321282 systemd-logind[1461]: Session 66 logged out. Waiting for processes to exit. Feb 13 06:25:57.321917 systemd-logind[1461]: Removed session 66. Feb 13 06:25:58.747969 sshd[6146]: Received disconnect from 124.221.128.115 port 33614:11: Bye Bye [preauth] Feb 13 06:25:58.747969 sshd[6146]: Disconnected from authenticating user root 124.221.128.115 port 33614 [preauth] Feb 13 06:25:58.750580 systemd[1]: sshd@169-145.40.90.207:22-124.221.128.115:33614.service: Deactivated successfully. Feb 13 06:26:02.328266 systemd[1]: Started sshd@171-145.40.90.207:22-139.178.68.195:43664.service. Feb 13 06:26:02.356699 sshd[6179]: Accepted publickey for core from 139.178.68.195 port 43664 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:26:02.357492 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:26:02.360237 systemd-logind[1461]: New session 67 of user core. 
Feb 13 06:26:02.360786 systemd[1]: Started session-67.scope. Feb 13 06:26:02.446448 sshd[6179]: pam_unix(sshd:session): session closed for user core Feb 13 06:26:02.447928 systemd[1]: sshd@171-145.40.90.207:22-139.178.68.195:43664.service: Deactivated successfully. Feb 13 06:26:02.448363 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 06:26:02.448786 systemd-logind[1461]: Session 67 logged out. Waiting for processes to exit. Feb 13 06:26:02.449245 systemd-logind[1461]: Removed session 67. Feb 13 06:26:07.456189 systemd[1]: Started sshd@172-145.40.90.207:22-139.178.68.195:35440.service. Feb 13 06:26:07.484856 sshd[6203]: Accepted publickey for core from 139.178.68.195 port 35440 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:26:07.485631 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:26:07.488519 systemd-logind[1461]: New session 68 of user core. Feb 13 06:26:07.489141 systemd[1]: Started session-68.scope. Feb 13 06:26:07.577613 sshd[6203]: pam_unix(sshd:session): session closed for user core Feb 13 06:26:07.578997 systemd[1]: sshd@172-145.40.90.207:22-139.178.68.195:35440.service: Deactivated successfully. Feb 13 06:26:07.579429 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 06:26:07.579832 systemd-logind[1461]: Session 68 logged out. Waiting for processes to exit. Feb 13 06:26:07.580250 systemd-logind[1461]: Removed session 68. Feb 13 06:26:12.587123 systemd[1]: Started sshd@173-145.40.90.207:22-139.178.68.195:35456.service. Feb 13 06:26:12.616924 sshd[6227]: Accepted publickey for core from 139.178.68.195 port 35456 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:26:12.620126 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:26:12.630610 systemd-logind[1461]: New session 69 of user core. Feb 13 06:26:12.633651 systemd[1]: Started session-69.scope. 
Feb 13 06:26:12.738652 sshd[6227]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:12.740018 systemd[1]: sshd@173-145.40.90.207:22-139.178.68.195:35456.service: Deactivated successfully.
Feb 13 06:26:12.740458 systemd[1]: session-69.scope: Deactivated successfully.
Feb 13 06:26:12.740885 systemd-logind[1461]: Session 69 logged out. Waiting for processes to exit.
Feb 13 06:26:12.741348 systemd-logind[1461]: Removed session 69.
Feb 13 06:26:17.750609 systemd[1]: Started sshd@174-145.40.90.207:22-139.178.68.195:49018.service.
Feb 13 06:26:17.784733 sshd[6252]: Accepted publickey for core from 139.178.68.195 port 49018 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:26:17.785614 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:26:17.788077 systemd-logind[1461]: New session 70 of user core.
Feb 13 06:26:17.788665 systemd[1]: Started session-70.scope.
Feb 13 06:26:17.877340 sshd[6252]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:17.878849 systemd[1]: sshd@174-145.40.90.207:22-139.178.68.195:49018.service: Deactivated successfully.
Feb 13 06:26:17.879338 systemd[1]: session-70.scope: Deactivated successfully.
Feb 13 06:26:17.879798 systemd-logind[1461]: Session 70 logged out. Waiting for processes to exit.
Feb 13 06:26:17.880256 systemd-logind[1461]: Removed session 70.
Feb 13 06:26:22.888222 systemd[1]: Started sshd@175-145.40.90.207:22-139.178.68.195:49022.service.
Feb 13 06:26:22.921569 sshd[6276]: Accepted publickey for core from 139.178.68.195 port 49022 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:26:22.922314 sshd[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:26:22.924895 systemd-logind[1461]: New session 71 of user core.
Feb 13 06:26:22.925410 systemd[1]: Started session-71.scope.
Feb 13 06:26:23.015606 sshd[6276]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:23.017044 systemd[1]: sshd@175-145.40.90.207:22-139.178.68.195:49022.service: Deactivated successfully.
Feb 13 06:26:23.017504 systemd[1]: session-71.scope: Deactivated successfully.
Feb 13 06:26:23.017885 systemd-logind[1461]: Session 71 logged out. Waiting for processes to exit.
Feb 13 06:26:23.018263 systemd-logind[1461]: Removed session 71.
Feb 13 06:26:24.893111 systemd[1]: Started sshd@176-145.40.90.207:22-124.221.128.115:41160.service.
Feb 13 06:26:25.665531 sshd[6300]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.221.128.115 user=root
Feb 13 06:26:26.864570 sshd[6300]: Failed password for root from 124.221.128.115 port 41160 ssh2
Feb 13 06:26:27.933187 sshd[6300]: Received disconnect from 124.221.128.115 port 41160:11: Bye Bye [preauth]
Feb 13 06:26:27.933187 sshd[6300]: Disconnected from authenticating user root 124.221.128.115 port 41160 [preauth]
Feb 13 06:26:27.935766 systemd[1]: sshd@176-145.40.90.207:22-124.221.128.115:41160.service: Deactivated successfully.
Feb 13 06:26:28.025732 systemd[1]: Started sshd@177-145.40.90.207:22-139.178.68.195:53262.service.
Feb 13 06:26:28.055226 sshd[6304]: Accepted publickey for core from 139.178.68.195 port 53262 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:26:28.056115 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:26:28.059039 systemd-logind[1461]: New session 72 of user core.
Feb 13 06:26:28.059677 systemd[1]: Started session-72.scope.
Feb 13 06:26:28.146604 sshd[6304]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:28.148034 systemd[1]: sshd@177-145.40.90.207:22-139.178.68.195:53262.service: Deactivated successfully.
Feb 13 06:26:28.148459 systemd[1]: session-72.scope: Deactivated successfully.
Feb 13 06:26:28.148887 systemd-logind[1461]: Session 72 logged out. Waiting for processes to exit.
Feb 13 06:26:28.149370 systemd-logind[1461]: Removed session 72.
Feb 13 06:26:33.070369 systemd[1]: Started sshd@178-145.40.90.207:22-165.154.0.66:39616.service.
Feb 13 06:26:33.150199 systemd[1]: Started sshd@179-145.40.90.207:22-139.178.68.195:53274.service.
Feb 13 06:26:33.179765 sshd[6334]: Accepted publickey for core from 139.178.68.195 port 53274 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:26:33.180570 sshd[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:26:33.183074 systemd-logind[1461]: New session 73 of user core.
Feb 13 06:26:33.183759 systemd[1]: Started session-73.scope.
Feb 13 06:26:33.269889 sshd[6334]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:33.271366 systemd[1]: sshd@179-145.40.90.207:22-139.178.68.195:53274.service: Deactivated successfully.
Feb 13 06:26:33.271790 systemd[1]: session-73.scope: Deactivated successfully.
Feb 13 06:26:33.272156 systemd-logind[1461]: Session 73 logged out. Waiting for processes to exit.
Feb 13 06:26:33.272757 systemd-logind[1461]: Removed session 73.
Feb 13 06:26:34.440918 sshd[6331]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.0.66 user=root
Feb 13 06:26:34.441152 sshd[6331]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 06:26:36.874819 sshd[6331]: Failed password for root from 165.154.0.66 port 39616 ssh2
Feb 13 06:26:38.280694 systemd[1]: Started sshd@180-145.40.90.207:22-139.178.68.195:53584.service.
Feb 13 06:26:38.310144 sshd[6361]: Accepted publickey for core from 139.178.68.195 port 53584 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:26:38.311001 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:26:38.313774 systemd-logind[1461]: New session 74 of user core.
Feb 13 06:26:38.314370 systemd[1]: Started session-74.scope.
Feb 13 06:26:38.402139 sshd[6361]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:38.403828 systemd[1]: sshd@180-145.40.90.207:22-139.178.68.195:53584.service: Deactivated successfully.
Feb 13 06:26:38.404333 systemd[1]: session-74.scope: Deactivated successfully.
Feb 13 06:26:38.404746 systemd-logind[1461]: Session 74 logged out. Waiting for processes to exit.
Feb 13 06:26:38.405215 systemd-logind[1461]: Removed session 74.
Feb 13 06:26:38.872356 sshd[6331]: Received disconnect from 165.154.0.66 port 39616:11: Bye Bye [preauth]
Feb 13 06:26:38.872356 sshd[6331]: Disconnected from authenticating user root 165.154.0.66 port 39616 [preauth]
Feb 13 06:26:38.873459 systemd[1]: sshd@178-145.40.90.207:22-165.154.0.66:39616.service: Deactivated successfully.
Feb 13 06:26:43.412362 systemd[1]: Started sshd@181-145.40.90.207:22-139.178.68.195:53594.service.
Feb 13 06:26:43.441299 sshd[6385]: Accepted publickey for core from 139.178.68.195 port 53594 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:26:43.442027 sshd[6385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:26:43.444803 systemd-logind[1461]: New session 75 of user core.
Feb 13 06:26:43.445672 systemd[1]: Started session-75.scope.
Feb 13 06:26:43.529804 sshd[6385]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:43.531204 systemd[1]: sshd@181-145.40.90.207:22-139.178.68.195:53594.service: Deactivated successfully.
Feb 13 06:26:43.531756 systemd[1]: session-75.scope: Deactivated successfully.
Feb 13 06:26:43.532106 systemd-logind[1461]: Session 75 logged out. Waiting for processes to exit.
Feb 13 06:26:43.532511 systemd-logind[1461]: Removed session 75.
Feb 13 06:26:48.540270 systemd[1]: Started sshd@182-145.40.90.207:22-139.178.68.195:58600.service.
Feb 13 06:26:48.569760 sshd[6411]: Accepted publickey for core from 139.178.68.195 port 58600 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:26:48.570597 sshd[6411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:26:48.573490 systemd-logind[1461]: New session 76 of user core.
Feb 13 06:26:48.574418 systemd[1]: Started session-76.scope.
Feb 13 06:26:48.663293 sshd[6411]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:48.664820 systemd[1]: sshd@182-145.40.90.207:22-139.178.68.195:58600.service: Deactivated successfully.
Feb 13 06:26:48.665311 systemd[1]: session-76.scope: Deactivated successfully.
Feb 13 06:26:48.665689 systemd-logind[1461]: Session 76 logged out. Waiting for processes to exit.
Feb 13 06:26:48.666141 systemd-logind[1461]: Removed session 76.
Feb 13 06:26:53.673652 systemd[1]: Started sshd@183-145.40.90.207:22-139.178.68.195:58602.service.
Feb 13 06:26:53.703269 sshd[6435]: Accepted publickey for core from 139.178.68.195 port 58602 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:26:53.704145 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:26:53.706845 systemd-logind[1461]: New session 77 of user core.
Feb 13 06:26:53.707757 systemd[1]: Started session-77.scope.
Feb 13 06:26:53.799326 sshd[6435]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:53.800711 systemd[1]: sshd@183-145.40.90.207:22-139.178.68.195:58602.service: Deactivated successfully.
Feb 13 06:26:53.801187 systemd[1]: session-77.scope: Deactivated successfully.
Feb 13 06:26:53.801618 systemd-logind[1461]: Session 77 logged out. Waiting for processes to exit.
Feb 13 06:26:53.802085 systemd-logind[1461]: Removed session 77.
Feb 13 06:26:58.809335 systemd[1]: Started sshd@184-145.40.90.207:22-139.178.68.195:50644.service.
Feb 13 06:26:58.838549 sshd[6461]: Accepted publickey for core from 139.178.68.195 port 50644 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:26:58.839401 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:26:58.841851 systemd-logind[1461]: New session 78 of user core.
Feb 13 06:26:58.842414 systemd[1]: Started session-78.scope.
Feb 13 06:26:58.955743 sshd[6461]: pam_unix(sshd:session): session closed for user core
Feb 13 06:26:58.957182 systemd[1]: sshd@184-145.40.90.207:22-139.178.68.195:50644.service: Deactivated successfully.
Feb 13 06:26:58.957681 systemd[1]: session-78.scope: Deactivated successfully.
Feb 13 06:26:58.958072 systemd-logind[1461]: Session 78 logged out. Waiting for processes to exit.
Feb 13 06:26:58.958481 systemd-logind[1461]: Removed session 78.
Feb 13 06:27:03.965166 systemd[1]: Started sshd@185-145.40.90.207:22-139.178.68.195:50660.service.
Feb 13 06:27:03.994972 sshd[6488]: Accepted publickey for core from 139.178.68.195 port 50660 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:03.995898 sshd[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:03.999009 systemd-logind[1461]: New session 79 of user core.
Feb 13 06:27:03.999832 systemd[1]: Started session-79.scope.
Feb 13 06:27:04.088869 sshd[6488]: pam_unix(sshd:session): session closed for user core
Feb 13 06:27:04.090327 systemd[1]: sshd@185-145.40.90.207:22-139.178.68.195:50660.service: Deactivated successfully.
Feb 13 06:27:04.090816 systemd[1]: session-79.scope: Deactivated successfully.
Feb 13 06:27:04.091160 systemd-logind[1461]: Session 79 logged out. Waiting for processes to exit.
Feb 13 06:27:04.091908 systemd-logind[1461]: Removed session 79.
Feb 13 06:27:09.098083 systemd[1]: Started sshd@186-145.40.90.207:22-139.178.68.195:59182.service.
Feb 13 06:27:09.127699 sshd[6514]: Accepted publickey for core from 139.178.68.195 port 59182 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:09.128510 sshd[6514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:09.131490 systemd-logind[1461]: New session 80 of user core.
Feb 13 06:27:09.132302 systemd[1]: Started session-80.scope.
Feb 13 06:27:09.222624 sshd[6514]: pam_unix(sshd:session): session closed for user core
Feb 13 06:27:09.224030 systemd[1]: sshd@186-145.40.90.207:22-139.178.68.195:59182.service: Deactivated successfully.
Feb 13 06:27:09.224475 systemd[1]: session-80.scope: Deactivated successfully.
Feb 13 06:27:09.224867 systemd-logind[1461]: Session 80 logged out. Waiting for processes to exit.
Feb 13 06:27:09.225251 systemd-logind[1461]: Removed session 80.
Feb 13 06:27:14.231807 systemd[1]: Started sshd@187-145.40.90.207:22-139.178.68.195:59190.service.
Feb 13 06:27:14.260645 sshd[6538]: Accepted publickey for core from 139.178.68.195 port 59190 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:14.261417 sshd[6538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:14.264119 systemd-logind[1461]: New session 81 of user core.
Feb 13 06:27:14.264996 systemd[1]: Started session-81.scope.
Feb 13 06:27:14.358852 sshd[6538]: pam_unix(sshd:session): session closed for user core
Feb 13 06:27:14.364853 systemd[1]: sshd@187-145.40.90.207:22-139.178.68.195:59190.service: Deactivated successfully.
Feb 13 06:27:14.366870 systemd[1]: session-81.scope: Deactivated successfully.
Feb 13 06:27:14.368587 systemd-logind[1461]: Session 81 logged out. Waiting for processes to exit.
Feb 13 06:27:14.370945 systemd-logind[1461]: Removed session 81.
Feb 13 06:27:19.367915 systemd[1]: Started sshd@188-145.40.90.207:22-139.178.68.195:48820.service.
Feb 13 06:27:19.397267 sshd[6562]: Accepted publickey for core from 139.178.68.195 port 48820 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:19.398111 sshd[6562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:19.401005 systemd-logind[1461]: New session 82 of user core.
Feb 13 06:27:19.401757 systemd[1]: Started session-82.scope.
Feb 13 06:27:19.489744 sshd[6562]: pam_unix(sshd:session): session closed for user core
Feb 13 06:27:19.491130 systemd[1]: sshd@188-145.40.90.207:22-139.178.68.195:48820.service: Deactivated successfully.
Feb 13 06:27:19.491606 systemd[1]: session-82.scope: Deactivated successfully.
Feb 13 06:27:19.491977 systemd-logind[1461]: Session 82 logged out. Waiting for processes to exit.
Feb 13 06:27:19.492405 systemd-logind[1461]: Removed session 82.
Feb 13 06:27:24.502874 systemd[1]: Started sshd@189-145.40.90.207:22-139.178.68.195:48834.service.
Feb 13 06:27:24.557201 sshd[6587]: Accepted publickey for core from 139.178.68.195 port 48834 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:24.558005 sshd[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:24.560458 systemd-logind[1461]: New session 83 of user core.
Feb 13 06:27:24.561013 systemd[1]: Started session-83.scope.
Feb 13 06:27:24.644679 sshd[6587]: pam_unix(sshd:session): session closed for user core
Feb 13 06:27:24.646041 systemd[1]: sshd@189-145.40.90.207:22-139.178.68.195:48834.service: Deactivated successfully.
Feb 13 06:27:24.646490 systemd[1]: session-83.scope: Deactivated successfully.
Feb 13 06:27:24.646830 systemd-logind[1461]: Session 83 logged out. Waiting for processes to exit.
Feb 13 06:27:24.647251 systemd-logind[1461]: Removed session 83.
Feb 13 06:27:29.654965 systemd[1]: Started sshd@190-145.40.90.207:22-139.178.68.195:58138.service.
Feb 13 06:27:29.684174 sshd[6615]: Accepted publickey for core from 139.178.68.195 port 58138 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:29.684832 sshd[6615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:29.687189 systemd-logind[1461]: New session 84 of user core.
Feb 13 06:27:29.687830 systemd[1]: Started session-84.scope.
Feb 13 06:27:29.776587 sshd[6615]: pam_unix(sshd:session): session closed for user core
Feb 13 06:27:29.778047 systemd[1]: sshd@190-145.40.90.207:22-139.178.68.195:58138.service: Deactivated successfully.
Feb 13 06:27:29.778504 systemd[1]: session-84.scope: Deactivated successfully.
Feb 13 06:27:29.778884 systemd-logind[1461]: Session 84 logged out. Waiting for processes to exit.
Feb 13 06:27:29.779279 systemd-logind[1461]: Removed session 84.
Feb 13 06:27:34.786730 systemd[1]: Started sshd@191-145.40.90.207:22-139.178.68.195:58142.service.
Feb 13 06:27:34.815415 sshd[6640]: Accepted publickey for core from 139.178.68.195 port 58142 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:34.816259 sshd[6640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:34.819058 systemd-logind[1461]: New session 85 of user core.
Feb 13 06:27:34.819904 systemd[1]: Started session-85.scope.
Feb 13 06:27:34.907218 sshd[6640]: pam_unix(sshd:session): session closed for user core
Feb 13 06:27:34.908593 systemd[1]: sshd@191-145.40.90.207:22-139.178.68.195:58142.service: Deactivated successfully.
Feb 13 06:27:34.909031 systemd[1]: session-85.scope: Deactivated successfully.
Feb 13 06:27:34.909399 systemd-logind[1461]: Session 85 logged out. Waiting for processes to exit.
Feb 13 06:27:34.909896 systemd-logind[1461]: Removed session 85.
Feb 13 06:27:39.917050 systemd[1]: Started sshd@192-145.40.90.207:22-139.178.68.195:45194.service.
Feb 13 06:27:39.946392 sshd[6665]: Accepted publickey for core from 139.178.68.195 port 45194 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:39.947058 sshd[6665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:39.949250 systemd-logind[1461]: New session 86 of user core.
Feb 13 06:27:39.949888 systemd[1]: Started session-86.scope.
Feb 13 06:27:40.035458 sshd[6665]: pam_unix(sshd:session): session closed for user core
Feb 13 06:27:40.036951 systemd[1]: sshd@192-145.40.90.207:22-139.178.68.195:45194.service: Deactivated successfully.
Feb 13 06:27:40.037406 systemd[1]: session-86.scope: Deactivated successfully.
Feb 13 06:27:40.037789 systemd-logind[1461]: Session 86 logged out. Waiting for processes to exit.
Feb 13 06:27:40.038205 systemd-logind[1461]: Removed session 86.
Feb 13 06:27:45.044520 systemd[1]: Started sshd@193-145.40.90.207:22-139.178.68.195:45210.service.
Feb 13 06:27:45.073476 sshd[6692]: Accepted publickey for core from 139.178.68.195 port 45210 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:45.074524 sshd[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:45.077257 systemd-logind[1461]: New session 87 of user core.
Feb 13 06:27:45.078102 systemd[1]: Started session-87.scope.
Feb 13 06:27:45.164577 sshd[6692]: pam_unix(sshd:session): session closed for user core
Feb 13 06:27:45.166516 systemd[1]: sshd@193-145.40.90.207:22-139.178.68.195:45210.service: Deactivated successfully.
Feb 13 06:27:45.166903 systemd[1]: session-87.scope: Deactivated successfully.
Feb 13 06:27:45.167228 systemd-logind[1461]: Session 87 logged out. Waiting for processes to exit.
Feb 13 06:27:45.167928 systemd[1]: Started sshd@194-145.40.90.207:22-139.178.68.195:45224.service.
Feb 13 06:27:45.168297 systemd-logind[1461]: Removed session 87.
Feb 13 06:27:45.197695 sshd[6719]: Accepted publickey for core from 139.178.68.195 port 45224 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 06:27:45.198540 sshd[6719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 06:27:45.201607 systemd-logind[1461]: New session 88 of user core.
Feb 13 06:27:45.202171 systemd[1]: Started session-88.scope.
Feb 13 06:27:46.631410 env[1473]: time="2024-02-13T06:27:46.631267305Z" level=info msg="StopContainer for \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\" with timeout 30 (s)"
Feb 13 06:27:46.632171 env[1473]: time="2024-02-13T06:27:46.632047154Z" level=info msg="Stop container \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\" with signal terminated"
Feb 13 06:27:46.651718 systemd[1]: cri-containerd-b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85.scope: Deactivated successfully.
Feb 13 06:27:46.652311 systemd[1]: cri-containerd-b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85.scope: Consumed 2.281s CPU time.
Feb 13 06:27:46.673925 env[1473]: time="2024-02-13T06:27:46.673800209Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 06:27:46.680418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85-rootfs.mount: Deactivated successfully.
Feb 13 06:27:46.680874 env[1473]: time="2024-02-13T06:27:46.680839302Z" level=info msg="StopContainer for \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\" with timeout 2 (s)"
Feb 13 06:27:46.681117 env[1473]: time="2024-02-13T06:27:46.681086006Z" level=info msg="Stop container \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\" with signal terminated"
Feb 13 06:27:46.683891 env[1473]: time="2024-02-13T06:27:46.683820482Z" level=info msg="shim disconnected" id=b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85
Feb 13 06:27:46.683891 env[1473]: time="2024-02-13T06:27:46.683864480Z" level=warning msg="cleaning up after shim disconnected" id=b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85 namespace=k8s.io
Feb 13 06:27:46.683891 env[1473]: time="2024-02-13T06:27:46.683877850Z" level=info msg="cleaning up dead shim"
Feb 13 06:27:46.687217 systemd-networkd[1319]: lxc_health: Link DOWN
Feb 13 06:27:46.687222 systemd-networkd[1319]: lxc_health: Lost carrier
Feb 13 06:27:46.690687 env[1473]: time="2024-02-13T06:27:46.690625223Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6785 runtime=io.containerd.runc.v2\n"
Feb 13 06:27:46.691828 env[1473]: time="2024-02-13T06:27:46.691769693Z" level=info msg="StopContainer for \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\" returns successfully"
Feb 13 06:27:46.692369 env[1473]: time="2024-02-13T06:27:46.692313459Z" level=info msg="StopPodSandbox for \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\""
Feb 13 06:27:46.692452 env[1473]: time="2024-02-13T06:27:46.692375001Z" level=info msg="Container to stop \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.694534 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240-shm.mount: Deactivated successfully.
Feb 13 06:27:46.698709 systemd[1]: cri-containerd-296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240.scope: Deactivated successfully.
Feb 13 06:27:46.716074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240-rootfs.mount: Deactivated successfully.
Feb 13 06:27:46.741426 env[1473]: time="2024-02-13T06:27:46.741312116Z" level=info msg="shim disconnected" id=296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240
Feb 13 06:27:46.741775 env[1473]: time="2024-02-13T06:27:46.741428579Z" level=warning msg="cleaning up after shim disconnected" id=296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240 namespace=k8s.io
Feb 13 06:27:46.741775 env[1473]: time="2024-02-13T06:27:46.741469785Z" level=info msg="cleaning up dead shim"
Feb 13 06:27:46.752139 systemd[1]: cri-containerd-b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f.scope: Deactivated successfully.
Feb 13 06:27:46.752787 systemd[1]: cri-containerd-b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f.scope: Consumed 11.212s CPU time.
Feb 13 06:27:46.757381 env[1473]: time="2024-02-13T06:27:46.757262223Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6818 runtime=io.containerd.runc.v2\n"
Feb 13 06:27:46.758017 env[1473]: time="2024-02-13T06:27:46.757922973Z" level=info msg="TearDown network for sandbox \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\" successfully"
Feb 13 06:27:46.758017 env[1473]: time="2024-02-13T06:27:46.757976909Z" level=info msg="StopPodSandbox for \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\" returns successfully"
Feb 13 06:27:46.790841 env[1473]: time="2024-02-13T06:27:46.790749528Z" level=info msg="shim disconnected" id=b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f
Feb 13 06:27:46.791232 env[1473]: time="2024-02-13T06:27:46.790843668Z" level=warning msg="cleaning up after shim disconnected" id=b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f namespace=k8s.io
Feb 13 06:27:46.791232 env[1473]: time="2024-02-13T06:27:46.790871783Z" level=info msg="cleaning up dead shim"
Feb 13 06:27:46.792058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f-rootfs.mount: Deactivated successfully.
Feb 13 06:27:46.806368 env[1473]: time="2024-02-13T06:27:46.806250374Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6842 runtime=io.containerd.runc.v2\n"
Feb 13 06:27:46.806634 kubelet[2564]: I0213 06:27:46.806510 2564 scope.go:117] "RemoveContainer" containerID="b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85"
Feb 13 06:27:46.808522 env[1473]: time="2024-02-13T06:27:46.808447961Z" level=info msg="StopContainer for \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\" returns successfully"
Feb 13 06:27:46.809228 env[1473]: time="2024-02-13T06:27:46.809157555Z" level=info msg="StopPodSandbox for \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\""
Feb 13 06:27:46.809465 env[1473]: time="2024-02-13T06:27:46.809190820Z" level=info msg="RemoveContainer for \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\""
Feb 13 06:27:46.809465 env[1473]: time="2024-02-13T06:27:46.809300354Z" level=info msg="Container to stop \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.809465 env[1473]: time="2024-02-13T06:27:46.809346938Z" level=info msg="Container to stop \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.809465 env[1473]: time="2024-02-13T06:27:46.809377387Z" level=info msg="Container to stop \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.809465 env[1473]: time="2024-02-13T06:27:46.809405134Z" level=info msg="Container to stop \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.809465 env[1473]: time="2024-02-13T06:27:46.809432272Z" level=info msg="Container to stop \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.813125 env[1473]: time="2024-02-13T06:27:46.813018941Z" level=info msg="RemoveContainer for \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\" returns successfully"
Feb 13 06:27:46.813591 kubelet[2564]: I0213 06:27:46.813512 2564 scope.go:117] "RemoveContainer" containerID="b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85"
Feb 13 06:27:46.814190 env[1473]: time="2024-02-13T06:27:46.813980726Z" level=error msg="ContainerStatus for \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\": not found"
Feb 13 06:27:46.814522 kubelet[2564]: E0213 06:27:46.814450 2564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\": not found" containerID="b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85"
Feb 13 06:27:46.814727 kubelet[2564]: I0213 06:27:46.814626 2564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85"} err="failed to get container status \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\": rpc error: code = NotFound desc = an error occurred when try to find container \"b88b8797cf2602566a03e125f04228f0dcd761dfb3e9ffd4e84e3abb88c2ea85\": not found"
Feb 13 06:27:46.820403 systemd[1]: cri-containerd-67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e.scope: Deactivated successfully.
Feb 13 06:27:46.839756 kubelet[2564]: I0213 06:27:46.839712 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s526\" (UniqueName: \"kubernetes.io/projected/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-kube-api-access-7s526\") pod \"6bbe3bd3-de94-47e5-95ac-af114b6c5a76\" (UID: \"6bbe3bd3-de94-47e5-95ac-af114b6c5a76\") "
Feb 13 06:27:46.839964 kubelet[2564]: I0213 06:27:46.839811 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-cilium-config-path\") pod \"6bbe3bd3-de94-47e5-95ac-af114b6c5a76\" (UID: \"6bbe3bd3-de94-47e5-95ac-af114b6c5a76\") "
Feb 13 06:27:46.843249 kubelet[2564]: I0213 06:27:46.843206 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6bbe3bd3-de94-47e5-95ac-af114b6c5a76" (UID: "6bbe3bd3-de94-47e5-95ac-af114b6c5a76"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 06:27:46.844473 kubelet[2564]: I0213 06:27:46.844432 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-kube-api-access-7s526" (OuterVolumeSpecName: "kube-api-access-7s526") pod "6bbe3bd3-de94-47e5-95ac-af114b6c5a76" (UID: "6bbe3bd3-de94-47e5-95ac-af114b6c5a76"). InnerVolumeSpecName "kube-api-access-7s526". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 06:27:46.847603 env[1473]: time="2024-02-13T06:27:46.847343890Z" level=info msg="shim disconnected" id=67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e
Feb 13 06:27:46.847603 env[1473]: time="2024-02-13T06:27:46.847452279Z" level=warning msg="cleaning up after shim disconnected" id=67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e namespace=k8s.io
Feb 13 06:27:46.847603 env[1473]: time="2024-02-13T06:27:46.847487498Z" level=info msg="cleaning up dead shim"
Feb 13 06:27:46.858525 env[1473]: time="2024-02-13T06:27:46.858464078Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6873 runtime=io.containerd.runc.v2\n"
Feb 13 06:27:46.859013 env[1473]: time="2024-02-13T06:27:46.858963043Z" level=info msg="TearDown network for sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" successfully"
Feb 13 06:27:46.859171 env[1473]: time="2024-02-13T06:27:46.859005954Z" level=info msg="StopPodSandbox for \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" returns successfully"
Feb 13 06:27:46.940615 kubelet[2564]: I0213 06:27:46.940398 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92vjt\" (UniqueName: \"kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-kube-api-access-92vjt\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") "
Feb 13 06:27:46.940615 kubelet[2564]: I0213 06:27:46.940500 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-hubble-tls\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") "
Feb 13 06:27:46.940615 kubelet[2564]: I0213 06:27:46.940558 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-run\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") "
Feb 13 06:27:46.940615 kubelet[2564]: I0213 06:27:46.940607 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-lib-modules\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") "
Feb 13 06:27:46.941639 kubelet[2564]: I0213 06:27:46.940664 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-cgroup\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") "
Feb 13 06:27:46.941639 kubelet[2564]: I0213 06:27:46.940718 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-bpf-maps\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") "
Feb 13 06:27:46.941639 kubelet[2564]: I0213 06:27:46.940688 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:46.941639 kubelet[2564]: I0213 06:27:46.940779 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-config-path\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") "
Feb 13 06:27:46.941639 kubelet[2564]: I0213 06:27:46.940751 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:46.941639 kubelet[2564]: I0213 06:27:46.940832 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-xtables-lock\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") "
Feb 13 06:27:46.942751 kubelet[2564]: I0213 06:27:46.940805 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:46.942751 kubelet[2564]: I0213 06:27:46.940865 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d").
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.942751 kubelet[2564]: I0213 06:27:46.940906 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-hostproc\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " Feb 13 06:27:46.942751 kubelet[2564]: I0213 06:27:46.940842 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.942751 kubelet[2564]: I0213 06:27:46.940971 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c01960ea-196b-4b62-a22b-0bdac416c20d-clustermesh-secrets\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " Feb 13 06:27:46.943682 kubelet[2564]: I0213 06:27:46.941028 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cni-path\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " Feb 13 06:27:46.943682 kubelet[2564]: I0213 06:27:46.941015 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-hostproc" (OuterVolumeSpecName: "hostproc") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.943682 kubelet[2564]: I0213 06:27:46.941085 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-host-proc-sys-net\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " Feb 13 06:27:46.943682 kubelet[2564]: I0213 06:27:46.941091 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cni-path" (OuterVolumeSpecName: "cni-path") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.943682 kubelet[2564]: I0213 06:27:46.941147 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-host-proc-sys-kernel\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " Feb 13 06:27:46.944601 kubelet[2564]: I0213 06:27:46.941151 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.944601 kubelet[2564]: I0213 06:27:46.941205 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-etc-cni-netd\") pod \"c01960ea-196b-4b62-a22b-0bdac416c20d\" (UID: \"c01960ea-196b-4b62-a22b-0bdac416c20d\") " Feb 13 06:27:46.944601 kubelet[2564]: I0213 06:27:46.941201 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.944601 kubelet[2564]: I0213 06:27:46.941331 2564 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7s526\" (UniqueName: \"kubernetes.io/projected/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-kube-api-access-7s526\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.944601 kubelet[2564]: I0213 06:27:46.941333 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.944601 kubelet[2564]: I0213 06:27:46.941392 2564 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-run\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.945711 kubelet[2564]: I0213 06:27:46.941433 2564 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-lib-modules\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.945711 kubelet[2564]: I0213 06:27:46.941467 2564 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bbe3bd3-de94-47e5-95ac-af114b6c5a76-cilium-config-path\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.945711 kubelet[2564]: I0213 06:27:46.941501 2564 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-cgroup\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.945711 kubelet[2564]: I0213 06:27:46.941531 2564 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-bpf-maps\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.945711 kubelet[2564]: I0213 06:27:46.941561 2564 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-xtables-lock\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.945711 kubelet[2564]: I0213 06:27:46.941594 2564 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-hostproc\") on node 
\"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.945711 kubelet[2564]: I0213 06:27:46.941624 2564 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-cni-path\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.945711 kubelet[2564]: I0213 06:27:46.941657 2564 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-host-proc-sys-net\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.947157 kubelet[2564]: I0213 06:27:46.941700 2564 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:46.947373 kubelet[2564]: I0213 06:27:46.947238 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-kube-api-access-92vjt" (OuterVolumeSpecName: "kube-api-access-92vjt") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "kube-api-access-92vjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 06:27:46.947934 kubelet[2564]: I0213 06:27:46.947862 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01960ea-196b-4b62-a22b-0bdac416c20d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 06:27:46.948171 kubelet[2564]: I0213 06:27:46.947962 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 06:27:46.949053 kubelet[2564]: I0213 06:27:46.948975 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c01960ea-196b-4b62-a22b-0bdac416c20d" (UID: "c01960ea-196b-4b62-a22b-0bdac416c20d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 06:27:47.042888 kubelet[2564]: I0213 06:27:47.042808 2564 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c01960ea-196b-4b62-a22b-0bdac416c20d-cilium-config-path\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:47.042888 kubelet[2564]: I0213 06:27:47.042900 2564 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c01960ea-196b-4b62-a22b-0bdac416c20d-clustermesh-secrets\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:47.043430 kubelet[2564]: I0213 06:27:47.042958 2564 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c01960ea-196b-4b62-a22b-0bdac416c20d-etc-cni-netd\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:47.043430 kubelet[2564]: I0213 06:27:47.043022 2564 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-92vjt\" (UniqueName: 
\"kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-kube-api-access-92vjt\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:47.043430 kubelet[2564]: I0213 06:27:47.043110 2564 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c01960ea-196b-4b62-a22b-0bdac416c20d-hubble-tls\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:47.116018 systemd[1]: Removed slice kubepods-besteffort-pod6bbe3bd3_de94_47e5_95ac_af114b6c5a76.slice. Feb 13 06:27:47.116354 systemd[1]: kubepods-besteffort-pod6bbe3bd3_de94_47e5_95ac_af114b6c5a76.slice: Consumed 2.301s CPU time. Feb 13 06:27:47.650477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e-rootfs.mount: Deactivated successfully. Feb 13 06:27:47.650531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e-shm.mount: Deactivated successfully. Feb 13 06:27:47.650569 systemd[1]: var-lib-kubelet-pods-c01960ea\x2d196b\x2d4b62\x2da22b\x2d0bdac416c20d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d92vjt.mount: Deactivated successfully. Feb 13 06:27:47.650603 systemd[1]: var-lib-kubelet-pods-6bbe3bd3\x2dde94\x2d47e5\x2d95ac\x2daf114b6c5a76-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7s526.mount: Deactivated successfully. Feb 13 06:27:47.650632 systemd[1]: var-lib-kubelet-pods-c01960ea\x2d196b\x2d4b62\x2da22b\x2d0bdac416c20d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 06:27:47.650662 systemd[1]: var-lib-kubelet-pods-c01960ea\x2d196b\x2d4b62\x2da22b\x2d0bdac416c20d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 06:27:47.818299 kubelet[2564]: I0213 06:27:47.818234 2564 scope.go:117] "RemoveContainer" containerID="b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f" Feb 13 06:27:47.821081 env[1473]: time="2024-02-13T06:27:47.821006895Z" level=info msg="RemoveContainer for \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\"" Feb 13 06:27:47.823065 env[1473]: time="2024-02-13T06:27:47.823052558Z" level=info msg="RemoveContainer for \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\" returns successfully" Feb 13 06:27:47.823555 systemd[1]: Removed slice kubepods-burstable-podc01960ea_196b_4b62_a22b_0bdac416c20d.slice. Feb 13 06:27:47.823731 kubelet[2564]: I0213 06:27:47.823571 2564 scope.go:117] "RemoveContainer" containerID="3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7" Feb 13 06:27:47.823616 systemd[1]: kubepods-burstable-podc01960ea_196b_4b62_a22b_0bdac416c20d.slice: Consumed 11.260s CPU time. Feb 13 06:27:47.824092 env[1473]: time="2024-02-13T06:27:47.824052686Z" level=info msg="RemoveContainer for \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\"" Feb 13 06:27:47.825148 env[1473]: time="2024-02-13T06:27:47.825136665Z" level=info msg="RemoveContainer for \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\" returns successfully" Feb 13 06:27:47.825202 kubelet[2564]: I0213 06:27:47.825196 2564 scope.go:117] "RemoveContainer" containerID="fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58" Feb 13 06:27:47.825667 env[1473]: time="2024-02-13T06:27:47.825654190Z" level=info msg="RemoveContainer for \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\"" Feb 13 06:27:47.826697 env[1473]: time="2024-02-13T06:27:47.826656786Z" level=info msg="RemoveContainer for \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\" returns successfully" Feb 13 06:27:47.826738 kubelet[2564]: I0213 06:27:47.826714 2564 scope.go:117] "RemoveContainer" 
containerID="56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a" Feb 13 06:27:47.827172 env[1473]: time="2024-02-13T06:27:47.827161008Z" level=info msg="RemoveContainer for \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\"" Feb 13 06:27:47.828227 env[1473]: time="2024-02-13T06:27:47.828186342Z" level=info msg="RemoveContainer for \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\" returns successfully" Feb 13 06:27:47.828262 kubelet[2564]: I0213 06:27:47.828239 2564 scope.go:117] "RemoveContainer" containerID="fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309" Feb 13 06:27:47.828676 env[1473]: time="2024-02-13T06:27:47.828643017Z" level=info msg="RemoveContainer for \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\"" Feb 13 06:27:47.829899 env[1473]: time="2024-02-13T06:27:47.829855663Z" level=info msg="RemoveContainer for \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\" returns successfully" Feb 13 06:27:47.829995 kubelet[2564]: I0213 06:27:47.829954 2564 scope.go:117] "RemoveContainer" containerID="b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f" Feb 13 06:27:47.830056 env[1473]: time="2024-02-13T06:27:47.830029917Z" level=error msg="ContainerStatus for \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\": not found" Feb 13 06:27:47.830104 kubelet[2564]: E0213 06:27:47.830099 2564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\": not found" containerID="b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f" Feb 13 06:27:47.830129 kubelet[2564]: I0213 06:27:47.830117 2564 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f"} err="failed to get container status \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b513b20a27f934b2d16bd96e42650fc11d61ae1ca82c5dee89e29ac0b19b557f\": not found" Feb 13 06:27:47.830129 kubelet[2564]: I0213 06:27:47.830123 2564 scope.go:117] "RemoveContainer" containerID="3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7" Feb 13 06:27:47.830212 env[1473]: time="2024-02-13T06:27:47.830190049Z" level=error msg="ContainerStatus for \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\": not found" Feb 13 06:27:47.830249 kubelet[2564]: E0213 06:27:47.830245 2564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\": not found" containerID="3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7" Feb 13 06:27:47.830273 kubelet[2564]: I0213 06:27:47.830256 2564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7"} err="failed to get container status \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b86a46424861e23c9847760529e353fbf3aa17c8af4935f273c0d19ed7b3aa7\": not found" Feb 13 06:27:47.830273 kubelet[2564]: I0213 06:27:47.830261 2564 scope.go:117] "RemoveContainer" 
containerID="fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58" Feb 13 06:27:47.830387 env[1473]: time="2024-02-13T06:27:47.830335422Z" level=error msg="ContainerStatus for \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\": not found" Feb 13 06:27:47.830423 kubelet[2564]: E0213 06:27:47.830392 2564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\": not found" containerID="fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58" Feb 13 06:27:47.830423 kubelet[2564]: I0213 06:27:47.830402 2564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58"} err="failed to get container status \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc7f9c7795bf3ae84228fc81c85079e6f1dcb349ad713aae52bb47bf6697ce58\": not found" Feb 13 06:27:47.830423 kubelet[2564]: I0213 06:27:47.830407 2564 scope.go:117] "RemoveContainer" containerID="56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a" Feb 13 06:27:47.830504 env[1473]: time="2024-02-13T06:27:47.830466923Z" level=error msg="ContainerStatus for \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\": not found" Feb 13 06:27:47.830549 kubelet[2564]: E0213 06:27:47.830543 2564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\": not found" containerID="56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a" Feb 13 06:27:47.830575 kubelet[2564]: I0213 06:27:47.830562 2564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a"} err="failed to get container status \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"56ffe794a3947ce48e08d24a9a66788697ad9e449d60d181cfbbccf86fa73d7a\": not found" Feb 13 06:27:47.830575 kubelet[2564]: I0213 06:27:47.830571 2564 scope.go:117] "RemoveContainer" containerID="fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309" Feb 13 06:27:47.830666 env[1473]: time="2024-02-13T06:27:47.830643770Z" level=error msg="ContainerStatus for \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\": not found" Feb 13 06:27:47.830719 kubelet[2564]: E0213 06:27:47.830713 2564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\": not found" containerID="fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309" Feb 13 06:27:47.830744 kubelet[2564]: I0213 06:27:47.830729 2564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309"} err="failed to get container status \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"fc77b083b62963c29dc50fa7ed2a51c555f06b3ea70c85668744b17e68677309\": not found" Feb 13 06:27:48.181417 kubelet[2564]: I0213 06:27:48.181402 2564 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6bbe3bd3-de94-47e5-95ac-af114b6c5a76" path="/var/lib/kubelet/pods/6bbe3bd3-de94-47e5-95ac-af114b6c5a76/volumes" Feb 13 06:27:48.181686 kubelet[2564]: I0213 06:27:48.181656 2564 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c01960ea-196b-4b62-a22b-0bdac416c20d" path="/var/lib/kubelet/pods/c01960ea-196b-4b62-a22b-0bdac416c20d/volumes" Feb 13 06:27:48.572234 sshd[6719]: pam_unix(sshd:session): session closed for user core Feb 13 06:27:48.574682 systemd[1]: sshd@194-145.40.90.207:22-139.178.68.195:45224.service: Deactivated successfully. Feb 13 06:27:48.575143 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 06:27:48.575675 systemd-logind[1461]: Session 88 logged out. Waiting for processes to exit. Feb 13 06:27:48.576367 systemd[1]: Started sshd@195-145.40.90.207:22-139.178.68.195:48792.service. Feb 13 06:27:48.576891 systemd-logind[1461]: Removed session 88. Feb 13 06:27:48.606337 sshd[6890]: Accepted publickey for core from 139.178.68.195 port 48792 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:27:48.607247 sshd[6890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:27:48.610244 systemd-logind[1461]: New session 89 of user core. Feb 13 06:27:48.611211 systemd[1]: Started session-89.scope. Feb 13 06:27:48.981940 sshd[6890]: pam_unix(sshd:session): session closed for user core Feb 13 06:27:48.984208 systemd[1]: sshd@195-145.40.90.207:22-139.178.68.195:48792.service: Deactivated successfully. Feb 13 06:27:48.984626 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 06:27:48.985033 systemd-logind[1461]: Session 89 logged out. Waiting for processes to exit. 
Feb 13 06:27:48.985785 systemd[1]: Started sshd@196-145.40.90.207:22-139.178.68.195:48794.service. Feb 13 06:27:48.986169 systemd-logind[1461]: Removed session 89. Feb 13 06:27:48.991635 kubelet[2564]: I0213 06:27:48.991615 2564 topology_manager.go:215] "Topology Admit Handler" podUID="3b25b643-c993-4601-afe4-a3bbeeef458b" podNamespace="kube-system" podName="cilium-vk58j" Feb 13 06:27:48.991933 kubelet[2564]: E0213 06:27:48.991652 2564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c01960ea-196b-4b62-a22b-0bdac416c20d" containerName="mount-cgroup" Feb 13 06:27:48.991933 kubelet[2564]: E0213 06:27:48.991686 2564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c01960ea-196b-4b62-a22b-0bdac416c20d" containerName="apply-sysctl-overwrites" Feb 13 06:27:48.991933 kubelet[2564]: E0213 06:27:48.991693 2564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bbe3bd3-de94-47e5-95ac-af114b6c5a76" containerName="cilium-operator" Feb 13 06:27:48.991933 kubelet[2564]: E0213 06:27:48.991699 2564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c01960ea-196b-4b62-a22b-0bdac416c20d" containerName="cilium-agent" Feb 13 06:27:48.991933 kubelet[2564]: E0213 06:27:48.991706 2564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c01960ea-196b-4b62-a22b-0bdac416c20d" containerName="clean-cilium-state" Feb 13 06:27:48.991933 kubelet[2564]: E0213 06:27:48.991712 2564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c01960ea-196b-4b62-a22b-0bdac416c20d" containerName="mount-bpf-fs" Feb 13 06:27:48.991933 kubelet[2564]: I0213 06:27:48.991746 2564 memory_manager.go:346] "RemoveStaleState removing state" podUID="c01960ea-196b-4b62-a22b-0bdac416c20d" containerName="cilium-agent" Feb 13 06:27:48.991933 kubelet[2564]: I0213 06:27:48.991755 2564 memory_manager.go:346] "RemoveStaleState removing state" podUID="6bbe3bd3-de94-47e5-95ac-af114b6c5a76" containerName="cilium-operator" Feb 13 
06:27:48.995486 systemd[1]: Created slice kubepods-burstable-pod3b25b643_c993_4601_afe4_a3bbeeef458b.slice. Feb 13 06:27:49.017891 sshd[6914]: Accepted publickey for core from 139.178.68.195 port 48794 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:27:49.018692 sshd[6914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:27:49.020981 systemd-logind[1461]: New session 90 of user core. Feb 13 06:27:49.021522 systemd[1]: Started session-90.scope. Feb 13 06:27:49.059669 kubelet[2564]: I0213 06:27:49.059641 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-bpf-maps\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.059669 kubelet[2564]: I0213 06:27:49.059673 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-etc-cni-netd\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.059824 kubelet[2564]: I0213 06:27:49.059735 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-host-proc-sys-net\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.059824 kubelet[2564]: I0213 06:27:49.059761 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz7n2\" (UniqueName: \"kubernetes.io/projected/3b25b643-c993-4601-afe4-a3bbeeef458b-kube-api-access-pz7n2\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " 
pod="kube-system/cilium-vk58j" Feb 13 06:27:49.059824 kubelet[2564]: I0213 06:27:49.059780 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-run\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.059824 kubelet[2564]: I0213 06:27:49.059794 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-cgroup\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.059824 kubelet[2564]: I0213 06:27:49.059811 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cni-path\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.060003 kubelet[2564]: I0213 06:27:49.059828 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-lib-modules\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.060003 kubelet[2564]: I0213 06:27:49.059855 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b25b643-c993-4601-afe4-a3bbeeef458b-hubble-tls\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.060003 kubelet[2564]: I0213 06:27:49.059889 2564 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-ipsec-secrets\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.060003 kubelet[2564]: I0213 06:27:49.059937 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-hostproc\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.060003 kubelet[2564]: I0213 06:27:49.059968 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-xtables-lock\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.060003 kubelet[2564]: I0213 06:27:49.059987 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b25b643-c993-4601-afe4-a3bbeeef458b-clustermesh-secrets\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.060156 kubelet[2564]: I0213 06:27:49.060002 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-config-path\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.060156 kubelet[2564]: I0213 06:27:49.060016 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-host-proc-sys-kernel\") pod \"cilium-vk58j\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " pod="kube-system/cilium-vk58j" Feb 13 06:27:49.127001 sshd[6914]: pam_unix(sshd:session): session closed for user core Feb 13 06:27:49.129170 systemd[1]: sshd@196-145.40.90.207:22-139.178.68.195:48794.service: Deactivated successfully. Feb 13 06:27:49.129577 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 06:27:49.129962 systemd-logind[1461]: Session 90 logged out. Waiting for processes to exit. Feb 13 06:27:49.130650 systemd[1]: Started sshd@197-145.40.90.207:22-139.178.68.195:48802.service. Feb 13 06:27:49.131098 systemd-logind[1461]: Removed session 90. Feb 13 06:27:49.133421 kubelet[2564]: E0213 06:27:49.133401 2564 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-pz7n2 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-vk58j" podUID="3b25b643-c993-4601-afe4-a3bbeeef458b" Feb 13 06:27:49.161048 sshd[6941]: Accepted publickey for core from 139.178.68.195 port 48802 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:27:49.161905 sshd[6941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:27:49.172240 systemd-logind[1461]: New session 91 of user core. Feb 13 06:27:49.172885 systemd[1]: Started session-91.scope. 
Feb 13 06:27:49.558342 kubelet[2564]: E0213 06:27:49.558230 2564 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 06:27:49.866575 kubelet[2564]: I0213 06:27:49.866464 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b25b643-c993-4601-afe4-a3bbeeef458b-hubble-tls\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.866575 kubelet[2564]: I0213 06:27:49.866506 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-hostproc\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.866575 kubelet[2564]: I0213 06:27:49.866530 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-host-proc-sys-kernel\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.866575 kubelet[2564]: I0213 06:27:49.866553 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-bpf-maps\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.866575 kubelet[2564]: I0213 06:27:49.866563 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-hostproc" (OuterVolumeSpecName: "hostproc") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.866980 kubelet[2564]: I0213 06:27:49.866582 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-lib-modules\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.866980 kubelet[2564]: I0213 06:27:49.866611 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.866980 kubelet[2564]: I0213 06:27:49.866618 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-etc-cni-netd\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.866980 kubelet[2564]: I0213 06:27:49.866607 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.866980 kubelet[2564]: I0213 06:27:49.866656 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-host-proc-sys-net\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.867245 kubelet[2564]: I0213 06:27:49.866648 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.867245 kubelet[2564]: I0213 06:27:49.866657 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.867245 kubelet[2564]: I0213 06:27:49.866703 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-ipsec-secrets\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.867245 kubelet[2564]: I0213 06:27:49.866700 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.867245 kubelet[2564]: I0213 06:27:49.866742 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cni-path\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.867507 kubelet[2564]: I0213 06:27:49.866777 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-xtables-lock\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.867507 kubelet[2564]: I0213 06:27:49.866798 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cni-path" (OuterVolumeSpecName: "cni-path") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.867507 kubelet[2564]: I0213 06:27:49.866811 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.867507 kubelet[2564]: I0213 06:27:49.866818 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-cgroup\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.867507 kubelet[2564]: I0213 06:27:49.866838 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.867742 kubelet[2564]: I0213 06:27:49.866864 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz7n2\" (UniqueName: \"kubernetes.io/projected/3b25b643-c993-4601-afe4-a3bbeeef458b-kube-api-access-pz7n2\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.867742 kubelet[2564]: I0213 06:27:49.866890 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-run\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.867742 kubelet[2564]: I0213 06:27:49.866916 2564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-config-path\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.867742 kubelet[2564]: I0213 06:27:49.866942 2564 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b25b643-c993-4601-afe4-a3bbeeef458b-clustermesh-secrets\") pod \"3b25b643-c993-4601-afe4-a3bbeeef458b\" (UID: \"3b25b643-c993-4601-afe4-a3bbeeef458b\") " Feb 13 06:27:49.867742 kubelet[2564]: I0213 06:27:49.866946 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.867742 kubelet[2564]: I0213 06:27:49.866989 2564 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-bpf-maps\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.867971 kubelet[2564]: I0213 06:27:49.867004 2564 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-lib-modules\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.867971 kubelet[2564]: I0213 06:27:49.867016 2564 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-hostproc\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.867971 kubelet[2564]: I0213 06:27:49.867029 2564 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.867971 kubelet[2564]: I0213 06:27:49.867041 2564 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-etc-cni-netd\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.867971 kubelet[2564]: I0213 06:27:49.867054 2564 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-host-proc-sys-net\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.867971 kubelet[2564]: I0213 06:27:49.867066 2564 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cni-path\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.867971 kubelet[2564]: I0213 06:27:49.867079 2564 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-xtables-lock\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.867971 kubelet[2564]: I0213 06:27:49.867090 2564 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-cgroup\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.868803 kubelet[2564]: I0213 06:27:49.868745 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 06:27:49.869426 kubelet[2564]: I0213 06:27:49.869394 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b25b643-c993-4601-afe4-a3bbeeef458b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 06:27:49.869826 kubelet[2564]: I0213 06:27:49.869761 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b25b643-c993-4601-afe4-a3bbeeef458b-kube-api-access-pz7n2" (OuterVolumeSpecName: "kube-api-access-pz7n2") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "kube-api-access-pz7n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 06:27:49.869826 kubelet[2564]: I0213 06:27:49.869761 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 06:27:49.870085 kubelet[2564]: I0213 06:27:49.870037 2564 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b25b643-c993-4601-afe4-a3bbeeef458b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3b25b643-c993-4601-afe4-a3bbeeef458b" (UID: "3b25b643-c993-4601-afe4-a3bbeeef458b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 06:27:49.871411 systemd[1]: var-lib-kubelet-pods-3b25b643\x2dc993\x2d4601\x2dafe4\x2da3bbeeef458b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpz7n2.mount: Deactivated successfully. Feb 13 06:27:49.871538 systemd[1]: var-lib-kubelet-pods-3b25b643\x2dc993\x2d4601\x2dafe4\x2da3bbeeef458b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 13 06:27:49.871628 systemd[1]: var-lib-kubelet-pods-3b25b643\x2dc993\x2d4601\x2dafe4\x2da3bbeeef458b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 06:27:49.871707 systemd[1]: var-lib-kubelet-pods-3b25b643\x2dc993\x2d4601\x2dafe4\x2da3bbeeef458b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 06:27:49.968154 kubelet[2564]: I0213 06:27:49.968048 2564 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.968154 kubelet[2564]: I0213 06:27:49.968125 2564 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pz7n2\" (UniqueName: \"kubernetes.io/projected/3b25b643-c993-4601-afe4-a3bbeeef458b-kube-api-access-pz7n2\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.968154 kubelet[2564]: I0213 06:27:49.968164 2564 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-run\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.968759 kubelet[2564]: I0213 06:27:49.968202 2564 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b25b643-c993-4601-afe4-a3bbeeef458b-cilium-config-path\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath 
\"\"" Feb 13 06:27:49.968759 kubelet[2564]: I0213 06:27:49.968238 2564 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b25b643-c993-4601-afe4-a3bbeeef458b-clustermesh-secrets\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:49.968759 kubelet[2564]: I0213 06:27:49.968270 2564 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b25b643-c993-4601-afe4-a3bbeeef458b-hubble-tls\") on node \"ci-3510.3.2-a-25d9a0518b\" DevicePath \"\"" Feb 13 06:27:50.191558 systemd[1]: Removed slice kubepods-burstable-pod3b25b643_c993_4601_afe4_a3bbeeef458b.slice. Feb 13 06:27:50.846723 kubelet[2564]: I0213 06:27:50.846702 2564 topology_manager.go:215] "Topology Admit Handler" podUID="d309600f-fe65-465c-9bf0-2c7d464ba722" podNamespace="kube-system" podName="cilium-psfsq" Feb 13 06:27:50.849890 systemd[1]: Created slice kubepods-burstable-podd309600f_fe65_465c_9bf0_2c7d464ba722.slice. 
Feb 13 06:27:50.877147 kubelet[2564]: I0213 06:27:50.877121 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-host-proc-sys-kernel\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877283 kubelet[2564]: I0213 06:27:50.877156 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-cilium-run\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877283 kubelet[2564]: I0213 06:27:50.877179 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-host-proc-sys-net\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877283 kubelet[2564]: I0213 06:27:50.877203 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d309600f-fe65-465c-9bf0-2c7d464ba722-cilium-ipsec-secrets\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877283 kubelet[2564]: I0213 06:27:50.877225 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-hostproc\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877283 kubelet[2564]: I0213 06:27:50.877265 2564 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-etc-cni-netd\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877476 kubelet[2564]: I0213 06:27:50.877364 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d309600f-fe65-465c-9bf0-2c7d464ba722-clustermesh-secrets\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877476 kubelet[2564]: I0213 06:27:50.877405 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d309600f-fe65-465c-9bf0-2c7d464ba722-hubble-tls\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877476 kubelet[2564]: I0213 06:27:50.877438 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-cilium-cgroup\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877592 kubelet[2564]: I0213 06:27:50.877475 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-cni-path\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877592 kubelet[2564]: I0213 06:27:50.877526 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-lib-modules\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877592 kubelet[2564]: I0213 06:27:50.877582 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d309600f-fe65-465c-9bf0-2c7d464ba722-cilium-config-path\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877709 kubelet[2564]: I0213 06:27:50.877623 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqx9b\" (UniqueName: \"kubernetes.io/projected/d309600f-fe65-465c-9bf0-2c7d464ba722-kube-api-access-rqx9b\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877709 kubelet[2564]: I0213 06:27:50.877648 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-bpf-maps\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:50.877709 kubelet[2564]: I0213 06:27:50.877667 2564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d309600f-fe65-465c-9bf0-2c7d464ba722-xtables-lock\") pod \"cilium-psfsq\" (UID: \"d309600f-fe65-465c-9bf0-2c7d464ba722\") " pod="kube-system/cilium-psfsq" Feb 13 06:27:51.156426 env[1473]: time="2024-02-13T06:27:51.156178097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-psfsq,Uid:d309600f-fe65-465c-9bf0-2c7d464ba722,Namespace:kube-system,Attempt:0,}" Feb 13 06:27:51.177688 env[1473]: time="2024-02-13T06:27:51.177472333Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 06:27:51.177688 env[1473]: time="2024-02-13T06:27:51.177565792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 06:27:51.177688 env[1473]: time="2024-02-13T06:27:51.177605841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 06:27:51.178135 env[1473]: time="2024-02-13T06:27:51.177946214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f pid=6981 runtime=io.containerd.runc.v2 Feb 13 06:27:51.205222 systemd[1]: Started cri-containerd-f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f.scope. Feb 13 06:27:51.243093 env[1473]: time="2024-02-13T06:27:51.242987097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-psfsq,Uid:d309600f-fe65-465c-9bf0-2c7d464ba722,Namespace:kube-system,Attempt:0,} returns sandbox id \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\"" Feb 13 06:27:51.247258 env[1473]: time="2024-02-13T06:27:51.247164706Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 06:27:51.258535 env[1473]: time="2024-02-13T06:27:51.258439285Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b09f2dbacb5bb2b7c2c045767ea539a43f5e7defe86609bcc99acceb1ca2fed\"" Feb 13 06:27:51.259118 env[1473]: time="2024-02-13T06:27:51.259029141Z" level=info msg="StartContainer for 
\"9b09f2dbacb5bb2b7c2c045767ea539a43f5e7defe86609bcc99acceb1ca2fed\"" Feb 13 06:27:51.284559 systemd[1]: Started cri-containerd-9b09f2dbacb5bb2b7c2c045767ea539a43f5e7defe86609bcc99acceb1ca2fed.scope. Feb 13 06:27:51.323672 env[1473]: time="2024-02-13T06:27:51.323595850Z" level=info msg="StartContainer for \"9b09f2dbacb5bb2b7c2c045767ea539a43f5e7defe86609bcc99acceb1ca2fed\" returns successfully" Feb 13 06:27:51.341936 systemd[1]: cri-containerd-9b09f2dbacb5bb2b7c2c045767ea539a43f5e7defe86609bcc99acceb1ca2fed.scope: Deactivated successfully. Feb 13 06:27:51.377434 env[1473]: time="2024-02-13T06:27:51.377308646Z" level=info msg="shim disconnected" id=9b09f2dbacb5bb2b7c2c045767ea539a43f5e7defe86609bcc99acceb1ca2fed Feb 13 06:27:51.377434 env[1473]: time="2024-02-13T06:27:51.377405062Z" level=warning msg="cleaning up after shim disconnected" id=9b09f2dbacb5bb2b7c2c045767ea539a43f5e7defe86609bcc99acceb1ca2fed namespace=k8s.io Feb 13 06:27:51.377434 env[1473]: time="2024-02-13T06:27:51.377427398Z" level=info msg="cleaning up dead shim" Feb 13 06:27:51.389497 env[1473]: time="2024-02-13T06:27:51.389407563Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7068 runtime=io.containerd.runc.v2\n" Feb 13 06:27:51.835965 env[1473]: time="2024-02-13T06:27:51.835843658Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 06:27:51.849651 env[1473]: time="2024-02-13T06:27:51.849512595Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa66491ed2a7626c6ae860e831a7d3ce5cd33c81f34ffc40415848be282f0e47\"" Feb 13 06:27:51.850622 env[1473]: time="2024-02-13T06:27:51.850495639Z" level=info msg="StartContainer for 
\"aa66491ed2a7626c6ae860e831a7d3ce5cd33c81f34ffc40415848be282f0e47\"" Feb 13 06:27:51.888479 systemd[1]: Started cri-containerd-aa66491ed2a7626c6ae860e831a7d3ce5cd33c81f34ffc40415848be282f0e47.scope. Feb 13 06:27:51.941493 env[1473]: time="2024-02-13T06:27:51.941406034Z" level=info msg="StartContainer for \"aa66491ed2a7626c6ae860e831a7d3ce5cd33c81f34ffc40415848be282f0e47\" returns successfully" Feb 13 06:27:51.960605 systemd[1]: cri-containerd-aa66491ed2a7626c6ae860e831a7d3ce5cd33c81f34ffc40415848be282f0e47.scope: Deactivated successfully. Feb 13 06:27:51.997064 env[1473]: time="2024-02-13T06:27:51.997004827Z" level=info msg="shim disconnected" id=aa66491ed2a7626c6ae860e831a7d3ce5cd33c81f34ffc40415848be282f0e47 Feb 13 06:27:51.997064 env[1473]: time="2024-02-13T06:27:51.997062755Z" level=warning msg="cleaning up after shim disconnected" id=aa66491ed2a7626c6ae860e831a7d3ce5cd33c81f34ffc40415848be282f0e47 namespace=k8s.io Feb 13 06:27:51.997375 env[1473]: time="2024-02-13T06:27:51.997079537Z" level=info msg="cleaning up dead shim" Feb 13 06:27:51.997519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa66491ed2a7626c6ae860e831a7d3ce5cd33c81f34ffc40415848be282f0e47-rootfs.mount: Deactivated successfully. 
Feb 13 06:27:52.006067 env[1473]: time="2024-02-13T06:27:52.005997061Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7129 runtime=io.containerd.runc.v2\n" Feb 13 06:27:52.180555 kubelet[2564]: I0213 06:27:52.180511 2564 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3b25b643-c993-4601-afe4-a3bbeeef458b" path="/var/lib/kubelet/pods/3b25b643-c993-4601-afe4-a3bbeeef458b/volumes" Feb 13 06:27:52.834836 env[1473]: time="2024-02-13T06:27:52.834802757Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 06:27:52.842747 env[1473]: time="2024-02-13T06:27:52.842685185Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"68f8556adc91bdc1cb8b5f30d6d17e2ef58ae6345aaaa65d11b83a1f9956069d\"" Feb 13 06:27:52.843174 env[1473]: time="2024-02-13T06:27:52.843118993Z" level=info msg="StartContainer for \"68f8556adc91bdc1cb8b5f30d6d17e2ef58ae6345aaaa65d11b83a1f9956069d\"" Feb 13 06:27:52.860233 systemd[1]: Started cri-containerd-68f8556adc91bdc1cb8b5f30d6d17e2ef58ae6345aaaa65d11b83a1f9956069d.scope. Feb 13 06:27:52.886174 env[1473]: time="2024-02-13T06:27:52.886104982Z" level=info msg="StartContainer for \"68f8556adc91bdc1cb8b5f30d6d17e2ef58ae6345aaaa65d11b83a1f9956069d\" returns successfully" Feb 13 06:27:52.889736 systemd[1]: cri-containerd-68f8556adc91bdc1cb8b5f30d6d17e2ef58ae6345aaaa65d11b83a1f9956069d.scope: Deactivated successfully. 
Feb 13 06:27:52.910719 env[1473]: time="2024-02-13T06:27:52.910662324Z" level=info msg="shim disconnected" id=68f8556adc91bdc1cb8b5f30d6d17e2ef58ae6345aaaa65d11b83a1f9956069d Feb 13 06:27:52.910719 env[1473]: time="2024-02-13T06:27:52.910714569Z" level=warning msg="cleaning up after shim disconnected" id=68f8556adc91bdc1cb8b5f30d6d17e2ef58ae6345aaaa65d11b83a1f9956069d namespace=k8s.io Feb 13 06:27:52.911026 env[1473]: time="2024-02-13T06:27:52.910728795Z" level=info msg="cleaning up dead shim" Feb 13 06:27:52.921522 env[1473]: time="2024-02-13T06:27:52.921435262Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7185 runtime=io.containerd.runc.v2\n" Feb 13 06:27:52.987919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68f8556adc91bdc1cb8b5f30d6d17e2ef58ae6345aaaa65d11b83a1f9956069d-rootfs.mount: Deactivated successfully. Feb 13 06:27:53.846709 env[1473]: time="2024-02-13T06:27:53.846589626Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 06:27:53.856992 env[1473]: time="2024-02-13T06:27:53.856969086Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a478bd5d5ba21d553d6b7dbfdb904963c589f1fdafd6c08e6ef13d1c0fce36b3\"" Feb 13 06:27:53.857276 env[1473]: time="2024-02-13T06:27:53.857257168Z" level=info msg="StartContainer for \"a478bd5d5ba21d553d6b7dbfdb904963c589f1fdafd6c08e6ef13d1c0fce36b3\"" Feb 13 06:27:53.857707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4156399868.mount: Deactivated successfully. Feb 13 06:27:53.865827 systemd[1]: Started cri-containerd-a478bd5d5ba21d553d6b7dbfdb904963c589f1fdafd6c08e6ef13d1c0fce36b3.scope. 
Feb 13 06:27:53.877629 env[1473]: time="2024-02-13T06:27:53.877606506Z" level=info msg="StartContainer for \"a478bd5d5ba21d553d6b7dbfdb904963c589f1fdafd6c08e6ef13d1c0fce36b3\" returns successfully" Feb 13 06:27:53.877727 systemd[1]: cri-containerd-a478bd5d5ba21d553d6b7dbfdb904963c589f1fdafd6c08e6ef13d1c0fce36b3.scope: Deactivated successfully. Feb 13 06:27:53.887886 env[1473]: time="2024-02-13T06:27:53.887815956Z" level=info msg="shim disconnected" id=a478bd5d5ba21d553d6b7dbfdb904963c589f1fdafd6c08e6ef13d1c0fce36b3 Feb 13 06:27:53.887886 env[1473]: time="2024-02-13T06:27:53.887884706Z" level=warning msg="cleaning up after shim disconnected" id=a478bd5d5ba21d553d6b7dbfdb904963c589f1fdafd6c08e6ef13d1c0fce36b3 namespace=k8s.io Feb 13 06:27:53.888005 env[1473]: time="2024-02-13T06:27:53.887890682Z" level=info msg="cleaning up dead shim" Feb 13 06:27:53.891279 env[1473]: time="2024-02-13T06:27:53.891258666Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7238 runtime=io.containerd.runc.v2\n" Feb 13 06:27:53.988352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a478bd5d5ba21d553d6b7dbfdb904963c589f1fdafd6c08e6ef13d1c0fce36b3-rootfs.mount: Deactivated successfully. 
Feb 13 06:27:54.560103 kubelet[2564]: E0213 06:27:54.559998 2564 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 06:27:54.845936 env[1473]: time="2024-02-13T06:27:54.845870004Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 06:27:54.853437 env[1473]: time="2024-02-13T06:27:54.853358585Z" level=info msg="CreateContainer within sandbox \"f12ea814fb971055b65771422d8368d07df4f54b8735f1601aa7eedfe7ee952f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7128f30584b1e8bc4385d484c0d9e51bf23ab45c26ace52b3f0516a365dcbe3e\"" Feb 13 06:27:54.853800 env[1473]: time="2024-02-13T06:27:54.853760066Z" level=info msg="StartContainer for \"7128f30584b1e8bc4385d484c0d9e51bf23ab45c26ace52b3f0516a365dcbe3e\"" Feb 13 06:27:54.864139 systemd[1]: Started cri-containerd-7128f30584b1e8bc4385d484c0d9e51bf23ab45c26ace52b3f0516a365dcbe3e.scope. 
Feb 13 06:27:54.878837 env[1473]: time="2024-02-13T06:27:54.878804401Z" level=info msg="StartContainer for \"7128f30584b1e8bc4385d484c0d9e51bf23ab45c26ace52b3f0516a365dcbe3e\" returns successfully" Feb 13 06:27:55.037285 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 06:27:55.858997 kubelet[2564]: I0213 06:27:55.858973 2564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-psfsq" podStartSLOduration=5.858942306 podCreationTimestamp="2024-02-13 06:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 06:27:55.858541019 +0000 UTC m=+1211.754103783" watchObservedRunningTime="2024-02-13 06:27:55.858942306 +0000 UTC m=+1211.754505064" Feb 13 06:27:58.008431 systemd-networkd[1319]: lxc_health: Link UP Feb 13 06:27:58.038260 systemd-networkd[1319]: lxc_health: Gained carrier Feb 13 06:27:58.038405 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 06:27:59.145419 systemd-networkd[1319]: lxc_health: Gained IPv6LL Feb 13 06:28:37.365464 systemd[1]: Started sshd@198-145.40.90.207:22-165.154.0.66:52802.service. Feb 13 06:28:38.677182 sshd[8526]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.0.66 user=root Feb 13 06:28:40.468521 sshd[8526]: Failed password for root from 165.154.0.66 port 52802 ssh2 Feb 13 06:28:40.960975 sshd[8526]: Received disconnect from 165.154.0.66 port 52802:11: Bye Bye [preauth] Feb 13 06:28:40.960975 sshd[8526]: Disconnected from authenticating user root 165.154.0.66 port 52802 [preauth] Feb 13 06:28:40.961421 systemd[1]: sshd@198-145.40.90.207:22-165.154.0.66:52802.service: Deactivated successfully. 
Feb 13 06:28:44.216931 env[1473]: time="2024-02-13T06:28:44.216854900Z" level=info msg="StopPodSandbox for \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\"" Feb 13 06:28:44.217333 env[1473]: time="2024-02-13T06:28:44.216957824Z" level=info msg="TearDown network for sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" successfully" Feb 13 06:28:44.217333 env[1473]: time="2024-02-13T06:28:44.216997673Z" level=info msg="StopPodSandbox for \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" returns successfully" Feb 13 06:28:44.217445 env[1473]: time="2024-02-13T06:28:44.217416862Z" level=info msg="RemovePodSandbox for \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\"" Feb 13 06:28:44.217489 env[1473]: time="2024-02-13T06:28:44.217447667Z" level=info msg="Forcibly stopping sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\"" Feb 13 06:28:44.217543 env[1473]: time="2024-02-13T06:28:44.217524200Z" level=info msg="TearDown network for sandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" successfully" Feb 13 06:28:44.219541 env[1473]: time="2024-02-13T06:28:44.219509738Z" level=info msg="RemovePodSandbox \"67fb2f0e3fa62beb1adfc108f2fafcfb509a9f2cc55440ec77861a383950952e\" returns successfully" Feb 13 06:28:44.219867 env[1473]: time="2024-02-13T06:28:44.219823797Z" level=info msg="StopPodSandbox for \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\"" Feb 13 06:28:44.219925 env[1473]: time="2024-02-13T06:28:44.219885208Z" level=info msg="TearDown network for sandbox \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\" successfully" Feb 13 06:28:44.219925 env[1473]: time="2024-02-13T06:28:44.219916404Z" level=info msg="StopPodSandbox for \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\" returns successfully" Feb 13 06:28:44.220244 env[1473]: time="2024-02-13T06:28:44.220217932Z" level=info 
msg="RemovePodSandbox for \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\"" Feb 13 06:28:44.220306 env[1473]: time="2024-02-13T06:28:44.220251341Z" level=info msg="Forcibly stopping sandbox \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\"" Feb 13 06:28:44.220394 env[1473]: time="2024-02-13T06:28:44.220341644Z" level=info msg="TearDown network for sandbox \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\" successfully" Feb 13 06:28:44.222356 env[1473]: time="2024-02-13T06:28:44.222324251Z" level=info msg="RemovePodSandbox \"296703274cc67a96e55989a150f571c834c6ce7d4fd86b037481f7814f0a2240\" returns successfully" Feb 13 06:28:51.299372 sshd[6941]: pam_unix(sshd:session): session closed for user core Feb 13 06:28:51.305506 systemd[1]: sshd@197-145.40.90.207:22-139.178.68.195:48802.service: Deactivated successfully. Feb 13 06:28:51.307419 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 06:28:51.307812 systemd[1]: session-91.scope: Consumed 1.030s CPU time. Feb 13 06:28:51.309225 systemd-logind[1461]: Session 91 logged out. Waiting for processes to exit. Feb 13 06:28:51.311516 systemd-logind[1461]: Removed session 91.