Feb 9 12:04:10.551595 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 9 12:04:10.551608 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 12:04:10.551615 kernel: BIOS-provided physical RAM map: Feb 9 12:04:10.551619 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Feb 9 12:04:10.551622 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Feb 9 12:04:10.551626 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Feb 9 12:04:10.551630 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Feb 9 12:04:10.551634 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Feb 9 12:04:10.551638 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000082589fff] usable Feb 9 12:04:10.551642 kernel: BIOS-e820: [mem 0x000000008258a000-0x000000008258afff] ACPI NVS Feb 9 12:04:10.551647 kernel: BIOS-e820: [mem 0x000000008258b000-0x000000008258bfff] reserved Feb 9 12:04:10.551650 kernel: BIOS-e820: [mem 0x000000008258c000-0x000000008afccfff] usable Feb 9 12:04:10.551654 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Feb 9 12:04:10.551658 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Feb 9 12:04:10.551663 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Feb 9 12:04:10.551668 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Feb 9 12:04:10.551672 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Feb 9 12:04:10.551676 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Feb 9 12:04:10.551680 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 9 12:04:10.551685 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Feb 9 12:04:10.551689 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Feb 9 12:04:10.551693 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 9 12:04:10.551697 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Feb 9 12:04:10.551701 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Feb 9 12:04:10.551705 kernel: NX (Execute Disable) protection: active Feb 9 12:04:10.551709 kernel: SMBIOS 3.2.1 present. 
Feb 9 12:04:10.551714 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Feb 9 12:04:10.551718 kernel: tsc: Detected 3400.000 MHz processor Feb 9 12:04:10.551723 kernel: tsc: Detected 3399.906 MHz TSC Feb 9 12:04:10.551727 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 12:04:10.551732 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 12:04:10.551736 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Feb 9 12:04:10.551740 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 12:04:10.551745 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Feb 9 12:04:10.551749 kernel: Using GB pages for direct mapping Feb 9 12:04:10.551753 kernel: ACPI: Early table checksum verification disabled Feb 9 12:04:10.551758 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Feb 9 12:04:10.551762 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 9 12:04:10.551767 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Feb 9 12:04:10.551771 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 9 12:04:10.551777 kernel: ACPI: FACS 0x000000008C66CF80 000040 Feb 9 12:04:10.551782 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Feb 9 12:04:10.551787 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Feb 9 12:04:10.551792 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 9 12:04:10.551797 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 9 12:04:10.551802 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Feb 9 12:04:10.551806 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 9 12:04:10.551811 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 9 12:04:10.551815 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 9 12:04:10.551820 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 12:04:10.551825 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 9 12:04:10.551830 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 9 12:04:10.551835 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 12:04:10.551839 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 12:04:10.551844 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 9 12:04:10.551848 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 9 12:04:10.551853 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 12:04:10.551858 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 9 12:04:10.551863 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 9 12:04:10.551868 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Feb 9 12:04:10.551872 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 9 12:04:10.551877 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 9 12:04:10.551881 kernel: ACPI: SSDT 
0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 9 12:04:10.551886 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Feb 9 12:04:10.551891 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 9 12:04:10.551896 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 9 12:04:10.551900 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 9 12:04:10.551906 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Feb 9 12:04:10.551910 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 9 12:04:10.551915 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Feb 9 12:04:10.551919 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Feb 9 12:04:10.551924 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Feb 9 12:04:10.551929 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Feb 9 12:04:10.551933 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Feb 9 12:04:10.551938 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Feb 9 12:04:10.551944 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Feb 9 12:04:10.551950 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Feb 9 12:04:10.551955 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Feb 9 12:04:10.551961 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Feb 9 12:04:10.551966 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Feb 9 12:04:10.551972 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Feb 9 12:04:10.551977 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Feb 9 12:04:10.551983 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Feb 9 12:04:10.551988 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Feb 9 12:04:10.551994 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Feb 9 12:04:10.551999 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Feb 9 12:04:10.552003 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Feb 9 12:04:10.552008 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Feb 9 12:04:10.552013 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Feb 9 12:04:10.552017 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Feb 9 12:04:10.552022 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Feb 9 12:04:10.552026 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Feb 9 12:04:10.552031 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Feb 9 12:04:10.552036 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Feb 9 12:04:10.552041 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Feb 9 12:04:10.552045 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Feb 9 12:04:10.552050 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Feb 9 12:04:10.552054 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Feb 9 12:04:10.552059 kernel: ACPI: Reserving HEST table memory at [mem 
0x8c599ff8-0x8c59a273] Feb 9 12:04:10.552064 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Feb 9 12:04:10.552068 kernel: No NUMA configuration found Feb 9 12:04:10.552073 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Feb 9 12:04:10.552078 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Feb 9 12:04:10.552083 kernel: Zone ranges: Feb 9 12:04:10.552088 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 12:04:10.552092 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 9 12:04:10.552097 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Feb 9 12:04:10.552101 kernel: Movable zone start for each node Feb 9 12:04:10.552106 kernel: Early memory node ranges Feb 9 12:04:10.552111 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 9 12:04:10.552115 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 9 12:04:10.552120 kernel: node 0: [mem 0x0000000040400000-0x0000000082589fff] Feb 9 12:04:10.552125 kernel: node 0: [mem 0x000000008258c000-0x000000008afccfff] Feb 9 12:04:10.552130 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Feb 9 12:04:10.552134 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Feb 9 12:04:10.552139 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Feb 9 12:04:10.552144 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Feb 9 12:04:10.552148 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 12:04:10.552156 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 9 12:04:10.552162 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 9 12:04:10.552167 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 9 12:04:10.552172 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Feb 9 12:04:10.552177 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Feb 9 12:04:10.552182 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Feb 9 12:04:10.552187 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Feb 9 12:04:10.552192 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 9 12:04:10.552197 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 9 12:04:10.552202 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 9 12:04:10.552207 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 9 12:04:10.552213 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 9 12:04:10.552217 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 9 12:04:10.552222 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 9 12:04:10.552227 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 9 12:04:10.552232 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 9 12:04:10.552237 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 9 12:04:10.552242 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 9 12:04:10.552247 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 9 12:04:10.552252 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 9 12:04:10.552257 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 9 12:04:10.552262 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 9 12:04:10.552267 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 9 12:04:10.552272 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 9 12:04:10.552277 kernel: IOAPIC[0]: apic_id 2, version 32, address 
0xfec00000, GSI 0-119 Feb 9 12:04:10.552282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 9 12:04:10.552287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 12:04:10.552292 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 12:04:10.552297 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 12:04:10.552302 kernel: TSC deadline timer available Feb 9 12:04:10.552307 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 9 12:04:10.552312 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Feb 9 12:04:10.552317 kernel: Booting paravirtualized kernel on bare hardware Feb 9 12:04:10.552322 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 12:04:10.552327 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Feb 9 12:04:10.552332 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Feb 9 12:04:10.552337 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 9 12:04:10.552342 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 9 12:04:10.552348 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Feb 9 12:04:10.552353 kernel: Policy zone: Normal Feb 9 12:04:10.552358 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 12:04:10.552363 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 12:04:10.552368 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 9 12:04:10.552373 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 9 12:04:10.552378 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 12:04:10.552387 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved) Feb 9 12:04:10.552393 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 9 12:04:10.552398 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 12:04:10.552403 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 12:04:10.552408 kernel: rcu: Hierarchical RCU implementation. Feb 9 12:04:10.552413 kernel: rcu: RCU event tracing is enabled. Feb 9 12:04:10.552418 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 9 12:04:10.552423 kernel: Rude variant of Tasks RCU enabled. Feb 9 12:04:10.552428 kernel: Tracing variant of Tasks RCU enabled. Feb 9 12:04:10.552433 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 12:04:10.552439 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 9 12:04:10.552444 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 9 12:04:10.552449 kernel: random: crng init done Feb 9 12:04:10.552454 kernel: Console: colour dummy device 80x25 Feb 9 12:04:10.552459 kernel: printk: console [tty0] enabled Feb 9 12:04:10.552464 kernel: printk: console [ttyS1] enabled Feb 9 12:04:10.552468 kernel: ACPI: Core revision 20210730 Feb 9 12:04:10.552473 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Feb 9 12:04:10.552478 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 12:04:10.552484 kernel: DMAR: Host address width 39 Feb 9 12:04:10.552489 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 9 12:04:10.552494 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 9 12:04:10.552499 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Feb 9 12:04:10.552504 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Feb 9 12:04:10.552509 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 9 12:04:10.552514 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 9 12:04:10.552519 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 9 12:04:10.552524 kernel: x2apic enabled Feb 9 12:04:10.552530 kernel: Switched APIC routing to cluster x2apic. Feb 9 12:04:10.552535 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 9 12:04:10.552540 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 9 12:04:10.552545 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 9 12:04:10.552550 kernel: process: using mwait in idle threads Feb 9 12:04:10.552555 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 12:04:10.552560 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 12:04:10.552564 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 12:04:10.552569 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 12:04:10.552575 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 9 12:04:10.552580 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 12:04:10.552585 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 9 12:04:10.552590 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 9 12:04:10.552594 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 12:04:10.552599 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 12:04:10.552604 kernel: TAA: Mitigation: TSX disabled Feb 9 12:04:10.552609 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 9 12:04:10.552614 kernel: SRBDS: Mitigation: Microcode Feb 9 12:04:10.552619 kernel: GDS: Vulnerable: No microcode Feb 9 12:04:10.552624 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 12:04:10.552629 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 12:04:10.552634 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 12:04:10.552639 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 9 12:04:10.552644 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 9 12:04:10.552649 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 12:04:10.552654 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 9 12:04:10.552659 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 9 12:04:10.552664 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 9 12:04:10.552668 kernel: Freeing SMP alternatives memory: 32K Feb 9 12:04:10.552673 kernel: pid_max: default: 32768 minimum: 301 Feb 9 12:04:10.552678 kernel: LSM: Security Framework initializing Feb 9 12:04:10.552683 kernel: SELinux: Initializing. Feb 9 12:04:10.552689 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 12:04:10.552693 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 12:04:10.552698 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 9 12:04:10.552703 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 9 12:04:10.552708 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 9 12:04:10.552713 kernel: ... version: 4 Feb 9 12:04:10.552718 kernel: ... bit width: 48 Feb 9 12:04:10.552723 kernel: ... generic registers: 4 Feb 9 12:04:10.552728 kernel: ... value mask: 0000ffffffffffff Feb 9 12:04:10.552733 kernel: ... max period: 00007fffffffffff Feb 9 12:04:10.552738 kernel: ... fixed-purpose events: 3 Feb 9 12:04:10.552744 kernel: ... event mask: 000000070000000f Feb 9 12:04:10.552748 kernel: signal: max sigframe size: 2032 Feb 9 12:04:10.552753 kernel: rcu: Hierarchical SRCU implementation. Feb 9 12:04:10.552758 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 9 12:04:10.552763 kernel: smp: Bringing up secondary CPUs ... Feb 9 12:04:10.552768 kernel: x86: Booting SMP configuration: Feb 9 12:04:10.552773 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Feb 9 12:04:10.552778 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 9 12:04:10.552784 kernel: #9 #10 #11 #12 #13 #14 #15 Feb 9 12:04:10.552789 kernel: smp: Brought up 1 node, 16 CPUs Feb 9 12:04:10.552794 kernel: smpboot: Max logical packages: 1 Feb 9 12:04:10.552799 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 9 12:04:10.552803 kernel: devtmpfs: initialized Feb 9 12:04:10.552808 kernel: x86/mm: Memory block size: 128MB Feb 9 12:04:10.552813 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8258a000-0x8258afff] (4096 bytes) Feb 9 12:04:10.552818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Feb 9 12:04:10.552824 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 12:04:10.552829 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 9 12:04:10.552834 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 12:04:10.552839 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 12:04:10.552844 kernel: audit: initializing netlink subsys (disabled) Feb 9 12:04:10.552849 kernel: audit: type=2000 audit(1707480245.040:1): state=initialized audit_enabled=0 res=1 Feb 9 12:04:10.552854 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 12:04:10.552859 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 12:04:10.552864 kernel: cpuidle: using governor menu Feb 9 12:04:10.552869 kernel: ACPI: bus type PCI registered Feb 9 12:04:10.552874 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 12:04:10.552879 kernel: dca service started, version 1.12.1 Feb 9 12:04:10.552884 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 9 12:04:10.552889 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Feb 9 12:04:10.552894 kernel: PCI: Using configuration type 1 for base access Feb 9 12:04:10.552899 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 9 12:04:10.552904 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 12:04:10.552909 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 12:04:10.552914 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 12:04:10.552919 kernel: ACPI: Added _OSI(Module Device) Feb 9 12:04:10.552924 kernel: ACPI: Added _OSI(Processor Device) Feb 9 12:04:10.552929 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 12:04:10.552934 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 12:04:10.552939 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 12:04:10.552944 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 12:04:10.552949 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 12:04:10.552954 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 9 12:04:10.552959 kernel: ACPI: Dynamic OEM Table Load: Feb 9 12:04:10.552964 kernel: ACPI: SSDT 0xFFFF9D4500213400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 9 12:04:10.552969 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Feb 9 12:04:10.552974 kernel: ACPI: Dynamic OEM Table Load: Feb 9 12:04:10.552979 kernel: ACPI: SSDT 0xFFFF9D4501AE6400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 9 12:04:10.552984 kernel: ACPI: Dynamic OEM Table Load: Feb 9 12:04:10.552989 kernel: ACPI: SSDT 0xFFFF9D4501A59800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 9 12:04:10.552994 kernel: ACPI: Dynamic OEM Table Load: Feb 9 12:04:10.552999 kernel: ACPI: SSDT 0xFFFF9D4501A5F800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 9 12:04:10.553003 kernel: ACPI: Dynamic OEM Table Load: Feb 9 12:04:10.553009 kernel: ACPI: SSDT 0xFFFF9D450014E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 9 12:04:10.553014 kernel: ACPI: Dynamic OEM Table Load: Feb 9 12:04:10.553019 kernel: ACPI: SSDT 0xFFFF9D4501AE2400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 9 12:04:10.553024 kernel: ACPI: Interpreter enabled Feb 9 12:04:10.553029 kernel: ACPI: PM: (supports S0 S5) Feb 9 12:04:10.553034 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 12:04:10.553039 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 9 12:04:10.553044 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 9 12:04:10.553048 kernel: HEST: Table parsing has been initialized. Feb 9 12:04:10.553054 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Feb 9 12:04:10.553059 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 12:04:10.553064 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 9 12:04:10.553069 kernel: ACPI: PM: Power Resource [USBC] Feb 9 12:04:10.553074 kernel: ACPI: PM: Power Resource [V0PR] Feb 9 12:04:10.553079 kernel: ACPI: PM: Power Resource [V1PR] Feb 9 12:04:10.553083 kernel: ACPI: PM: Power Resource [V2PR] Feb 9 12:04:10.553088 kernel: ACPI: PM: Power Resource [WRST] Feb 9 12:04:10.553093 kernel: ACPI: PM: Power Resource [FN00] Feb 9 12:04:10.553099 kernel: ACPI: PM: Power Resource [FN01] Feb 9 12:04:10.553104 kernel: ACPI: PM: Power Resource [FN02] Feb 9 12:04:10.553109 kernel: ACPI: PM: Power Resource [FN03] Feb 9 12:04:10.553113 kernel: ACPI: PM: Power Resource [FN04] Feb 9 12:04:10.553118 kernel: ACPI: PM: Power Resource [PIN] Feb 9 12:04:10.553123 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 9 12:04:10.553187 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 12:04:10.553232 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 9 12:04:10.553274 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 9 12:04:10.553281 kernel: PCI host bridge to bus 0000:00 Feb 9 12:04:10.553327 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 12:04:10.553364 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 12:04:10.553408 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 12:04:10.553454 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Feb 9 12:04:10.553499 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 9 12:04:10.553546 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 9 12:04:10.553607 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 9 12:04:10.553665 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 9 12:04:10.553719 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 9 12:04:10.553774 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 9 12:04:10.553826 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Feb 9 12:04:10.553883 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 9 12:04:10.553936 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Feb 9 12:04:10.553995 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 9 12:04:10.554048 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Feb 9 12:04:10.554101 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 9 12:04:10.554156 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 9 12:04:10.554211 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Feb 9 12:04:10.554261 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Feb 9 12:04:10.554318 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 9 12:04:10.554369 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 9 12:04:10.554427 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 9 12:04:10.554479 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 9 12:04:10.554535 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 9 12:04:10.554588 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Feb 9 12:04:10.554639 
kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 9 12:04:10.554693 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 9 12:04:10.554744 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Feb 9 12:04:10.554795 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 9 12:04:10.554852 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 9 12:04:10.554906 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Feb 9 12:04:10.554956 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 9 12:04:10.555010 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 9 12:04:10.555061 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Feb 9 12:04:10.555111 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Feb 9 12:04:10.555164 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Feb 9 12:04:10.555229 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Feb 9 12:04:10.555286 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Feb 9 12:04:10.555337 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Feb 9 12:04:10.555390 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 9 12:04:10.555446 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 9 12:04:10.555498 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 9 12:04:10.555553 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 9 12:04:10.555604 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 9 12:04:10.555662 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 9 12:04:10.555713 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 9 12:04:10.555768 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 9 12:04:10.555819 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 9 12:04:10.555876 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Feb 9 12:04:10.555929 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Feb 9 12:04:10.555982 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 9 12:04:10.556034 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 9 12:04:10.556090 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 9 12:04:10.556147 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 9 12:04:10.556197 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Feb 9 12:04:10.556249 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 9 12:04:10.556302 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 9 12:04:10.556353 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 9 12:04:10.556413 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 9 12:04:10.556469 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 9 12:04:10.556522 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Feb 9 12:04:10.556574 kernel: pci 0000:01:00.0: PME# supported from D3cold Feb 9 12:04:10.556627 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 9 12:04:10.556679 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 9 12:04:10.556737 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 9 12:04:10.556790 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Feb 9 12:04:10.556846 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff 
pref] Feb 9 12:04:10.556898 kernel: pci 0000:01:00.1: PME# supported from D3cold Feb 9 12:04:10.556951 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 9 12:04:10.557036 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 9 12:04:10.557109 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 12:04:10.557159 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 9 12:04:10.557207 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 12:04:10.557249 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 9 12:04:10.557300 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 9 12:04:10.557344 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 9 12:04:10.557412 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 9 12:04:10.557480 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 9 12:04:10.557522 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 9 12:04:10.557564 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 9 12:04:10.557604 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 12:04:10.557646 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 9 12:04:10.557693 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 9 12:04:10.557737 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 9 12:04:10.557780 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 9 12:04:10.557824 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 9 12:04:10.557867 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 9 12:04:10.557908 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 9 12:04:10.557949 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 12:04:10.557991 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 9 12:04:10.558034 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 9 12:04:10.558079 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 9 12:04:10.558123 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 9 12:04:10.558166 kernel: pci 0000:06:00.0: supports D1 D2 Feb 9 12:04:10.558208 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 12:04:10.558250 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 9 12:04:10.558291 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 9 12:04:10.558334 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 9 12:04:10.558379 kernel: pci_bus 0000:07: extended config space not accessible Feb 9 12:04:10.558473 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 9 12:04:10.558519 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 9 12:04:10.558564 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 9 12:04:10.558609 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 9 12:04:10.558653 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 12:04:10.558699 kernel: pci 0000:07:00.0: supports D1 D2 Feb 9 12:04:10.558744 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 12:04:10.558787 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 9 12:04:10.558830 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 9 12:04:10.558872 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 9 12:04:10.558879 kernel: ACPI: PCI: Interrupt 
link LNKA configured for IRQ 0 Feb 9 12:04:10.558885 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 9 12:04:10.558892 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 9 12:04:10.558897 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 9 12:04:10.558902 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 9 12:04:10.558907 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 9 12:04:10.558912 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 9 12:04:10.558917 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 9 12:04:10.558923 kernel: iommu: Default domain type: Translated Feb 9 12:04:10.558928 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 12:04:10.558971 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 9 12:04:10.559017 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 12:04:10.559061 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 9 12:04:10.559069 kernel: vgaarb: loaded Feb 9 12:04:10.559074 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 12:04:10.559080 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 12:04:10.559085 kernel: PTP clock support registered Feb 9 12:04:10.559090 kernel: PCI: Using ACPI for IRQ routing Feb 9 12:04:10.559095 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 12:04:10.559100 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 9 12:04:10.559107 kernel: e820: reserve RAM buffer [mem 0x8258a000-0x83ffffff] Feb 9 12:04:10.559112 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 9 12:04:10.559117 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 9 12:04:10.559122 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 9 12:04:10.559127 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 9 12:04:10.559132 kernel: clocksource: Switched to clocksource tsc-early Feb 9 12:04:10.559137 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 12:04:10.559142 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 12:04:10.559147 kernel: pnp: PnP ACPI init Feb 9 12:04:10.559193 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 9 12:04:10.559235 kernel: pnp 00:02: [dma 0 disabled] Feb 9 12:04:10.559275 kernel: pnp 00:03: [dma 0 disabled] Feb 9 12:04:10.559315 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 9 12:04:10.559353 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 9 12:04:10.559419 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 9 12:04:10.559482 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 9 12:04:10.559519 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 9 12:04:10.559556 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 9 12:04:10.559593 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 9 12:04:10.559629 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 9 12:04:10.559666 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 9 12:04:10.559704 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 9 12:04:10.559743 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 9 12:04:10.559785 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 9 12:04:10.559823 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has 
been reserved Feb 9 12:04:10.559859 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 9 12:04:10.559896 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 9 12:04:10.559932 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 9 12:04:10.559968 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 9 12:04:10.560007 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 9 12:04:10.560046 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 9 12:04:10.560054 kernel: pnp: PnP ACPI: found 10 devices Feb 9 12:04:10.560059 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 12:04:10.560065 kernel: NET: Registered PF_INET protocol family Feb 9 12:04:10.560070 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 12:04:10.560075 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 12:04:10.560080 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 12:04:10.560087 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 12:04:10.560092 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 12:04:10.560098 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 9 12:04:10.560103 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 12:04:10.560108 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 12:04:10.560113 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 12:04:10.560118 kernel: NET: Registered PF_XDP protocol family Feb 9 12:04:10.560160 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 9 12:04:10.560204 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 9 12:04:10.560245 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 9 12:04:10.560288 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 12:04:10.560332 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 12:04:10.560375 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 12:04:10.560456 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 12:04:10.560498 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 12:04:10.560539 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 9 12:04:10.560583 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 12:04:10.560624 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 9 12:04:10.560665 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 9 12:04:10.560705 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 12:04:10.560747 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 9 12:04:10.560789 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 9 12:04:10.560830 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 12:04:10.560872 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 9 12:04:10.560912 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 9 12:04:10.560954 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 9 12:04:10.560996 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 9 12:04:10.561038 kernel: pci 0000:06:00.0: bridge 
window [mem 0x94000000-0x950fffff] Feb 9 12:04:10.561079 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 9 12:04:10.561121 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 9 12:04:10.561163 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 9 12:04:10.561200 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 9 12:04:10.561237 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 12:04:10.561272 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 12:04:10.561308 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 12:04:10.561343 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 9 12:04:10.561378 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 9 12:04:10.561426 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 9 12:04:10.561467 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 12:04:10.561509 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 9 12:04:10.561546 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 9 12:04:10.561588 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 9 12:04:10.561626 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 9 12:04:10.561672 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 9 12:04:10.561713 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 9 12:04:10.561753 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 9 12:04:10.561793 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 9 12:04:10.561801 kernel: PCI: CLS 64 bytes, default 64 Feb 9 12:04:10.561806 kernel: DMAR: No ATSR found Feb 9 12:04:10.561811 kernel: DMAR: No SATC found Feb 9 12:04:10.561817 kernel: DMAR: dmar0: Using Queued invalidation Feb 9 12:04:10.561858 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 9 12:04:10.561902 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 9 12:04:10.561943 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 9 12:04:10.561984 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 9 12:04:10.562025 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 9 12:04:10.562066 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 9 12:04:10.562106 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 9 12:04:10.562146 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 9 12:04:10.562187 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 9 12:04:10.562230 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 9 12:04:10.562271 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 9 12:04:10.562312 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 9 12:04:10.562353 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 9 12:04:10.562398 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 9 12:04:10.562439 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 9 12:04:10.562481 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 9 12:04:10.562522 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 9 12:04:10.562564 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 9 12:04:10.562605 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 9 12:04:10.562647 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 9 12:04:10.562687 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 9 12:04:10.562730 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 9 12:04:10.562772 kernel: pci 0000:01:00.1: Adding 
to iommu group 1 Feb 9 12:04:10.562815 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 9 12:04:10.562859 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 9 12:04:10.562903 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 9 12:04:10.562948 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 9 12:04:10.562955 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 9 12:04:10.562961 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 12:04:10.562966 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 9 12:04:10.562972 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 9 12:04:10.562977 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 9 12:04:10.562982 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 9 12:04:10.562989 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 9 12:04:10.563032 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 9 12:04:10.563040 kernel: Initialise system trusted keyrings Feb 9 12:04:10.563046 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 9 12:04:10.563051 kernel: Key type asymmetric registered Feb 9 12:04:10.563056 kernel: Asymmetric key parser 'x509' registered Feb 9 12:04:10.563061 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 12:04:10.563066 kernel: io scheduler mq-deadline registered Feb 9 12:04:10.563073 kernel: io scheduler kyber registered Feb 9 12:04:10.563078 kernel: io scheduler bfq registered Feb 9 12:04:10.563119 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 9 12:04:10.563161 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 9 12:04:10.563203 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 9 12:04:10.563244 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 9 12:04:10.563284 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 9 12:04:10.563326 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 9 12:04:10.563372 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 9 12:04:10.563380 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 9 12:04:10.563389 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 9 12:04:10.563394 kernel: pstore: Registered erst as persistent store backend Feb 9 12:04:10.563400 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 12:04:10.563405 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 12:04:10.563410 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 12:04:10.563415 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 12:04:10.563422 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 9 12:04:10.563466 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 9 12:04:10.563474 kernel: i8042: PNP: No PS/2 controller found. 
Feb 9 12:04:10.563512 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 9 12:04:10.563550 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 9 12:04:10.563587 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T12:04:09 UTC (1707480249) Feb 9 12:04:10.563625 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 9 12:04:10.563632 kernel: fail to initialize ptp_kvm Feb 9 12:04:10.563639 kernel: intel_pstate: Intel P-state driver initializing Feb 9 12:04:10.563644 kernel: intel_pstate: Disabling energy efficiency optimization Feb 9 12:04:10.563649 kernel: intel_pstate: HWP enabled Feb 9 12:04:10.563654 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 9 12:04:10.563660 kernel: vesafb: scrolling: redraw Feb 9 12:04:10.563665 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 9 12:04:10.563670 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000cf498735, using 768k, total 768k Feb 9 12:04:10.563675 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 12:04:10.563681 kernel: fb0: VESA VGA frame buffer device Feb 9 12:04:10.563687 kernel: NET: Registered PF_INET6 protocol family Feb 9 12:04:10.563692 kernel: Segment Routing with IPv6 Feb 9 12:04:10.563697 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 12:04:10.563702 kernel: NET: Registered PF_PACKET protocol family Feb 9 12:04:10.563707 kernel: Key type dns_resolver registered Feb 9 12:04:10.563713 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 9 12:04:10.563718 kernel: microcode: Microcode Update Driver: v2.2. Feb 9 12:04:10.563723 kernel: IPI shorthand broadcast: enabled Feb 9 12:04:10.563728 kernel: sched_clock: Marking stable (1677807193, 1339663007)->(4437320182, -1419849982) Feb 9 12:04:10.563734 kernel: registered taskstats version 1 Feb 9 12:04:10.563739 kernel: Loading compiled-in X.509 certificates Feb 9 12:04:10.563745 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 12:04:10.563750 kernel: Key type .fscrypt registered Feb 9 12:04:10.563755 kernel: Key type fscrypt-provisioning registered Feb 9 12:04:10.563760 kernel: pstore: Using crash dump compression: deflate Feb 9 12:04:10.563765 kernel: ima: Allocated hash algorithm: sha1 Feb 9 12:04:10.563770 kernel: ima: No architecture policies found Feb 9 12:04:10.563775 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 12:04:10.563781 kernel: Write protecting the kernel read-only data: 28672k Feb 9 12:04:10.563787 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 12:04:10.563792 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 12:04:10.563797 kernel: Run /init as init process Feb 9 12:04:10.563802 kernel: with arguments: Feb 9 12:04:10.563807 kernel: /init Feb 9 12:04:10.563813 kernel: with environment: Feb 9 12:04:10.563818 kernel: HOME=/ Feb 9 12:04:10.563823 kernel: TERM=linux Feb 9 12:04:10.563829 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 12:04:10.563835 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 12:04:10.563842 systemd[1]: Detected architecture x86-64. Feb 9 12:04:10.563847 systemd[1]: Running in initrd. 
Feb 9 12:04:10.563853 systemd[1]: No hostname configured, using default hostname. Feb 9 12:04:10.563858 systemd[1]: Hostname set to . Feb 9 12:04:10.563863 systemd[1]: Initializing machine ID from random generator. Feb 9 12:04:10.563870 systemd[1]: Queued start job for default target initrd.target. Feb 9 12:04:10.563875 systemd[1]: Started systemd-ask-password-console.path. Feb 9 12:04:10.563881 systemd[1]: Reached target cryptsetup.target. Feb 9 12:04:10.563886 systemd[1]: Reached target paths.target. Feb 9 12:04:10.563891 systemd[1]: Reached target slices.target. Feb 9 12:04:10.563897 systemd[1]: Reached target swap.target. Feb 9 12:04:10.563902 systemd[1]: Reached target timers.target. Feb 9 12:04:10.563907 systemd[1]: Listening on iscsid.socket. Feb 9 12:04:10.563914 systemd[1]: Listening on iscsiuio.socket. Feb 9 12:04:10.563919 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 12:04:10.563925 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 12:04:10.563930 systemd[1]: Listening on systemd-journald.socket. Feb 9 12:04:10.563936 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Feb 9 12:04:10.563941 systemd[1]: Listening on systemd-networkd.socket. Feb 9 12:04:10.563946 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Feb 9 12:04:10.563952 kernel: clocksource: Switched to clocksource tsc Feb 9 12:04:10.563958 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 12:04:10.563963 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 12:04:10.563969 systemd[1]: Reached target sockets.target. Feb 9 12:04:10.563974 systemd[1]: Starting kmod-static-nodes.service... Feb 9 12:04:10.563980 systemd[1]: Finished network-cleanup.service. Feb 9 12:04:10.563985 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 12:04:10.563990 systemd[1]: Starting systemd-journald.service... Feb 9 12:04:10.563996 systemd[1]: Starting systemd-modules-load.service... Feb 9 12:04:10.564003 systemd-journald[266]: Journal started Feb 9 12:04:10.564029 systemd-journald[266]: Runtime Journal (/run/log/journal/c0f5180028514d5ab4ae4e26bd890a8f) is 8.0M, max 640.1M, 632.1M free. Feb 9 12:04:10.565916 systemd-modules-load[267]: Inserted module 'overlay' Feb 9 12:04:10.571000 audit: BPF prog-id=6 op=LOAD Feb 9 12:04:10.590419 kernel: audit: type=1334 audit(1707480250.571:2): prog-id=6 op=LOAD Feb 9 12:04:10.590434 systemd[1]: Starting systemd-resolved.service... Feb 9 12:04:10.640425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 12:04:10.640441 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 12:04:10.672421 kernel: Bridge firewalling registered Feb 9 12:04:10.672439 systemd[1]: Started systemd-journald.service. Feb 9 12:04:10.687085 systemd-modules-load[267]: Inserted module 'br_netfilter' Feb 9 12:04:10.735765 kernel: audit: type=1130 audit(1707480250.694:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:10.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:10.693349 systemd-resolved[269]: Positive Trust Anchors: Feb 9 12:04:10.792641 kernel: SCSI subsystem initialized Feb 9 12:04:10.792661 kernel: audit: type=1130 audit(1707480250.746:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:10.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:10.693354 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 12:04:10.914469 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 12:04:10.914484 kernel: audit: type=1130 audit(1707480250.818:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:10.914492 kernel: device-mapper: uevent: version 1.0.3 Feb 9 12:04:10.914498 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 12:04:10.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:10.693374 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 12:04:10.986640 kernel: audit: type=1130 audit(1707480250.913:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:10.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:10.694885 systemd-resolved[269]: Defaulting to hostname 'linux'. Feb 9 12:04:10.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:10.695593 systemd[1]: Started systemd-resolved.service. Feb 9 12:04:11.095724 kernel: audit: type=1130 audit(1707480250.994:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:11.095744 kernel: audit: type=1130 audit(1707480251.048:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:11.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:10.747553 systemd[1]: Finished kmod-static-nodes.service. Feb 9 12:04:10.819534 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 12:04:10.915305 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 12:04:10.960440 systemd-modules-load[267]: Inserted module 'dm_multipath' Feb 9 12:04:10.995749 systemd[1]: Finished systemd-modules-load.service. Feb 9 12:04:11.049674 systemd[1]: Reached target nss-lookup.target. Feb 9 12:04:11.105074 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 12:04:11.119050 systemd[1]: Starting systemd-sysctl.service... Feb 9 12:04:11.119389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 12:04:11.122235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 12:04:11.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:11.122988 systemd[1]: Finished systemd-sysctl.service. Feb 9 12:04:11.220119 kernel: audit: type=1130 audit(1707480251.120:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:11.220137 kernel: audit: type=1130 audit(1707480251.170:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:11.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:11.171852 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 12:04:11.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:11.229002 systemd[1]: Starting dracut-cmdline.service... Feb 9 12:04:11.249489 dracut-cmdline[292]: dracut-dracut-053 Feb 9 12:04:11.249489 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 12:04:11.249489 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 12:04:11.316478 kernel: Loading iSCSI transport class v2.0-870. Feb 9 12:04:11.316491 kernel: iscsi: registered transport (tcp) Feb 9 12:04:11.367354 kernel: iscsi: registered transport (qla4xxx) Feb 9 12:04:11.367369 kernel: QLogic iSCSI HBA Driver Feb 9 12:04:11.383432 systemd[1]: Finished dracut-cmdline.service. Feb 9 12:04:11.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:11.383949 systemd[1]: Starting dracut-pre-udev.service... 
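The dracut-cmdline entries above simply echo back the kernel command line so later initrd generators can consume parameters such as root=, mount.usr= and verity.usrhash=. As a rough illustration only (not dracut's actual shell parser), splitting such a line into key/value pairs looks like this in Python; repeated keys (e.g. the two console= entries) keep only their last value and quoting is ignored:

    # Rough sketch: split a kernel command line like the one logged above into
    # key/value pairs. Hypothetical helper, not dracut's real parser.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True  # bare flags become True
        return params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:          # same text the log shows
            args = parse_cmdline(f.read())
        print(args.get("root"), args.get("verity.usrhash"))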
Feb 9 12:04:11.440424 kernel: raid6: avx2x4 gen() 48955 MB/s Feb 9 12:04:11.476463 kernel: raid6: avx2x4 xor() 20567 MB/s Feb 9 12:04:11.511416 kernel: raid6: avx2x2 gen() 52535 MB/s Feb 9 12:04:11.546459 kernel: raid6: avx2x2 xor() 32170 MB/s Feb 9 12:04:11.581419 kernel: raid6: avx2x1 gen() 45270 MB/s Feb 9 12:04:11.616418 kernel: raid6: avx2x1 xor() 27960 MB/s Feb 9 12:04:11.650417 kernel: raid6: sse2x4 gen() 21380 MB/s Feb 9 12:04:11.684462 kernel: raid6: sse2x4 xor() 12000 MB/s Feb 9 12:04:11.718459 kernel: raid6: sse2x2 gen() 21656 MB/s Feb 9 12:04:11.752458 kernel: raid6: sse2x2 xor() 13457 MB/s Feb 9 12:04:11.786463 kernel: raid6: sse2x1 gen() 18306 MB/s Feb 9 12:04:11.838287 kernel: raid6: sse2x1 xor() 8928 MB/s Feb 9 12:04:11.838303 kernel: raid6: using algorithm avx2x2 gen() 52535 MB/s Feb 9 12:04:11.838311 kernel: raid6: .... xor() 32170 MB/s, rmw enabled Feb 9 12:04:11.856458 kernel: raid6: using avx2x2 recovery algorithm Feb 9 12:04:11.902389 kernel: xor: automatically using best checksumming function avx Feb 9 12:04:11.981418 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 12:04:11.986164 systemd[1]: Finished dracut-pre-udev.service. Feb 9 12:04:11.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:11.994000 audit: BPF prog-id=7 op=LOAD Feb 9 12:04:11.994000 audit: BPF prog-id=8 op=LOAD Feb 9 12:04:11.996460 systemd[1]: Starting systemd-udevd.service... Feb 9 12:04:12.005354 systemd-udevd[472]: Using default interface naming scheme 'v252'. Feb 9 12:04:12.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:12.009529 systemd[1]: Started systemd-udevd.service. Feb 9 12:04:12.049498 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Feb 9 12:04:12.027376 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 12:04:12.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:12.054247 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 12:04:12.066129 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 12:04:12.114685 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 12:04:12.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:12.142393 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 12:04:12.144395 kernel: libata version 3.00 loaded. Feb 9 12:04:12.144420 kernel: ACPI: bus type USB registered Feb 9 12:04:12.179016 kernel: usbcore: registered new interface driver usbfs Feb 9 12:04:12.179051 kernel: usbcore: registered new interface driver hub Feb 9 12:04:12.197391 kernel: usbcore: registered new device driver usb Feb 9 12:04:12.248527 kernel: AVX2 version of gcm_enc/dec engaged. 
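The raid6 lines above are the kernel benchmarking each available gen()/xor() implementation and keeping the one with the best gen() throughput (avx2x2 at 52535 MB/s on this machine). The selection itself is just a maximum over the measured rates; a toy reproduction using the numbers printed in this boot:

    # Toy reproduction of the raid6 pick above: keep the implementation with
    # the highest measured gen() throughput (MB/s values are the ones logged).
    gen_mb_s = {
        "avx2x4": 48955, "avx2x2": 52535, "avx2x1": 45270,
        "sse2x4": 21380, "sse2x2": 21656, "sse2x1": 18306,
    }
    best = max(gen_mb_s, key=gen_mb_s.get)
    print(f"raid6: using algorithm {best} gen() {gen_mb_s[best]} MB/s")
    # -> raid6: using algorithm avx2x2 gen() 52535 MB/s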
Feb 9 12:04:12.248557 kernel: AES CTR mode by8 optimization enabled Feb 9 12:04:12.248567 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 9 12:04:12.286676 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 12:04:12.287390 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 12:04:12.320094 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 9 12:04:12.320391 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 12:04:12.359781 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 9 12:04:12.359859 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 12:04:12.391691 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 12:04:12.391796 kernel: pps pps0: new PPS source ptp0 Feb 9 12:04:12.391862 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 12:04:12.391915 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 9 12:04:12.393390 kernel: scsi host0: ahci Feb 9 12:04:12.393481 kernel: scsi host1: ahci Feb 9 12:04:12.393571 kernel: scsi host2: ahci Feb 9 12:04:12.393638 kernel: scsi host3: ahci Feb 9 12:04:12.393694 kernel: scsi host4: ahci Feb 9 12:04:12.393743 kernel: scsi host5: ahci Feb 9 12:04:12.393792 kernel: scsi host6: ahci Feb 9 12:04:12.393840 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Feb 9 12:04:12.393850 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Feb 9 12:04:12.393857 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Feb 9 12:04:12.393863 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Feb 9 12:04:12.393869 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Feb 9 12:04:12.393876 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Feb 9 12:04:12.393882 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Feb 9 12:04:12.436758 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 12:04:12.436827 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 12:04:12.448903 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 12:04:12.448972 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b6 Feb 9 12:04:12.460597 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 12:04:12.482648 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 9 12:04:12.482719 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 9 12:04:12.492978 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 9 12:04:12.531443 kernel: pps pps1: new PPS source ptp1 Feb 9 12:04:12.545389 kernel: hub 1-0:1.0: USB hub found Feb 9 12:04:12.545470 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 12:04:12.545529 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 9 12:04:12.560414 kernel: hub 1-0:1.0: 16 ports detected Feb 9 12:04:12.560489 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 12:04:12.585547 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 12:04:12.598773 kernel: hub 2-0:1.0: USB hub found Feb 9 12:04:12.598859 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b7 Feb 9 12:04:12.598916 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 9 12:04:12.627274 kernel: hub 2-0:1.0: 10 ports detected Feb 9 12:04:12.627348 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 12:04:12.652787 kernel: usb: port power management may be unreliable Feb 9 12:04:12.709388 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 12:04:12.762387 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 12:04:12.762467 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 12:04:12.791416 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 9 12:04:12.791491 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 12:04:12.817936 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 12:04:12.818000 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 12:04:12.832425 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 12:04:12.968433 kernel: hub 1-14:1.0: USB hub found Feb 9 12:04:12.968515 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 9 12:04:13.003429 kernel: hub 1-14:1.0: 4 ports detected Feb 9 12:04:13.003538 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 12:04:13.075440 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 12:04:13.089449 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 12:04:13.104417 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 12:04:13.138419 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 12:04:13.138513 kernel: ata1.00: Features: NCQ-prio Feb 9 12:04:13.152436 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 12:04:13.169421 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 12:04:13.188394 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 12:04:13.218935 kernel: ata2.00: Features: NCQ-prio Feb 9 12:04:13.218948 kernel: ata1.00: configured for UDMA/133 Feb 9 12:04:13.234421 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 12:04:13.253434 kernel: ata2.00: configured for UDMA/133 Feb 9 12:04:13.268442 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 12:04:13.306389 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 9 12:04:13.327527 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 12:04:13.327544 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 9 12:04:13.327561 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 12:04:13.346390 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 9 
12:04:13.363409 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 12:04:13.363635 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 12:04:13.363787 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Feb 9 12:04:13.363938 kernel: sd 0:0:0:0: [sdb] Write Protect is off Feb 9 12:04:13.364089 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 12:04:13.364251 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 12:04:13.364407 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 12:04:13.371450 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 12:04:13.519441 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Feb 9 12:04:13.519519 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 12:04:13.519528 kernel: sd 1:0:0:0: [sda] Write Protect is off Feb 9 12:04:13.551456 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 12:04:13.551532 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 12:04:13.551591 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 12:04:13.604443 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 12:04:13.604459 kernel: GPT:9289727 != 937703087 Feb 9 12:04:13.634141 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 12:04:13.634156 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 12:04:13.634164 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Feb 9 12:04:13.666049 kernel: GPT:9289727 != 937703087 Feb 9 12:04:13.695607 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 12:04:13.695623 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 12:04:13.725974 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 12:04:13.725990 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Feb 9 12:04:13.759390 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 9 12:04:13.768233 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 12:04:13.869505 kernel: usbcore: registered new interface driver usbhid Feb 9 12:04:13.869522 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (526) Feb 9 12:04:13.869533 kernel: usbhid: USB HID core driver Feb 9 12:04:13.869539 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 12:04:13.869546 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 9 12:04:13.829488 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 12:04:13.883344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 12:04:13.926062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 12:04:14.000308 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 12:04:14.000448 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 12:04:14.000457 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 12:04:13.971771 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 12:04:14.010179 systemd[1]: Starting disk-uuid.service... 
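The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 937703087) mean the backup GPT structures on /dev/sdb still sit where a smaller source image left them; disk-uuid.service rewrites them at the real end of the device in the entries that follow. A hedged sketch of the underlying check, reading the primary header at LBA 1 and comparing its backup-LBA field with the device's actual last LBA (assumes the 512-byte logical blocks reported above, needs root):

    import struct

    SECTOR = 512  # logical block size reported for sda/sdb above

    def gpt_backup_location(dev="/dev/sdb"):
        # Read the primary GPT header (LBA 1) and return (backup_lba, last_lba).
        with open(dev, "rb") as f:
            f.seek(SECTOR)                        # LBA 1
            hdr = f.read(92)                      # fixed-size header fields
            if hdr[:8] != b"EFI PART":
                raise ValueError("no GPT signature at LBA 1")
            backup_lba = struct.unpack_from("<Q", hdr, 32)[0]
            f.seek(0, 2)                          # end of device
            last_lba = f.tell() // SECTOR - 1
        return backup_lba, last_lba

    if __name__ == "__main__":
        backup, last = gpt_backup_location()
        if backup != last:
            print(f"GPT: {backup} != {last} (backup header not at end of disk)")

The usual repair is to relocate the backup structures to the end of the disk (sgdisk -e, or GNU Parted as the kernel message suggests), which matches the header rewrite and repeated partition rescans logged next.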
Feb 9 12:04:14.063507 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 12:04:14.063518 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 12:04:14.063524 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 12:04:14.063573 disk-uuid[691]: Primary Header is updated. Feb 9 12:04:14.063573 disk-uuid[691]: Secondary Entries is updated. Feb 9 12:04:14.063573 disk-uuid[691]: Secondary Header is updated. Feb 9 12:04:14.122456 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 12:04:14.122468 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 12:04:14.122475 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 12:04:15.109374 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 12:04:15.129127 disk-uuid[692]: The operation has completed successfully. Feb 9 12:04:15.138471 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 12:04:15.167569 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 12:04:15.280945 kernel: kauditd_printk_skb: 10 callbacks suppressed Feb 9 12:04:15.280959 kernel: audit: type=1130 audit(1707480255.174:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.280968 kernel: audit: type=1131 audit(1707480255.174:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.167626 systemd[1]: Finished disk-uuid.service. Feb 9 12:04:15.310468 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 12:04:15.178832 systemd[1]: Starting verity-setup.service... Feb 9 12:04:15.341612 systemd[1]: Found device dev-mapper-usr.device. Feb 9 12:04:15.351349 systemd[1]: Mounting sysusr-usr.mount... Feb 9 12:04:15.358601 systemd[1]: Finished verity-setup.service. Feb 9 12:04:15.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.424389 kernel: audit: type=1130 audit(1707480255.376:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.453221 systemd[1]: Mounted sysusr-usr.mount. Feb 9 12:04:15.469491 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 12:04:15.461680 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 12:04:15.462074 systemd[1]: Starting ignition-setup.service... Feb 9 12:04:15.567648 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 9 12:04:15.567662 kernel: BTRFS info (device sdb6): using free space tree Feb 9 12:04:15.567670 kernel: BTRFS info (device sdb6): has skinny extents Feb 9 12:04:15.567677 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 9 12:04:15.469867 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 12:04:15.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.560806 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 12:04:15.689568 kernel: audit: type=1130 audit(1707480255.576:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.689584 kernel: audit: type=1130 audit(1707480255.632:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.577740 systemd[1]: Finished ignition-setup.service. Feb 9 12:04:15.696000 audit: BPF prog-id=9 op=LOAD Feb 9 12:04:15.634086 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 12:04:15.734518 kernel: audit: type=1334 audit(1707480255.696:24): prog-id=9 op=LOAD Feb 9 12:04:15.698323 systemd[1]: Starting systemd-networkd.service... Feb 9 12:04:15.734937 systemd-networkd[881]: lo: Link UP Feb 9 12:04:15.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.753462 ignition[869]: Ignition 2.14.0 Feb 9 12:04:15.815600 kernel: audit: type=1130 audit(1707480255.750:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.734939 systemd-networkd[881]: lo: Gained carrier Feb 9 12:04:15.753466 ignition[869]: Stage: fetch-offline Feb 9 12:04:15.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.735218 systemd-networkd[881]: Enumeration completed Feb 9 12:04:15.973623 kernel: audit: type=1130 audit(1707480255.825:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.973640 kernel: audit: type=1130 audit(1707480255.885:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.973647 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 12:04:15.973730 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Feb 9 12:04:15.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.753492 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 12:04:15.735259 systemd[1]: Started systemd-networkd.service. 
Feb 9 12:04:15.753505 ignition[869]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 12:04:15.736090 systemd-networkd[881]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 12:04:15.761845 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 12:04:16.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.751508 systemd[1]: Reached target network.target. Feb 9 12:04:16.095605 kernel: audit: type=1130 audit(1707480256.024:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:16.095619 iscsid[911]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 12:04:16.095619 iscsid[911]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 12:04:16.095619 iscsid[911]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 12:04:16.095619 iscsid[911]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 12:04:16.095619 iscsid[911]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 12:04:16.095619 iscsid[911]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 12:04:16.095619 iscsid[911]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 12:04:16.227681 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 9 12:04:16.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.761909 ignition[869]: parsed url from cmdline: "" Feb 9 12:04:15.783545 unknown[869]: fetched base config from "system" Feb 9 12:04:16.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:15.761911 ignition[869]: no config URL provided Feb 9 12:04:15.783549 unknown[869]: fetched user config from "system" Feb 9 12:04:15.761914 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 12:04:15.810890 systemd[1]: Starting iscsiuio.service... Feb 9 12:04:15.761944 ignition[869]: parsing config with SHA512: 1506fb73ae8feca53437551275527b70b2e0dec6723954146e3eaed198a1e233c146f5ac69472751dfd45f72fcb3e19f4a5424d848b62e619556b1a3c973b98b Feb 9 12:04:15.815741 systemd[1]: Started iscsiuio.service. Feb 9 12:04:15.784320 ignition[869]: fetch-offline: fetch-offline passed Feb 9 12:04:15.826736 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 12:04:15.784324 ignition[869]: POST message to Packet Timeline Feb 9 12:04:15.886637 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 12:04:15.784329 ignition[869]: POST Status error: resource requires networking Feb 9 12:04:15.887093 systemd[1]: Starting ignition-kargs.service... Feb 9 12:04:15.784361 ignition[869]: Ignition finished successfully Feb 9 12:04:15.961616 systemd-networkd[881]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 12:04:15.964563 ignition[899]: Ignition 2.14.0 Feb 9 12:04:15.993095 systemd[1]: Starting iscsid.service... Feb 9 12:04:15.964567 ignition[899]: Stage: kargs Feb 9 12:04:16.013548 systemd[1]: Started iscsid.service. Feb 9 12:04:15.964622 ignition[899]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 12:04:16.025978 systemd[1]: Starting dracut-initqueue.service... Feb 9 12:04:15.964631 ignition[899]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 12:04:16.087705 systemd[1]: Finished dracut-initqueue.service. Feb 9 12:04:15.967013 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 12:04:16.103606 systemd[1]: Reached target remote-fs-pre.target. Feb 9 12:04:15.967792 ignition[899]: kargs: kargs passed Feb 9 12:04:16.153462 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 12:04:15.967795 ignition[899]: POST message to Packet Timeline Feb 9 12:04:16.157347 systemd[1]: Reached target remote-fs.target. Feb 9 12:04:15.967804 ignition[899]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 12:04:16.158862 systemd-networkd[881]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 12:04:15.969576 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50151->[::1]:53: read: connection refused Feb 9 12:04:16.184190 systemd[1]: Starting dracut-pre-mount.service... Feb 9 12:04:16.170073 ignition[899]: GET https://metadata.packet.net/metadata: attempt #2 Feb 9 12:04:16.186336 systemd-networkd[881]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 12:04:16.170321 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36499->[::1]:53: read: connection refused Feb 9 12:04:16.215065 systemd-networkd[881]: enp1s0f1np1: Link UP Feb 9 12:04:16.215158 systemd-networkd[881]: enp1s0f1np1: Gained carrier Feb 9 12:04:16.571177 ignition[899]: GET https://metadata.packet.net/metadata: attempt #3 Feb 9 12:04:16.215530 systemd[1]: Finished dracut-pre-mount.service. 
Feb 9 12:04:16.572280 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45713->[::1]:53: read: connection refused Feb 9 12:04:16.220534 systemd-networkd[881]: enp1s0f0np0: Link UP Feb 9 12:04:16.220619 systemd-networkd[881]: eno2: Link UP Feb 9 12:04:16.220696 systemd-networkd[881]: eno1: Link UP Feb 9 12:04:16.996418 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Feb 9 12:04:16.996443 systemd-networkd[881]: enp1s0f0np0: Gained carrier Feb 9 12:04:17.029595 systemd-networkd[881]: enp1s0f0np0: DHCPv4 address 139.178.89.23/31, gateway 139.178.89.22 acquired from 145.40.83.140 Feb 9 12:04:17.372814 ignition[899]: GET https://metadata.packet.net/metadata: attempt #4 Feb 9 12:04:17.374220 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50285->[::1]:53: read: connection refused Feb 9 12:04:17.676793 systemd-networkd[881]: enp1s0f1np1: Gained IPv6LL Feb 9 12:04:18.956961 systemd-networkd[881]: enp1s0f0np0: Gained IPv6LL Feb 9 12:04:18.975679 ignition[899]: GET https://metadata.packet.net/metadata: attempt #5 Feb 9 12:04:18.976774 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34538->[::1]:53: read: connection refused Feb 9 12:04:22.180416 ignition[899]: GET https://metadata.packet.net/metadata: attempt #6 Feb 9 12:04:22.214686 ignition[899]: GET result: OK Feb 9 12:04:22.425322 ignition[899]: Ignition finished successfully Feb 9 12:04:22.427777 systemd[1]: Finished ignition-kargs.service. Feb 9 12:04:22.518002 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 12:04:22.518023 kernel: audit: type=1130 audit(1707480262.439:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:22.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:22.451618 ignition[928]: Ignition 2.14.0 Feb 9 12:04:22.442655 systemd[1]: Starting ignition-disks.service... Feb 9 12:04:22.451622 ignition[928]: Stage: disks Feb 9 12:04:22.451678 ignition[928]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 12:04:22.451689 ignition[928]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 12:04:22.453600 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 12:04:22.454337 ignition[928]: disks: disks passed Feb 9 12:04:22.454340 ignition[928]: POST message to Packet Timeline Feb 9 12:04:22.454350 ignition[928]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 12:04:22.477731 ignition[928]: GET result: OK Feb 9 12:04:22.660231 ignition[928]: Ignition finished successfully Feb 9 12:04:22.662515 systemd[1]: Finished ignition-disks.service. Feb 9 12:04:22.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:22.675984 systemd[1]: Reached target initrd-root-device.target. 
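The repeated metadata fetch failures above are expected: during the kargs and disks stages Ignition POSTs to the Packet timeline and pulls https://metadata.packet.net/metadata, but DNS lookups against [::1]:53 keep failing until the bonded NICs come up and DHCP completes, so attempt #6 is the first to return OK. A hedged sketch of that retry-until-the-network-exists pattern (hypothetical helper with a simple exponential backoff, not Ignition's actual schedule):

    import time
    import urllib.request

    def fetch_with_retries(url="https://metadata.packet.net/metadata",
                           attempts=6, base_delay=1.0):
        # Retry while DNS/the network is still coming up, roughly like the
        # attempt #1..#6 sequence in the log above.
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError as err:                # DNS and socket errors land here
                print(f"GET {url}: attempt #{attempt} failed: {err}")
                time.sleep(base_delay * 2 ** (attempt - 1))
        raise RuntimeError("metadata endpoint unreachable after retries")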
Feb 9 12:04:22.762625 kernel: audit: type=1130 audit(1707480262.674:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:22.748569 systemd[1]: Reached target local-fs-pre.target. Feb 9 12:04:22.748608 systemd[1]: Reached target local-fs.target. Feb 9 12:04:22.770592 systemd[1]: Reached target sysinit.target. Feb 9 12:04:22.784546 systemd[1]: Reached target basic.target. Feb 9 12:04:22.785192 systemd[1]: Starting systemd-fsck-root.service... Feb 9 12:04:22.810995 systemd-fsck[944]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 12:04:22.831166 systemd[1]: Finished systemd-fsck-root.service. Feb 9 12:04:22.920150 kernel: audit: type=1130 audit(1707480262.838:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:22.920164 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 12:04:22.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:22.840019 systemd[1]: Mounting sysroot.mount... Feb 9 12:04:22.928015 systemd[1]: Mounted sysroot.mount. Feb 9 12:04:22.941653 systemd[1]: Reached target initrd-root-fs.target. Feb 9 12:04:22.949300 systemd[1]: Mounting sysroot-usr.mount... Feb 9 12:04:22.974232 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 12:04:22.982913 systemd[1]: Starting flatcar-static-network.service... Feb 9 12:04:22.999517 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 12:04:22.999632 systemd[1]: Reached target ignition-diskful.target. Feb 9 12:04:23.018406 systemd[1]: Mounted sysroot-usr.mount. Feb 9 12:04:23.043194 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 12:04:23.054990 systemd[1]: Starting initrd-setup-root.service... Feb 9 12:04:23.184661 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (955) Feb 9 12:04:23.184676 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 9 12:04:23.184690 kernel: BTRFS info (device sdb6): using free space tree Feb 9 12:04:23.184699 kernel: BTRFS info (device sdb6): has skinny extents Feb 9 12:04:23.184706 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 9 12:04:23.184713 initrd-setup-root[962]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 12:04:23.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.106673 systemd[1]: Finished initrd-setup-root.service. Feb 9 12:04:23.312683 kernel: audit: type=1130 audit(1707480263.192:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.312697 kernel: audit: type=1130 audit(1707480263.255:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:23.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.312724 coreos-metadata[951]: Feb 09 12:04:23.097 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 12:04:23.312724 coreos-metadata[951]: Feb 09 12:04:23.120 INFO Fetch successful Feb 9 12:04:23.312724 coreos-metadata[951]: Feb 09 12:04:23.137 INFO wrote hostname ci-3510.3.2-a-b58f4ff548 to /sysroot/etc/hostname Feb 9 12:04:23.465648 kernel: audit: type=1130 audit(1707480263.320:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.465661 kernel: audit: type=1131 audit(1707480263.320:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.465703 coreos-metadata[952]: Feb 09 12:04:23.097 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 12:04:23.465703 coreos-metadata[952]: Feb 09 12:04:23.119 INFO Fetch successful Feb 9 12:04:23.500484 initrd-setup-root[970]: cut: /sysroot/etc/group: No such file or directory Feb 9 12:04:23.194694 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 12:04:23.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.546586 initrd-setup-root[978]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 12:04:23.585632 kernel: audit: type=1130 audit(1707480263.517:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.256700 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 9 12:04:23.595644 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 12:04:23.256739 systemd[1]: Finished flatcar-static-network.service. Feb 9 12:04:23.321653 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
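The coreos-metadata entries above fetch the Packet metadata and persist the discovered hostname (ci-3510.3.2-a-b58f4ff548) into /sysroot/etc/hostname for the real root. A minimal sketch of that step, assuming the endpoint returns JSON with a top-level "hostname" field (field name and error handling are assumptions, not the flatcar-metadata-hostname implementation):

    import json
    import urllib.request

    def write_hostname(sysroot="/sysroot",
                       url="https://metadata.packet.net/metadata"):
        # Fetch instance metadata and write its hostname for the real root,
        # mirroring the "wrote hostname ... to /sysroot/etc/hostname" entry.
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)
        hostname = meta["hostname"]               # assumed field name
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname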
Feb 9 12:04:23.622614 ignition[1026]: INFO : Ignition 2.14.0 Feb 9 12:04:23.622614 ignition[1026]: INFO : Stage: mount Feb 9 12:04:23.622614 ignition[1026]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 12:04:23.622614 ignition[1026]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 12:04:23.622614 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 12:04:23.622614 ignition[1026]: INFO : mount: mount passed Feb 9 12:04:23.622614 ignition[1026]: INFO : POST message to Packet Timeline Feb 9 12:04:23.622614 ignition[1026]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 12:04:23.622614 ignition[1026]: INFO : GET result: OK Feb 9 12:04:23.444015 systemd[1]: Starting ignition-mount.service... Feb 9 12:04:23.472937 systemd[1]: Starting sysroot-boot.service... Feb 9 12:04:23.487848 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 12:04:23.488075 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 12:04:23.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.811057 ignition[1026]: INFO : Ignition finished successfully Feb 9 12:04:23.825480 kernel: audit: type=1130 audit(1707480263.743:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:23.503577 systemd[1]: Finished sysroot-boot.service. Feb 9 12:04:23.732094 systemd[1]: Finished ignition-mount.service. Feb 9 12:04:23.961602 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1044) Feb 9 12:04:23.961614 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 9 12:04:23.961621 kernel: BTRFS info (device sdb6): using free space tree Feb 9 12:04:23.961628 kernel: BTRFS info (device sdb6): has skinny extents Feb 9 12:04:23.961634 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 9 12:04:23.746763 systemd[1]: Starting ignition-files.service... Feb 9 12:04:23.819408 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 12:04:23.957953 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
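In the files stage that follows, Ignition downloads helm, the CNI plugins, crictl, kubectl, kubelet and kubeadm and writes each into /sysroot only after the payload "matches expected sum of" a SHA-512 digest pinned in the config. A small sketch of that verify-before-write pattern (URL, digest and destination are placeholders, not values from this config):

    import hashlib
    import urllib.request

    def fetch_verified(url, expected_sha512, dest):
        # Download, hash, and refuse to install anything whose SHA-512 does not
        # match the pinned digest -- placeholder arguments, illustration only.
        data = urllib.request.urlopen(url, timeout=30).read()
        digest = hashlib.sha512(data).hexdigest()
        if digest != expected_sha512:
            raise ValueError(f"{url}: digest mismatch ({digest})")
        with open(dest, "wb") as f:
            f.write(data)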
Feb 9 12:04:23.993495 ignition[1063]: INFO : Ignition 2.14.0 Feb 9 12:04:23.993495 ignition[1063]: INFO : Stage: files Feb 9 12:04:23.993495 ignition[1063]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 12:04:23.993495 ignition[1063]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 12:04:23.993495 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 12:04:23.993495 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Feb 9 12:04:23.993495 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 12:04:23.993495 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 12:04:23.993495 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 12:04:23.993495 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 12:04:23.993495 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 12:04:23.993495 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 12:04:23.993495 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 12:04:23.982239 unknown[1063]: wrote ssh authorized keys file for user: core Feb 9 12:04:24.162638 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 12:04:24.162638 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 12:04:24.162638 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 12:04:24.162638 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 12:04:24.162638 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 12:04:24.162638 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 12:04:24.401365 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 12:04:24.478873 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 12:04:24.503658 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 12:04:24.503658 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 12:04:24.503658 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 12:04:24.749993 
ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 12:04:24.799053 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 12:04:24.799053 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 12:04:24.840616 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 12:04:24.840616 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 12:04:24.872543 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 12:04:25.161834 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 12:04:25.161834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 12:04:25.203647 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 12:04:25.203647 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 12:04:25.235456 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 12:04:25.556837 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 12:04:25.556837 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 12:04:25.599598 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 12:04:25.599598 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 12:04:25.631505 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 12:04:25.732085 ignition[1063]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 12:04:25.732085 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 12:04:25.773614 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 12:04:25.773614 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 12:04:25.773614 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 12:04:25.773614 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: 
op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 12:04:26.154447 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 9 12:04:26.185769 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 12:04:26.185769 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 12:04:26.234468 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1087) Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem318049776" Feb 9 12:04:26.234488 ignition[1063]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem318049776": device or resource busy Feb 9 12:04:26.234488 ignition[1063]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem318049776", trying btrfs: device or resource busy Feb 9 12:04:26.234488 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem318049776" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem318049776" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem318049776" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(11): 
op(14): [finished] unmounting "/mnt/oem318049776" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(16): [started] processing unit "packet-phone-home.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(16): [finished] processing unit "packet-phone-home.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(17): [started] processing unit "containerd.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(17): [finished] processing unit "containerd.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(1b): [started] processing unit "prepare-critools.service" Feb 9 12:04:26.491713 ignition[1063]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 12:04:26.910604 kernel: audit: type=1130 audit(1707480266.515:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:26.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(1d): [started] processing unit "prepare-helm.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(22): [started] setting preset to enabled for "packet-phone-home.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(22): [finished] setting preset to enabled for "packet-phone-home.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 12:04:26.910986 ignition[1063]: INFO : files: files passed Feb 9 12:04:26.910986 ignition[1063]: INFO : POST message to Packet Timeline Feb 9 12:04:26.910986 ignition[1063]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 12:04:26.910986 ignition[1063]: INFO : GET result: OK Feb 9 12:04:26.910986 ignition[1063]: INFO : Ignition finished successfully Feb 9 12:04:26.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:27.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.501873 systemd[1]: Finished ignition-files.service. Feb 9 12:04:26.522082 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 12:04:27.413825 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 12:04:27.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.583673 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 12:04:27.526663 kernel: kauditd_printk_skb: 11 callbacks suppressed Feb 9 12:04:27.526678 kernel: audit: type=1131 audit(1707480267.444:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.584020 systemd[1]: Starting ignition-quench.service... Feb 9 12:04:27.594776 kernel: audit: type=1131 audit(1707480267.534:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.623785 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 12:04:26.646900 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 12:04:26.647004 systemd[1]: Finished ignition-quench.service. Feb 9 12:04:26.668001 systemd[1]: Reached target ignition-complete.target. Feb 9 12:04:26.690431 systemd[1]: Starting initrd-parse-etc.service... Feb 9 12:04:27.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:27.726820 ignition[1113]: INFO : Ignition 2.14.0 Feb 9 12:04:27.726820 ignition[1113]: INFO : Stage: umount Feb 9 12:04:27.726820 ignition[1113]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 12:04:27.726820 ignition[1113]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 12:04:27.726820 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 12:04:27.726820 ignition[1113]: INFO : umount: umount passed Feb 9 12:04:27.726820 ignition[1113]: INFO : POST message to Packet Timeline Feb 9 12:04:27.726820 ignition[1113]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 12:04:27.726820 ignition[1113]: INFO : GET result: OK Feb 9 12:04:28.193592 kernel: audit: type=1131 audit(1707480267.660:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.193609 kernel: audit: type=1131 audit(1707480267.734:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.193617 kernel: audit: type=1131 audit(1707480267.800:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.193624 kernel: audit: type=1131 audit(1707480267.867:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.193631 kernel: audit: type=1131 audit(1707480267.934:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.193639 kernel: audit: type=1131 audit(1707480268.000:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.193646 kernel: audit: type=1334 audit(1707480268.000:60): prog-id=6 op=UNLOAD Feb 9 12:04:28.193652 kernel: audit: type=1131 audit(1707480268.119:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:28.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.000000 audit: BPF prog-id=6 op=UNLOAD Feb 9 12:04:28.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.725625 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 12:04:28.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.211677 ignition[1113]: INFO : Ignition finished successfully Feb 9 12:04:28.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.725688 systemd[1]: Finished initrd-parse-etc.service. Feb 9 12:04:26.738810 systemd[1]: Reached target initrd-fs.target. Feb 9 12:04:28.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.771689 systemd[1]: Reached target initrd.target. Feb 9 12:04:28.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.789934 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 12:04:28.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.792154 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 12:04:26.825425 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 12:04:28.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.838466 systemd[1]: Starting initrd-cleanup.service... Feb 9 12:04:28.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.868311 systemd[1]: Stopped target network.target. Feb 9 12:04:26.893676 systemd[1]: Stopped target nss-lookup.target. Feb 9 12:04:26.918778 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 12:04:26.944139 systemd[1]: Stopped target timers.target. 
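The Ignition files stage logged above (ops b through 24: fetching the cilium-cli tarball, writing files under /sysroot, adding the containerd drop-in, and enabling the prepare-* units via presets) is driven by a declarative Ignition config that is not reproduced in this log. Purely as an illustration of the shape of such a config, a spec-3.x fragment requesting similar operations might look like the following; the version string and the drop-in body are placeholders, not values recovered from this host:

  {
    "ignition": { "version": "3.3.0" },
    "storage": {
      "files": [
        { "path": "/opt/bin/cilium.tar.gz",
          "contents": { "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz" } },
        { "path": "/etc/flatcar/update.conf" }
      ]
    },
    "systemd": {
      "units": [
        { "name": "containerd.service",
          "dropins": [ { "name": "10-use-cgroupfs.conf",
                         "contents": "[Service]\n# body not shown in this log\n" } ] },
        { "name": "prepare-cni-plugins.service", "enabled": true }
      ]
    }
  }

Ignition runs from the initramfs while the real root is still mounted at /sysroot, which is why the paths in the log appear as /sysroot/opt/bin/cilium.tar.gz rather than /opt/bin/cilium.tar.gz.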
Feb 9 12:04:28.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.963070 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 12:04:28.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.963455 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 12:04:28.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:26.982269 systemd[1]: Stopped target initrd.target. Feb 9 12:04:28.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.007078 systemd[1]: Stopped target basic.target. Feb 9 12:04:28.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.032084 systemd[1]: Stopped target ignition-complete.target. Feb 9 12:04:28.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.050986 systemd[1]: Stopped target ignition-diskful.target. Feb 9 12:04:28.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:28.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.072077 systemd[1]: Stopped target initrd-root-device.target. Feb 9 12:04:27.092995 systemd[1]: Stopped target remote-fs.target. Feb 9 12:04:27.114074 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 12:04:27.135108 systemd[1]: Stopped target sysinit.target. Feb 9 12:04:27.158092 systemd[1]: Stopped target local-fs.target. Feb 9 12:04:27.181076 systemd[1]: Stopped target local-fs-pre.target. Feb 9 12:04:27.202077 systemd[1]: Stopped target swap.target. Feb 9 12:04:27.221967 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 12:04:27.222331 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 12:04:27.244190 systemd[1]: Stopped target cryptsetup.target. Feb 9 12:04:27.265838 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 12:04:27.266199 systemd[1]: Stopped dracut-initqueue.service. Feb 9 12:04:27.289233 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 12:04:27.289607 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 12:04:28.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:27.312257 systemd[1]: Stopped target paths.target. 
Feb 9 12:04:27.325835 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 12:04:27.329742 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 12:04:27.341985 systemd[1]: Stopped target slices.target. Feb 9 12:04:28.654000 audit: BPF prog-id=5 op=UNLOAD Feb 9 12:04:28.654000 audit: BPF prog-id=4 op=UNLOAD Feb 9 12:04:28.654000 audit: BPF prog-id=3 op=UNLOAD Feb 9 12:04:27.359074 systemd[1]: Stopped target sockets.target. Feb 9 12:04:28.659000 audit: BPF prog-id=8 op=UNLOAD Feb 9 12:04:28.659000 audit: BPF prog-id=7 op=UNLOAD Feb 9 12:04:27.373941 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 12:04:27.374193 systemd[1]: Closed iscsid.socket. Feb 9 12:04:27.389906 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 12:04:28.706249 iscsid[911]: iscsid shutting down. Feb 9 12:04:27.390141 systemd[1]: Closed iscsiuio.socket. Feb 9 12:04:27.404010 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 12:04:27.404366 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 12:04:27.424076 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 12:04:27.424434 systemd[1]: Stopped ignition-files.service. Feb 9 12:04:28.706400 systemd-journald[266]: Received SIGTERM from PID 1 (n/a). Feb 9 12:04:27.446170 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 12:04:27.446553 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 12:04:27.536158 systemd[1]: Stopping ignition-mount.service... Feb 9 12:04:27.602117 systemd[1]: Stopping sysroot-boot.service... Feb 9 12:04:27.616701 systemd[1]: Stopping systemd-networkd.service... Feb 9 12:04:27.623400 systemd-networkd[881]: enp1s0f1np1: DHCPv6 lease lost Feb 9 12:04:27.630588 systemd-networkd[881]: enp1s0f0np0: DHCPv6 lease lost Feb 9 12:04:28.705000 audit: BPF prog-id=9 op=UNLOAD Feb 9 12:04:27.631654 systemd[1]: Stopping systemd-resolved.service... Feb 9 12:04:27.645569 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 12:04:27.645959 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 12:04:27.661975 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 12:04:27.662300 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 12:04:27.737093 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 12:04:27.737605 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 12:04:27.737647 systemd[1]: Stopped systemd-resolved.service. Feb 9 12:04:27.802595 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 12:04:27.802638 systemd[1]: Stopped systemd-networkd.service. Feb 9 12:04:27.868840 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 12:04:27.868879 systemd[1]: Stopped ignition-mount.service. Feb 9 12:04:27.935774 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 12:04:27.935811 systemd[1]: Stopped sysroot-boot.service. Feb 9 12:04:28.001842 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 12:04:28.001889 systemd[1]: Closed systemd-networkd.socket. Feb 9 12:04:28.095632 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 12:04:28.095653 systemd[1]: Stopped ignition-disks.service. Feb 9 12:04:28.120604 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 12:04:28.120642 systemd[1]: Stopped ignition-kargs.service. Feb 9 12:04:28.186620 systemd[1]: ignition-setup.service: Deactivated successfully. 
Feb 9 12:04:28.186642 systemd[1]: Stopped ignition-setup.service. Feb 9 12:04:28.202604 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 12:04:28.202626 systemd[1]: Stopped initrd-setup-root.service. Feb 9 12:04:28.221208 systemd[1]: Stopping network-cleanup.service... Feb 9 12:04:28.237569 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 12:04:28.237713 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 12:04:28.253763 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 12:04:28.253834 systemd[1]: Stopped systemd-sysctl.service. Feb 9 12:04:28.269954 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 12:04:28.270059 systemd[1]: Stopped systemd-modules-load.service. Feb 9 12:04:28.286111 systemd[1]: Stopping systemd-udevd.service... Feb 9 12:04:28.307683 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 12:04:28.309543 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 12:04:28.309864 systemd[1]: Stopped systemd-udevd.service. Feb 9 12:04:28.321144 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 12:04:28.321363 systemd[1]: Finished initrd-cleanup.service. Feb 9 12:04:28.339126 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 12:04:28.339235 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 12:04:28.349867 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 12:04:28.349960 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 12:04:28.364749 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 12:04:28.364879 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 12:04:28.385605 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 12:04:28.385707 systemd[1]: Stopped dracut-cmdline.service. Feb 9 12:04:28.400718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 12:04:28.400772 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 12:04:28.417570 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 12:04:28.431460 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 12:04:28.431493 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 12:04:28.431662 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 12:04:28.431682 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 12:04:28.453489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 12:04:28.453599 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 12:04:28.470632 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 12:04:28.471293 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 12:04:28.471390 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 12:04:28.587792 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 12:04:28.588001 systemd[1]: Stopped network-cleanup.service. Feb 9 12:04:28.604175 systemd[1]: Reached target initrd-switch-root.target. Feb 9 12:04:28.621696 systemd[1]: Starting initrd-switch-root.service... Feb 9 12:04:28.641493 systemd[1]: Switching root. Feb 9 12:04:28.707946 systemd-journald[266]: Journal stopped Feb 9 12:04:32.403218 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 12:04:32.403232 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 12:04:32.403241 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 12:04:32.403247 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 12:04:32.403252 kernel: SELinux: policy capability open_perms=1 Feb 9 12:04:32.403257 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 12:04:32.403263 kernel: SELinux: policy capability always_check_network=0 Feb 9 12:04:32.403269 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 12:04:32.403274 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 12:04:32.403281 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 12:04:32.403286 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 12:04:32.403292 systemd[1]: Successfully loaded SELinux policy in 319.049ms. Feb 9 12:04:32.403299 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.717ms. Feb 9 12:04:32.403306 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 12:04:32.403314 systemd[1]: Detected architecture x86-64. Feb 9 12:04:32.403320 systemd[1]: Detected first boot. Feb 9 12:04:32.403326 systemd[1]: Hostname set to . Feb 9 12:04:32.403333 systemd[1]: Initializing machine ID from random generator. Feb 9 12:04:32.403339 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 12:04:32.403345 systemd[1]: Populated /etc with preset unit settings. Feb 9 12:04:32.403351 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 12:04:32.403358 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 12:04:32.403365 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 12:04:32.403372 systemd[1]: Queued start job for default target multi-user.target. Feb 9 12:04:32.403378 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 12:04:32.403387 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 12:04:32.403395 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 12:04:32.403426 systemd[1]: Created slice system-getty.slice. Feb 9 12:04:32.403433 systemd[1]: Created slice system-modprobe.slice. Feb 9 12:04:32.403440 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 12:04:32.403462 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 12:04:32.403469 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 12:04:32.403475 systemd[1]: Created slice user.slice. Feb 9 12:04:32.403481 systemd[1]: Started systemd-ask-password-console.path. Feb 9 12:04:32.403487 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 12:04:32.403493 systemd[1]: Set up automount boot.automount. Feb 9 12:04:32.403501 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 12:04:32.403507 systemd[1]: Reached target integritysetup.target. Feb 9 12:04:32.403513 systemd[1]: Reached target remote-cryptsetup.target. 
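The unit-parsing warnings above (CPUShares= vs CPUWeight=, MemoryLimit= vs MemoryMax=, and the /var/run/docker.sock path) are systemd 252 flagging legacy directives in vendor unit files; the services still start. If one wanted to silence them without editing the files under /usr/lib, the usual route is a drop-in override that states the newer directives. A minimal sketch, with placeholder values that are not taken from this system:

  # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf  (hypothetical)
  [Service]
  CPUWeight=100
  MemoryMax=512M

followed by a systemctl daemon-reload. The docker.socket notice needs no action here: as the log itself states, systemd already rewrites the legacy /var/run/docker.sock path to /run/docker.sock.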
Feb 9 12:04:32.403520 systemd[1]: Reached target remote-fs.target. Feb 9 12:04:32.403528 systemd[1]: Reached target slices.target. Feb 9 12:04:32.403535 systemd[1]: Reached target swap.target. Feb 9 12:04:32.403541 systemd[1]: Reached target torcx.target. Feb 9 12:04:32.403548 systemd[1]: Reached target veritysetup.target. Feb 9 12:04:32.403555 systemd[1]: Listening on systemd-coredump.socket. Feb 9 12:04:32.403561 systemd[1]: Listening on systemd-initctl.socket. Feb 9 12:04:32.403568 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 12:04:32.403575 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 12:04:32.403581 systemd[1]: Listening on systemd-journald.socket. Feb 9 12:04:32.403588 systemd[1]: Listening on systemd-networkd.socket. Feb 9 12:04:32.403594 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 12:04:32.403601 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 12:04:32.403609 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 12:04:32.403616 systemd[1]: Mounting dev-hugepages.mount... Feb 9 12:04:32.403622 systemd[1]: Mounting dev-mqueue.mount... Feb 9 12:04:32.403629 systemd[1]: Mounting media.mount... Feb 9 12:04:32.403636 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 12:04:32.403642 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 12:04:32.403650 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 12:04:32.403657 systemd[1]: Mounting tmp.mount... Feb 9 12:04:32.403664 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 12:04:32.403670 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 12:04:32.403677 systemd[1]: Starting kmod-static-nodes.service... Feb 9 12:04:32.403684 systemd[1]: Starting modprobe@configfs.service... Feb 9 12:04:32.403690 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 12:04:32.403697 systemd[1]: Starting modprobe@drm.service... Feb 9 12:04:32.403704 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 12:04:32.403711 systemd[1]: Starting modprobe@fuse.service... Feb 9 12:04:32.403718 kernel: fuse: init (API version 7.34) Feb 9 12:04:32.403724 systemd[1]: Starting modprobe@loop.service... Feb 9 12:04:32.403731 kernel: loop: module loaded Feb 9 12:04:32.403738 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 12:04:32.403745 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 12:04:32.403752 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 12:04:32.403758 systemd[1]: Starting systemd-journald.service... Feb 9 12:04:32.403766 systemd[1]: Starting systemd-modules-load.service... Feb 9 12:04:32.403775 systemd-journald[1304]: Journal started Feb 9 12:04:32.403799 systemd-journald[1304]: Runtime Journal (/run/log/journal/430e2aae3c6b4914b00a413056e115c8) is 8.0M, max 640.1M, 632.1M free. 
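The journald line above shows the volatile runtime journal in /run/log/journal capped at 640.1M, a ceiling journald derives by default from the size of the backing filesystem. Those ceilings can be pinned explicitly; a minimal sketch, assuming a configuration drop-in with placeholder sizes:

  # /etc/systemd/journald.conf.d/10-size.conf  (illustrative values only)
  [Journal]
  RuntimeMaxUse=64M
  SystemMaxUse=1G

Later in this log, systemd-journal-flush.service asks journald to flush the runtime journal into the persistent System Journal under /var/log/journal, whose own 195.6M cap is reported at that point.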
Feb 9 12:04:31.787000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 12:04:31.787000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 12:04:32.399000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 12:04:32.399000 audit[1304]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd6b3b4b10 a2=4000 a3=7ffd6b3b4bac items=0 ppid=1 pid=1304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 12:04:32.399000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 12:04:32.436577 systemd[1]: Starting systemd-network-generator.service... Feb 9 12:04:32.458569 systemd[1]: Starting systemd-remount-fs.service... Feb 9 12:04:32.479438 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 12:04:32.514436 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 12:04:32.529416 systemd[1]: Started systemd-journald.service. Feb 9 12:04:32.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.538748 systemd[1]: Mounted dev-hugepages.mount. Feb 9 12:04:32.551393 kernel: kauditd_printk_skb: 41 callbacks suppressed Feb 9 12:04:32.551437 kernel: audit: type=1130 audit(1707480272.536:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.598659 systemd[1]: Mounted dev-mqueue.mount. Feb 9 12:04:32.605656 systemd[1]: Mounted media.mount. Feb 9 12:04:32.612644 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 12:04:32.621650 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 12:04:32.630652 systemd[1]: Mounted tmp.mount. Feb 9 12:04:32.637740 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 12:04:32.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.646822 systemd[1]: Finished kmod-static-nodes.service. Feb 9 12:04:32.689554 kernel: audit: type=1130 audit(1707480272.645:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.697713 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 12:04:32.697789 systemd[1]: Finished modprobe@configfs.service. 
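The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop entries in this stretch of the log are all instances of one systemd template unit: the text after the "@" becomes the instance name and is handed to modprobe through a specifier. A simplified sketch of such a template (not the literal unit shipped with systemd):

  # modprobe@.service  (simplified sketch)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target

  [Service]
  Type=oneshot
  ExecStart=/usr/sbin/modprobe -ab %i

Starting modprobe@loop.service therefore runs modprobe for the loop module, which matches the "loop: module loaded" kernel line above.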
Feb 9 12:04:32.741428 kernel: audit: type=1130 audit(1707480272.696:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.749749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 12:04:32.749821 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 12:04:32.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.795387 kernel: audit: type=1130 audit(1707480272.748:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.795400 kernel: audit: type=1131 audit(1707480272.748:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.850743 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 12:04:32.850817 systemd[1]: Finished modprobe@drm.service. Feb 9 12:04:32.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.899387 kernel: audit: type=1130 audit(1707480272.849:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.899400 kernel: audit: type=1131 audit(1707480272.849:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:32.957746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 12:04:32.957821 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 12:04:33.009434 kernel: audit: type=1130 audit(1707480272.956:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.009453 kernel: audit: type=1131 audit(1707480272.956:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:32.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.070724 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 12:04:33.070795 systemd[1]: Finished modprobe@fuse.service. Feb 9 12:04:33.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.124394 kernel: audit: type=1130 audit(1707480273.069:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.132721 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 12:04:33.132798 systemd[1]: Finished modprobe@loop.service. Feb 9 12:04:33.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.141804 systemd[1]: Finished systemd-modules-load.service. Feb 9 12:04:33.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.150807 systemd[1]: Finished systemd-network-generator.service. Feb 9 12:04:33.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.159753 systemd[1]: Finished systemd-remount-fs.service. Feb 9 12:04:33.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.169772 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 12:04:33.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:33.178776 systemd[1]: Reached target network-pre.target. Feb 9 12:04:33.188404 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 12:04:33.197586 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 12:04:33.204579 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 12:04:33.206460 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 12:04:33.213985 systemd[1]: Starting systemd-journal-flush.service... Feb 9 12:04:33.217100 systemd-journald[1304]: Time spent on flushing to /var/log/journal/430e2aae3c6b4914b00a413056e115c8 is 14.002ms for 1557 entries. Feb 9 12:04:33.217100 systemd-journald[1304]: System Journal (/var/log/journal/430e2aae3c6b4914b00a413056e115c8) is 8.0M, max 195.6M, 187.6M free. Feb 9 12:04:33.266766 systemd-journald[1304]: Received client request to flush runtime journal. Feb 9 12:04:33.230498 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 12:04:33.231039 systemd[1]: Starting systemd-random-seed.service... Feb 9 12:04:33.249507 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 12:04:33.250069 systemd[1]: Starting systemd-sysctl.service... Feb 9 12:04:33.257064 systemd[1]: Starting systemd-sysusers.service... Feb 9 12:04:33.265138 systemd[1]: Starting systemd-udev-settle.service... Feb 9 12:04:33.273714 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 12:04:33.281580 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 12:04:33.289657 systemd[1]: Finished systemd-journal-flush.service. Feb 9 12:04:33.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.297665 systemd[1]: Finished systemd-random-seed.service. Feb 9 12:04:33.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.305640 systemd[1]: Finished systemd-sysctl.service. Feb 9 12:04:33.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.314625 systemd[1]: Finished systemd-sysusers.service. Feb 9 12:04:33.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.323554 systemd[1]: Reached target first-boot-complete.target. Feb 9 12:04:33.332185 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 12:04:33.341650 udevadm[1330]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 12:04:33.350693 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 12:04:33.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:33.513746 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 12:04:33.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.523358 systemd[1]: Starting systemd-udevd.service... Feb 9 12:04:33.535161 systemd-udevd[1339]: Using default interface naming scheme 'v252'. Feb 9 12:04:33.554587 systemd[1]: Started systemd-udevd.service. Feb 9 12:04:33.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.566022 systemd[1]: Found device dev-ttyS1.device. Feb 9 12:04:33.592682 systemd[1]: Starting systemd-networkd.service... Feb 9 12:04:33.612712 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 12:04:33.612791 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 12:04:33.612818 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1353) Feb 9 12:04:33.612841 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 12:04:33.664393 kernel: IPMI message handler: version 39.2 Feb 9 12:04:33.687529 kernel: ACPI: button: Power Button [PWRF] Feb 9 12:04:33.692757 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 12:04:33.693544 systemd[1]: Starting systemd-userdbd.service... Feb 9 12:04:33.708391 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 12:04:33.641000 audit[1397]: AVC avc: denied { confidentiality } for pid=1397 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 12:04:33.641000 audit[1397]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b73556ef20 a1=4d8bc a2=7f3857e19bc5 a3=5 items=42 ppid=1339 pid=1397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 12:04:33.641000 audit: CWD cwd="/" Feb 9 12:04:33.641000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=1 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=2 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=3 name=(null) inode=14744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=4 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=5 name=(null) inode=14745 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=6 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=7 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=8 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=9 name=(null) inode=14747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=10 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=11 name=(null) inode=14748 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=12 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=13 name=(null) inode=14749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=14 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=15 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=16 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=17 name=(null) inode=14751 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=18 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=19 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=20 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=21 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=22 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=23 name=(null) inode=14754 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=24 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=25 name=(null) inode=14755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=26 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=27 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=28 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=29 name=(null) inode=14757 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=30 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=31 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=32 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=33 name=(null) inode=14759 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=34 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=35 name=(null) inode=14760 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=36 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=37 name=(null) inode=14761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH 
item=38 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=39 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=40 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PATH item=41 name=(null) inode=14763 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 12:04:33.641000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 12:04:33.770397 kernel: ipmi device interface Feb 9 12:04:33.789797 systemd[1]: Started systemd-userdbd.service. Feb 9 12:04:33.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:33.803392 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 12:04:33.803518 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 12:04:33.803614 kernel: ipmi_si: IPMI System Interface driver Feb 9 12:04:33.803630 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 12:04:33.803709 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 12:04:33.803723 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 12:04:33.803735 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 12:04:33.803809 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 12:04:33.987387 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 12:04:34.030379 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 12:04:34.030599 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 12:04:34.077001 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 12:04:34.077211 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 12:04:34.125058 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 12:04:34.153425 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 12:04:34.153450 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 9 12:04:34.248146 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 9 12:04:34.248264 kernel: iTCO_wdt iTCO_wdt: initialized. 
heartbeat=30 sec (nowayout=0) Feb 9 12:04:34.248333 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 9 12:04:34.321571 kernel: intel_rapl_common: Found RAPL domain package Feb 9 12:04:34.321622 kernel: intel_rapl_common: Found RAPL domain core Feb 9 12:04:34.321641 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 12:04:34.326299 systemd-networkd[1414]: bond0: netdev ready Feb 9 12:04:34.328493 systemd-networkd[1414]: lo: Link UP Feb 9 12:04:34.328497 systemd-networkd[1414]: lo: Gained carrier Feb 9 12:04:34.329007 systemd-networkd[1414]: Enumeration completed Feb 9 12:04:34.329089 systemd[1]: Started systemd-networkd.service. Feb 9 12:04:34.329324 systemd-networkd[1414]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 9 12:04:34.331713 systemd-networkd[1414]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:79.network. Feb 9 12:04:34.341805 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 12:04:34.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:34.395390 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 12:04:34.398653 systemd[1]: Finished systemd-udev-settle.service. Feb 9 12:04:34.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:34.408205 systemd[1]: Starting lvm2-activation-early.service... Feb 9 12:04:34.424145 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 12:04:34.456764 systemd[1]: Finished lvm2-activation-early.service. Feb 9 12:04:34.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:34.466564 systemd[1]: Reached target cryptsetup.target. Feb 9 12:04:34.475103 systemd[1]: Starting lvm2-activation.service... Feb 9 12:04:34.477191 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 12:04:34.510850 systemd[1]: Finished lvm2-activation.service. Feb 9 12:04:34.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:34.519564 systemd[1]: Reached target local-fs-pre.target. Feb 9 12:04:34.527469 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 12:04:34.527484 systemd[1]: Reached target local-fs.target. Feb 9 12:04:34.535472 systemd[1]: Reached target machines.target. Feb 9 12:04:34.544154 systemd[1]: Starting ldconfig.service... Feb 9 12:04:34.551107 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 12:04:34.551129 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 12:04:34.551688 systemd[1]: Starting systemd-boot-update.service... 
Feb 9 12:04:34.558903 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 12:04:34.569007 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 12:04:34.569226 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 12:04:34.569249 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 12:04:34.569774 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 12:04:34.569959 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1447 (bootctl) Feb 9 12:04:34.570700 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 12:04:34.589871 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 12:04:34.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:34.597213 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 12:04:34.606875 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 12:04:34.615832 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 12:04:34.795468 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 12:04:34.822436 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 9 12:04:34.824412 systemd-networkd[1414]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Feb 9 12:04:34.876390 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 12:04:35.000434 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 9 12:04:35.000767 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 12:04:35.023444 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 9 12:04:35.044434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 12:04:35.065697 systemd-networkd[1414]: bond0: Link UP Feb 9 12:04:35.065889 systemd-networkd[1414]: enp1s0f1np1: Link UP Feb 9 12:04:35.066013 systemd-networkd[1414]: enp1s0f1np1: Gained carrier Feb 9 12:04:35.066952 systemd-networkd[1414]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Feb 9 12:04:35.105425 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.126423 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.148387 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.170389 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.170972 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 12:04:35.171358 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 12:04:35.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 12:04:35.189852 systemd-fsck[1456]: fsck.fat 4.2 (2021-01-31) Feb 9 12:04:35.189852 systemd-fsck[1456]: /dev/sdb1: 789 files, 115332/258078 clusters Feb 9 12:04:35.190387 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.190502 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 12:04:35.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:35.209206 systemd[1]: Mounting boot.mount... Feb 9 12:04:35.210388 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.223557 systemd[1]: Mounted boot.mount. Feb 9 12:04:35.230388 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.250387 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.255165 systemd[1]: Finished systemd-boot-update.service. Feb 9 12:04:35.269390 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:35.288388 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.307388 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.312521 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 12:04:35.324389 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 12:04:35.341234 systemd[1]: Starting audit-rules.service... Feb 9 12:04:35.341388 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.355000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 12:04:35.355000 audit[1480]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd8ccd08b0 a2=420 a3=0 items=0 ppid=1465 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 12:04:35.355000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 12:04:35.356822 augenrules[1480]: No rules Feb 9 12:04:35.357179 systemd[1]: Starting clean-ca-certificates.service... Feb 9 12:04:35.358387 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.375086 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 12:04:35.376391 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.392262 systemd[1]: Starting systemd-resolved.service... Feb 9 12:04:35.394388 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.409236 systemd[1]: Starting systemd-timesyncd.service... 
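A note on the audit records above: the PROCTITLE value is a hex-encoded, NUL-separated argv, and the string logged here decodes to /sbin/auditctl -R /etc/audit/audit.rules, which is consistent with the augenrules "No rules" result. A minimal sketch for decoding such a record; the hex string is copied from the log, while the tooling (xxd, tr) is a common choice and an assumption, not something shown in the log:
# Decode the hex-encoded PROCTITLE field into a readable command line.
# Expected output: /sbin/auditctl -R /etc/audit/audit.rules
echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
  | xxd -r -p | tr '\0' ' ' ; echo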
Feb 9 12:04:35.412178 ldconfig[1446]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 12:04:35.412388 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.428010 systemd[1]: Starting systemd-update-utmp.service... Feb 9 12:04:35.430388 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.444754 systemd[1]: Finished ldconfig.service. Feb 9 12:04:35.448029 systemd-networkd[1414]: enp1s0f0np0: Link UP Feb 9 12:04:35.448196 systemd-networkd[1414]: bond0: Gained carrier Feb 9 12:04:35.448282 systemd-networkd[1414]: enp1s0f0np0: Gained carrier Feb 9 12:04:35.448389 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.462868 systemd[1]: Finished audit-rules.service. Feb 9 12:04:35.481441 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 12:04:35.481462 kernel: bond0: (slave enp1s0f1np1): link status definitely down, disabling slave Feb 9 12:04:35.481480 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 12:04:35.481754 systemd[1]: Finished clean-ca-certificates.service. Feb 9 12:04:35.528313 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 9 12:04:35.528335 kernel: bond0: active interface up! Feb 9 12:04:35.528558 systemd-networkd[1414]: enp1s0f1np1: Link DOWN Feb 9 12:04:35.528561 systemd-networkd[1414]: enp1s0f1np1: Lost carrier Feb 9 12:04:35.533628 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 12:04:35.545277 systemd[1]: Starting systemd-update-done.service... Feb 9 12:04:35.552473 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 12:04:35.552800 systemd[1]: Finished systemd-update-utmp.service. Feb 9 12:04:35.560619 systemd[1]: Finished systemd-update-done.service. Feb 9 12:04:35.578489 systemd[1]: Started systemd-timesyncd.service. Feb 9 12:04:35.579895 systemd-resolved[1490]: Positive Trust Anchors: Feb 9 12:04:35.579902 systemd-resolved[1490]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 12:04:35.579921 systemd-resolved[1490]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 12:04:35.583881 systemd-resolved[1490]: Using system hostname 'ci-3510.3.2-a-b58f4ff548'. Feb 9 12:04:35.586598 systemd[1]: Reached target time-set.target. Feb 9 12:04:35.672390 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 12:04:35.676043 systemd-networkd[1414]: enp1s0f1np1: Link UP Feb 9 12:04:35.676209 systemd-networkd[1414]: enp1s0f1np1: Gained carrier Feb 9 12:04:35.676939 systemd[1]: Started systemd-resolved.service. Feb 9 12:04:35.685492 systemd[1]: Reached target network.target. Feb 9 12:04:35.693466 systemd[1]: Reached target nss-lookup.target. Feb 9 12:04:35.701503 systemd[1]: Reached target sysinit.target. Feb 9 12:04:35.709531 systemd[1]: Started motdgen.path. 
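For the bonding messages above (bond0 managed by systemd-networkd from /etc/systemd/network/05-bond0.network, slaves enp1s0f0np0 and enp1s0f1np1, repeated "No 802.3ad response from the link partner" warnings while the LACP negotiation settles), the live bond state can be inspected with standard interfaces. A brief illustrative sketch, assuming shell access to the booted host; only the interface names are taken from the log:
# systemd-networkd's view of the bond and its slaves
networkctl status bond0 enp1s0f0np0 enp1s0f1np1
# The bonding driver's own state: mode, 802.3ad partner details, per-slave link status
cat /proc/net/bonding/bond0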
Feb 9 12:04:35.716481 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 12:04:35.726551 systemd[1]: Started logrotate.timer. Feb 9 12:04:35.733530 systemd[1]: Started mdadm.timer. Feb 9 12:04:35.747596 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 12:04:35.755394 kernel: bond0: (slave enp1s0f1np1): link status up, enabling it in 200 ms Feb 9 12:04:35.755422 kernel: bond0: (slave enp1s0f1np1): invalid new link 3 on slave Feb 9 12:04:35.777465 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 12:04:35.777479 systemd[1]: Reached target paths.target. Feb 9 12:04:35.784461 systemd[1]: Reached target timers.target. Feb 9 12:04:35.791591 systemd[1]: Listening on dbus.socket. Feb 9 12:04:35.799080 systemd[1]: Starting docker.socket... Feb 9 12:04:35.806229 systemd[1]: Listening on sshd.socket. Feb 9 12:04:35.812534 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 12:04:35.812750 systemd[1]: Listening on docker.socket. Feb 9 12:04:35.819503 systemd[1]: Reached target sockets.target. Feb 9 12:04:35.827480 systemd[1]: Reached target basic.target. Feb 9 12:04:35.834522 systemd[1]: System is tainted: cgroupsv1 Feb 9 12:04:35.834546 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 12:04:35.834558 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 12:04:35.835059 systemd[1]: Starting containerd.service... Feb 9 12:04:35.841910 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 12:04:35.851032 systemd[1]: Starting coreos-metadata.service... Feb 9 12:04:35.858969 systemd[1]: Starting dbus.service... Feb 9 12:04:35.865294 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 12:04:35.870034 jq[1509]: false Feb 9 12:04:35.871964 coreos-metadata[1502]: Feb 09 12:04:35.871 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 12:04:35.874125 systemd[1]: Starting extend-filesystems.service... Feb 9 12:04:35.878376 dbus-daemon[1508]: [system] SELinux support is enabled Feb 9 12:04:35.880525 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 12:04:35.881174 systemd[1]: Starting motdgen.service... 
Feb 9 12:04:35.881239 coreos-metadata[1505]: Feb 09 12:04:35.881 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 12:04:35.881951 extend-filesystems[1512]: Found sda Feb 9 12:04:35.900139 extend-filesystems[1512]: Found sdb Feb 9 12:04:35.900139 extend-filesystems[1512]: Found sdb1 Feb 9 12:04:35.900139 extend-filesystems[1512]: Found sdb2 Feb 9 12:04:35.900139 extend-filesystems[1512]: Found sdb3 Feb 9 12:04:35.900139 extend-filesystems[1512]: Found usr Feb 9 12:04:35.900139 extend-filesystems[1512]: Found sdb4 Feb 9 12:04:35.900139 extend-filesystems[1512]: Found sdb6 Feb 9 12:04:35.900139 extend-filesystems[1512]: Found sdb7 Feb 9 12:04:35.900139 extend-filesystems[1512]: Found sdb9 Feb 9 12:04:35.900139 extend-filesystems[1512]: Checking size of /dev/sdb9 Feb 9 12:04:35.900139 extend-filesystems[1512]: Resized partition /dev/sdb9 Feb 9 12:04:36.016506 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Feb 9 12:04:36.016527 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 12:04:35.888321 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 12:04:36.016622 extend-filesystems[1527]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 12:04:35.921228 systemd[1]: Starting prepare-critools.service... Feb 9 12:04:35.949111 systemd[1]: Starting prepare-helm.service... Feb 9 12:04:35.967046 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 12:04:35.991074 systemd[1]: Starting sshd-keygen.service... Feb 9 12:04:36.009519 systemd[1]: Starting systemd-logind.service... Feb 9 12:04:36.028445 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 12:04:36.029052 systemd[1]: Starting tcsd.service... Feb 9 12:04:36.033325 systemd-logind[1545]: Watching system buttons on /dev/input/event3 (Power Button) Feb 9 12:04:36.033335 systemd-logind[1545]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 12:04:36.033343 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 9 12:04:36.033453 systemd-logind[1545]: New seat seat0. Feb 9 12:04:36.042065 systemd[1]: Starting update-engine.service... Feb 9 12:04:36.048984 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 12:04:36.050569 jq[1548]: true Feb 9 12:04:36.058805 systemd[1]: Started dbus.service. Feb 9 12:04:36.068371 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 12:04:36.068500 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 12:04:36.068673 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 12:04:36.068795 systemd[1]: Finished motdgen.service. Feb 9 12:04:36.077570 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 12:04:36.077682 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 12:04:36.083840 tar[1552]: ./ Feb 9 12:04:36.083840 tar[1552]: ./macvlan Feb 9 12:04:36.085861 update_engine[1547]: I0209 12:04:36.085304 1547 main.cc:92] Flatcar Update Engine starting Feb 9 12:04:36.088167 jq[1558]: true Feb 9 12:04:36.088826 tar[1554]: linux-amd64/helm Feb 9 12:04:36.089229 dbus-daemon[1508]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 12:04:36.089487 update_engine[1547]: I0209 12:04:36.089470 1547 update_check_scheduler.cc:74] Next update check in 2m27s Feb 9 12:04:36.090030 tar[1553]: crictl Feb 9 12:04:36.093137 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 12:04:36.093281 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 9 12:04:36.096703 systemd[1]: Started systemd-logind.service. Feb 9 12:04:36.099799 env[1559]: time="2024-02-09T12:04:36.099746097Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 12:04:36.105351 tar[1552]: ./static Feb 9 12:04:36.108614 systemd[1]: Started update-engine.service. Feb 9 12:04:36.111659 env[1559]: time="2024-02-09T12:04:36.111640586Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 12:04:36.111729 env[1559]: time="2024-02-09T12:04:36.111718312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 12:04:36.112346 env[1559]: time="2024-02-09T12:04:36.112327981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 12:04:36.112346 env[1559]: time="2024-02-09T12:04:36.112344963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 12:04:36.113424 env[1559]: time="2024-02-09T12:04:36.113410872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 12:04:36.113699 env[1559]: time="2024-02-09T12:04:36.113423553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 12:04:36.113699 env[1559]: time="2024-02-09T12:04:36.113433993Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 12:04:36.113699 env[1559]: time="2024-02-09T12:04:36.113440109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 12:04:36.113699 env[1559]: time="2024-02-09T12:04:36.113482710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 12:04:36.113781 env[1559]: time="2024-02-09T12:04:36.113772097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 12:04:36.113868 env[1559]: time="2024-02-09T12:04:36.113854087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 12:04:36.113868 env[1559]: time="2024-02-09T12:04:36.113867430Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 12:04:36.113919 env[1559]: time="2024-02-09T12:04:36.113892819Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 12:04:36.113919 env[1559]: time="2024-02-09T12:04:36.113899990Z" level=info msg="metadata content store policy set" policy=shared Feb 9 12:04:36.116107 bash[1584]: Updated "/home/core/.ssh/authorized_keys" Feb 9 12:04:36.116704 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 12:04:36.123041 env[1559]: time="2024-02-09T12:04:36.123004328Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 12:04:36.123041 env[1559]: time="2024-02-09T12:04:36.123020179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 12:04:36.123041 env[1559]: time="2024-02-09T12:04:36.123029233Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 12:04:36.123103 env[1559]: time="2024-02-09T12:04:36.123047456Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 12:04:36.123103 env[1559]: time="2024-02-09T12:04:36.123056159Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 12:04:36.123103 env[1559]: time="2024-02-09T12:04:36.123063668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 12:04:36.123103 env[1559]: time="2024-02-09T12:04:36.123070316Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 12:04:36.123103 env[1559]: time="2024-02-09T12:04:36.123077535Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 12:04:36.123103 env[1559]: time="2024-02-09T12:04:36.123085063Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 12:04:36.123103 env[1559]: time="2024-02-09T12:04:36.123092442Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 12:04:36.123103 env[1559]: time="2024-02-09T12:04:36.123099316Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 12:04:36.123233 env[1559]: time="2024-02-09T12:04:36.123106190Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 12:04:36.123233 env[1559]: time="2024-02-09T12:04:36.123165660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 12:04:36.123233 env[1559]: time="2024-02-09T12:04:36.123209978Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 12:04:36.123723 env[1559]: time="2024-02-09T12:04:36.123699630Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 9 12:04:36.123838 env[1559]: time="2024-02-09T12:04:36.123796018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.123838 env[1559]: time="2024-02-09T12:04:36.123814048Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 12:04:36.123892 env[1559]: time="2024-02-09T12:04:36.123864922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.123892 env[1559]: time="2024-02-09T12:04:36.123878790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.123923 env[1559]: time="2024-02-09T12:04:36.123890905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.123923 env[1559]: time="2024-02-09T12:04:36.123902180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.123923 env[1559]: time="2024-02-09T12:04:36.123914073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.123987 env[1559]: time="2024-02-09T12:04:36.123925322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.123987 env[1559]: time="2024-02-09T12:04:36.123936043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.123987 env[1559]: time="2024-02-09T12:04:36.123946876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.123987 env[1559]: time="2024-02-09T12:04:36.123959625Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 12:04:36.124051 env[1559]: time="2024-02-09T12:04:36.124041917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.124069 env[1559]: time="2024-02-09T12:04:36.124055168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.124086 env[1559]: time="2024-02-09T12:04:36.124066687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 12:04:36.124086 env[1559]: time="2024-02-09T12:04:36.124077569Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 12:04:36.124131 env[1559]: time="2024-02-09T12:04:36.124091315Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 12:04:36.124131 env[1559]: time="2024-02-09T12:04:36.124103847Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 12:04:36.124131 env[1559]: time="2024-02-09T12:04:36.124118488Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 12:04:36.124180 env[1559]: time="2024-02-09T12:04:36.124144529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 12:04:36.124341 env[1559]: time="2024-02-09T12:04:36.124297771Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124349575Z" level=info msg="Connect containerd service" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124373403Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124725010Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124816362Z" level=info msg="Start subscribing containerd event" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124846601Z" level=info msg="Start recovering state" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124878626Z" level=info msg="Start event monitor" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124878894Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124885783Z" level=info msg="Start snapshots syncer" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124895294Z" level=info msg="Start cni network conf syncer for default" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124902593Z" level=info msg="Start streaming server" Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124908666Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 12:04:36.126201 env[1559]: time="2024-02-09T12:04:36.124940053Z" level=info msg="containerd successfully booted in 0.025657s" Feb 9 12:04:36.126706 systemd[1]: Started containerd.service. Feb 9 12:04:36.129922 tar[1552]: ./vlan Feb 9 12:04:36.135094 systemd[1]: Started locksmithd.service. Feb 9 12:04:36.141519 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 12:04:36.141606 systemd[1]: Reached target system-config.target. Feb 9 12:04:36.149500 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 12:04:36.149591 systemd[1]: Reached target user-config.target. Feb 9 12:04:36.150013 tar[1552]: ./portmap Feb 9 12:04:36.169114 tar[1552]: ./host-local Feb 9 12:04:36.185904 tar[1552]: ./vrf Feb 9 12:04:36.194948 locksmithd[1600]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 12:04:36.204072 tar[1552]: ./bridge Feb 9 12:04:36.225811 tar[1552]: ./tuning Feb 9 12:04:36.243137 tar[1552]: ./firewall Feb 9 12:04:36.265591 tar[1552]: ./host-device Feb 9 12:04:36.285179 tar[1552]: ./sbr Feb 9 12:04:36.303094 tar[1552]: ./loopback Feb 9 12:04:36.320075 tar[1552]: ./dhcp Feb 9 12:04:36.341129 tar[1554]: linux-amd64/LICENSE Feb 9 12:04:36.341208 tar[1554]: linux-amd64/README.md Feb 9 12:04:36.343881 systemd[1]: Finished prepare-helm.service. Feb 9 12:04:36.352810 systemd[1]: Finished prepare-critools.service. Feb 9 12:04:36.369324 tar[1552]: ./ptp Feb 9 12:04:36.390360 tar[1552]: ./ipvlan Feb 9 12:04:36.397435 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Feb 9 12:04:36.424105 extend-filesystems[1527]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Feb 9 12:04:36.424105 extend-filesystems[1527]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 12:04:36.424105 extend-filesystems[1527]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Feb 9 12:04:36.461486 extend-filesystems[1512]: Resized filesystem in /dev/sdb9 Feb 9 12:04:36.424482 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 12:04:36.469547 tar[1552]: ./bandwidth Feb 9 12:04:36.424607 systemd[1]: Finished extend-filesystems.service. Feb 9 12:04:36.461936 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 12:04:36.560513 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 12:04:36.571755 systemd[1]: Finished sshd-keygen.service. Feb 9 12:04:36.579390 systemd[1]: Starting issuegen.service... Feb 9 12:04:36.586692 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 12:04:36.586794 systemd[1]: Finished issuegen.service. Feb 9 12:04:36.594231 systemd[1]: Starting systemd-user-sessions.service... Feb 9 12:04:36.602691 systemd[1]: Finished systemd-user-sessions.service. Feb 9 12:04:36.611057 systemd[1]: Started getty@tty1.service. 
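The extend-filesystems messages above record an on-line grow of the root filesystem on /dev/sdb9 to 116605649 4k blocks, following the earlier "Resized partition /dev/sdb9" step. A minimal sketch of the equivalent manual operation that this service automates, assuming the partition has already been enlarged; purely illustrative:
# Grow a mounted ext4 filesystem to fill its (already enlarged) partition; ext4 supports this on-line
resize2fs /dev/sdb9
# Confirm the new size of the root filesystem
df -h /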
Feb 9 12:04:36.618053 systemd[1]: Started serial-getty@ttyS1.service. Feb 9 12:04:36.626549 systemd[1]: Reached target getty.target. Feb 9 12:04:37.068526 systemd-networkd[1414]: bond0: Gained IPv6LL Feb 9 12:04:37.742619 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 12:04:41.638525 login[1637]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 12:04:41.646740 systemd-logind[1545]: New session 1 of user core. Feb 9 12:04:41.647173 systemd[1]: Created slice user-500.slice. Feb 9 12:04:41.647646 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 12:04:41.647686 login[1636]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 12:04:41.649538 systemd-logind[1545]: New session 2 of user core. Feb 9 12:04:41.653017 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 12:04:41.653623 systemd[1]: Starting user@500.service... Feb 9 12:04:41.655550 (systemd)[1643]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:04:41.741012 coreos-metadata[1505]: Feb 09 12:04:41.740 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 12:04:41.741488 coreos-metadata[1502]: Feb 09 12:04:41.740 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 12:04:41.785020 systemd[1643]: Queued start job for default target default.target. Feb 9 12:04:41.785120 systemd[1643]: Reached target paths.target. Feb 9 12:04:41.785131 systemd[1643]: Reached target sockets.target. Feb 9 12:04:41.785139 systemd[1643]: Reached target timers.target. Feb 9 12:04:41.785145 systemd[1643]: Reached target basic.target. Feb 9 12:04:41.785165 systemd[1643]: Reached target default.target. Feb 9 12:04:41.785178 systemd[1643]: Startup finished in 126ms. Feb 9 12:04:41.785239 systemd[1]: Started user@500.service. Feb 9 12:04:41.785751 systemd[1]: Started session-1.scope. Feb 9 12:04:41.786055 systemd[1]: Started session-2.scope. Feb 9 12:04:42.741424 coreos-metadata[1505]: Feb 09 12:04:42.741 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 12:04:42.742497 coreos-metadata[1502]: Feb 09 12:04:42.741 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 12:04:43.163428 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 9 12:04:43.170435 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 9 12:04:43.795006 systemd[1]: Created slice system-sshd.slice. Feb 9 12:04:43.795662 systemd[1]: Started sshd@0-139.178.89.23:22-147.75.109.163:49630.service. Feb 9 12:04:43.811713 coreos-metadata[1505]: Feb 09 12:04:43.811 INFO Fetch successful Feb 9 12:04:43.812176 coreos-metadata[1502]: Feb 09 12:04:43.812 INFO Fetch successful Feb 9 12:04:43.834159 systemd[1]: Finished coreos-metadata.service. Feb 9 12:04:43.835241 systemd[1]: Started packet-phone-home.service. 
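The two coreos-metadata "Failed to fetch" lines above are transient name-resolution failures for metadata.packet.net; the retry on attempt #2 succeeds shortly afterwards. A small sketch of how resolution could be checked by hand through systemd-resolved, which this log shows running; the command is ordinary resolvectl usage and not taken from the log:
# Ask the local stub resolver for the metadata endpoint used by coreos-metadata
resolvectl query metadata.packet.net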
Feb 9 12:04:43.836876 unknown[1502]: wrote ssh authorized keys file for user: core Feb 9 12:04:43.841930 curl[1672]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 12:04:43.841930 curl[1672]: Dload Upload Total Spent Left Speed Feb 9 12:04:43.848336 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys" Feb 9 12:04:43.848568 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 12:04:43.848732 systemd[1]: Reached target multi-user.target. Feb 9 12:04:43.849449 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 12:04:43.853013 sshd[1665]: Accepted publickey for core from 147.75.109.163 port 49630 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:04:43.853191 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 12:04:43.853294 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 12:04:43.853450 systemd[1]: Startup finished in 20.944s (kernel) + 14.924s (userspace) = 35.869s. Feb 9 12:04:43.853777 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:04:43.855829 systemd-logind[1545]: New session 3 of user core. Feb 9 12:04:43.856477 systemd[1]: Started session-3.scope. Feb 9 12:04:43.903105 systemd[1]: Started sshd@1-139.178.89.23:22-147.75.109.163:57542.service. Feb 9 12:04:43.939250 sshd[1682]: Accepted publickey for core from 147.75.109.163 port 57542 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:04:43.939954 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:04:43.941956 systemd-logind[1545]: New session 4 of user core. Feb 9 12:04:43.942622 systemd[1]: Started session-4.scope. Feb 9 12:04:44.851974 systemd-timesyncd[1491]: Contacted time server 205.233.73.201:123 (0.flatcar.pool.ntp.org). Feb 9 12:04:44.851980 systemd-resolved[1490]: Clock change detected. Flushing caches. Feb 9 12:04:44.852002 systemd-timesyncd[1491]: Initial clock synchronization to Fri 2024-02-09 12:04:44.851903 UTC. Feb 9 12:04:44.882891 sshd[1682]: pam_unix(sshd:session): session closed for user core Feb 9 12:04:44.884626 systemd[1]: Started sshd@2-139.178.89.23:22-147.75.109.163:57544.service. Feb 9 12:04:44.885026 systemd[1]: sshd@1-139.178.89.23:22-147.75.109.163:57542.service: Deactivated successfully. Feb 9 12:04:44.885560 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. Feb 9 12:04:44.885632 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 12:04:44.886150 systemd-logind[1545]: Removed session 4. Feb 9 12:04:44.921606 sshd[1688]: Accepted publickey for core from 147.75.109.163 port 57544 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:04:44.922513 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:04:44.922681 curl[1672]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 9 12:04:44.923221 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 12:04:44.925468 systemd-logind[1545]: New session 5 of user core. Feb 9 12:04:44.926528 systemd[1]: Started session-5.scope. Feb 9 12:04:44.979094 sshd[1688]: pam_unix(sshd:session): session closed for user core Feb 9 12:04:44.981008 systemd[1]: Started sshd@3-139.178.89.23:22-147.75.109.163:57556.service. Feb 9 12:04:44.981407 systemd[1]: sshd@2-139.178.89.23:22-147.75.109.163:57544.service: Deactivated successfully. 
Feb 9 12:04:44.981902 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. Feb 9 12:04:44.981973 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 12:04:44.982493 systemd-logind[1545]: Removed session 5. Feb 9 12:04:45.017979 sshd[1696]: Accepted publickey for core from 147.75.109.163 port 57556 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:04:45.019009 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:04:45.022296 systemd-logind[1545]: New session 6 of user core. Feb 9 12:04:45.023440 systemd[1]: Started session-6.scope. Feb 9 12:04:45.078529 sshd[1696]: pam_unix(sshd:session): session closed for user core Feb 9 12:04:45.080195 systemd[1]: Started sshd@4-139.178.89.23:22-147.75.109.163:57560.service. Feb 9 12:04:45.080634 systemd[1]: sshd@3-139.178.89.23:22-147.75.109.163:57556.service: Deactivated successfully. Feb 9 12:04:45.081115 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. Feb 9 12:04:45.081182 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 12:04:45.081680 systemd-logind[1545]: Removed session 6. Feb 9 12:04:45.117037 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 57560 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:04:45.117822 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:04:45.120575 systemd-logind[1545]: New session 7 of user core. Feb 9 12:04:45.122505 systemd[1]: Started session-7.scope. Feb 9 12:04:45.123369 systemd[1]: Started sshd@5-139.178.89.23:22-209.97.179.25:51650.service. Feb 9 12:04:45.204889 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 12:04:45.205514 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 12:04:45.929884 sshd[1707]: Invalid user hayoung from 209.97.179.25 port 51650 Feb 9 12:04:45.930581 systemd[1]: Started sshd@6-139.178.89.23:22-198.12.118.109:46477.service. Feb 9 12:04:45.931128 sshd[1707]: pam_faillock(sshd:auth): User unknown Feb 9 12:04:45.931445 sshd[1707]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:04:45.931478 sshd[1707]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:04:45.931722 sshd[1707]: pam_faillock(sshd:auth): User unknown Feb 9 12:04:46.111868 sshd[1715]: Invalid user yolanda from 198.12.118.109 port 46477 Feb 9 12:04:46.117981 sshd[1715]: pam_faillock(sshd:auth): User unknown Feb 9 12:04:46.118691 sshd[1715]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:04:46.118708 sshd[1715]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:04:46.118902 sshd[1715]: pam_faillock(sshd:auth): User unknown Feb 9 12:04:47.764397 sshd[1715]: Failed password for invalid user yolanda from 198.12.118.109 port 46477 ssh2 Feb 9 12:04:47.773019 sshd[1707]: Failed password for invalid user hayoung from 209.97.179.25 port 51650 ssh2 Feb 9 12:04:47.858538 sshd[1715]: Received disconnect from 198.12.118.109 port 46477:11: Bye Bye [preauth] Feb 9 12:04:47.858538 sshd[1715]: Disconnected from invalid user yolanda 198.12.118.109 port 46477 [preauth] Feb 9 12:04:47.860924 systemd[1]: sshd@6-139.178.89.23:22-198.12.118.109:46477.service: Deactivated successfully. 
Feb 9 12:04:48.235120 sshd[1707]: Received disconnect from 209.97.179.25 port 51650:11: Bye Bye [preauth] Feb 9 12:04:48.235120 sshd[1707]: Disconnected from invalid user hayoung 209.97.179.25 port 51650 [preauth] Feb 9 12:04:48.235769 systemd[1]: sshd@5-139.178.89.23:22-209.97.179.25:51650.service: Deactivated successfully. Feb 9 12:04:48.742010 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 12:04:48.746681 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 12:04:48.746911 systemd[1]: Reached target network-online.target. Feb 9 12:04:48.747814 systemd[1]: Starting docker.service... Feb 9 12:04:48.771190 env[1737]: time="2024-02-09T12:04:48.771120822Z" level=info msg="Starting up" Feb 9 12:04:48.771962 env[1737]: time="2024-02-09T12:04:48.771918623Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 12:04:48.771962 env[1737]: time="2024-02-09T12:04:48.771932872Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 12:04:48.771962 env[1737]: time="2024-02-09T12:04:48.771948524Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 12:04:48.771962 env[1737]: time="2024-02-09T12:04:48.771956976Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 12:04:48.772984 env[1737]: time="2024-02-09T12:04:48.772947731Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 12:04:48.772984 env[1737]: time="2024-02-09T12:04:48.772959620Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 12:04:48.772984 env[1737]: time="2024-02-09T12:04:48.772970607Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 12:04:48.772984 env[1737]: time="2024-02-09T12:04:48.772977860Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 12:04:49.173479 env[1737]: time="2024-02-09T12:04:49.173436568Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 12:04:49.173479 env[1737]: time="2024-02-09T12:04:49.173449550Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 12:04:49.173588 env[1737]: time="2024-02-09T12:04:49.173523376Z" level=info msg="Loading containers: start." Feb 9 12:04:49.273223 kernel: Initializing XFRM netlink socket Feb 9 12:04:49.318573 env[1737]: time="2024-02-09T12:04:49.318525609Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 12:04:49.374051 systemd-networkd[1414]: docker0: Link UP Feb 9 12:04:49.378824 env[1737]: time="2024-02-09T12:04:49.378782946Z" level=info msg="Loading containers: done." Feb 9 12:04:49.385150 env[1737]: time="2024-02-09T12:04:49.385101132Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 12:04:49.385254 env[1737]: time="2024-02-09T12:04:49.385227769Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 12:04:49.385340 env[1737]: time="2024-02-09T12:04:49.385294891Z" level=info msg="Daemon has completed initialization" Feb 9 12:04:49.394294 systemd[1]: Started docker.service. 
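With docker.service started as shown above, the properties the daemon reported during startup (version 20.10.23, overlay2 graph driver, default bridge docker0 on 172.17.0.0/16) can be cross-checked from the client side. An illustrative sketch; the Go-template format strings are ordinary docker CLI usage and an assumption, not something taken from this log:
# Confirm the daemon version and storage driver reported at startup
docker info --format '{{.ServerVersion}} / {{.Driver}}'
# Confirm the subnet assigned to the default bridge network
docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'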
Feb 9 12:04:49.400112 env[1737]: time="2024-02-09T12:04:49.400075339Z" level=info msg="API listen on /run/docker.sock" Feb 9 12:04:49.418329 systemd[1]: Reloading. Feb 9 12:04:49.447480 /usr/lib/systemd/system-generators/torcx-generator[1891]: time="2024-02-09T12:04:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 12:04:49.447495 /usr/lib/systemd/system-generators/torcx-generator[1891]: time="2024-02-09T12:04:49Z" level=info msg="torcx already run" Feb 9 12:04:49.513511 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 12:04:49.513520 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 12:04:49.527537 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 12:04:49.576544 systemd[1]: Started kubelet.service. Feb 9 12:04:49.599550 kubelet[1953]: E0209 12:04:49.599499 1953 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 12:04:49.600798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 12:04:49.600911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 12:04:50.243092 env[1559]: time="2024-02-09T12:04:50.242987637Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 12:04:50.871667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407459461.mount: Deactivated successfully. Feb 9 12:04:51.982433 systemd[1]: Started sshd@7-139.178.89.23:22-119.91.207.218:55954.service. 
Feb 9 12:04:52.533834 env[1559]: time="2024-02-09T12:04:52.533808158Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:52.534553 env[1559]: time="2024-02-09T12:04:52.534508862Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:52.535656 env[1559]: time="2024-02-09T12:04:52.535642824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:52.536697 env[1559]: time="2024-02-09T12:04:52.536685944Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:52.537105 env[1559]: time="2024-02-09T12:04:52.537090031Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 12:04:52.542914 env[1559]: time="2024-02-09T12:04:52.542866737Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 12:04:53.680907 sshd[2003]: Invalid user liuye from 119.91.207.218 port 55954 Feb 9 12:04:53.683060 sshd[2003]: pam_faillock(sshd:auth): User unknown Feb 9 12:04:53.683441 sshd[2003]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:04:53.683477 sshd[2003]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.207.218 Feb 9 12:04:53.683839 sshd[2003]: pam_faillock(sshd:auth): User unknown Feb 9 12:04:54.392844 env[1559]: time="2024-02-09T12:04:54.392799838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:54.394074 env[1559]: time="2024-02-09T12:04:54.394005334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:54.394898 env[1559]: time="2024-02-09T12:04:54.394854484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:54.396220 env[1559]: time="2024-02-09T12:04:54.396181490Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:54.396619 env[1559]: time="2024-02-09T12:04:54.396566712Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 12:04:54.402580 env[1559]: time="2024-02-09T12:04:54.402565604Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 12:04:55.289220 sshd[2003]: Failed password for invalid user liuye from 119.91.207.218 port 55954 ssh2 Feb 9 12:04:55.607324 
env[1559]: time="2024-02-09T12:04:55.607229730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:55.608019 env[1559]: time="2024-02-09T12:04:55.607975105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:55.609734 env[1559]: time="2024-02-09T12:04:55.609722102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:55.610879 env[1559]: time="2024-02-09T12:04:55.610825979Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:55.611409 env[1559]: time="2024-02-09T12:04:55.611367266Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 12:04:55.619997 env[1559]: time="2024-02-09T12:04:55.619980329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 12:04:56.509568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1534372363.mount: Deactivated successfully. Feb 9 12:04:56.648517 sshd[2003]: Received disconnect from 119.91.207.218 port 55954:11: Bye Bye [preauth] Feb 9 12:04:56.648517 sshd[2003]: Disconnected from invalid user liuye 119.91.207.218 port 55954 [preauth] Feb 9 12:04:56.649183 systemd[1]: sshd@7-139.178.89.23:22-119.91.207.218:55954.service: Deactivated successfully. Feb 9 12:04:56.799172 env[1559]: time="2024-02-09T12:04:56.799092396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:56.799699 env[1559]: time="2024-02-09T12:04:56.799684833Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:56.800424 env[1559]: time="2024-02-09T12:04:56.800412490Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:56.800981 env[1559]: time="2024-02-09T12:04:56.800969020Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:56.801408 env[1559]: time="2024-02-09T12:04:56.801363904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 12:04:56.809648 env[1559]: time="2024-02-09T12:04:56.809601726Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 12:04:57.318964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037297370.mount: Deactivated successfully. 
Feb 9 12:04:57.320438 env[1559]: time="2024-02-09T12:04:57.320417056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:57.321091 env[1559]: time="2024-02-09T12:04:57.321077795Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:57.321870 env[1559]: time="2024-02-09T12:04:57.321841399Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:57.322739 env[1559]: time="2024-02-09T12:04:57.322728050Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:04:57.322910 env[1559]: time="2024-02-09T12:04:57.322896574Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 12:04:57.328499 env[1559]: time="2024-02-09T12:04:57.328478691Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 12:04:58.048334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358011621.mount: Deactivated successfully. Feb 9 12:04:59.671557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 12:04:59.671672 systemd[1]: Stopped kubelet.service. Feb 9 12:04:59.672592 systemd[1]: Started kubelet.service. Feb 9 12:04:59.696910 kubelet[2058]: E0209 12:04:59.696834 2058 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 12:04:59.699120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 12:04:59.699203 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
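Because every containerd entry embeds an RFC 3339 timestamp in its time= field, a PullImage request and its "returns image reference" entry can be paired to estimate how long each pull took (kube-proxy above, for example, resolves roughly a second after the request). The sketch below is an illustration under the same stdin assumption; containerd prints nanoseconds, so the fraction is truncated to the microseconds Python's datetime keeps.

# Hypothetical helper: pair pull requests with their completions and report durations.
import re
import sys
from datetime import datetime

START_RE = re.compile(r'time="(?P<ts>[^"]+)" level=info msg="PullImage \\"(?P<image>[^"\\]+)\\""')
DONE_RE = re.compile(r'time="(?P<ts>[^"]+)" level=info msg="PullImage \\"(?P<image>[^"\\]+)\\" returns image reference')

def parse_ts(ts):
    # Trim the trailing Z and keep six fractional digits for datetime.
    head, _, frac = ts.rstrip("Z").partition(".")
    return datetime.fromisoformat(f"{head}.{frac[:6]}")

def pull_durations(text):
    starts, durations = {}, {}
    for m in START_RE.finditer(text):
        starts.setdefault(m.group("image"), parse_ts(m.group("ts")))
    for m in DONE_RE.finditer(text):
        image = m.group("image")
        if image in starts:
            durations[image] = (parse_ts(m.group("ts")) - starts[image]).total_seconds()
    return durations

if __name__ == "__main__":
    for image, seconds in pull_durations(sys.stdin.read()).items():
        print(f"{image}: pulled in {seconds:.1f}s")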
Feb 9 12:05:00.956401 env[1559]: time="2024-02-09T12:05:00.956369943Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:00.957109 env[1559]: time="2024-02-09T12:05:00.957097895Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:00.957967 env[1559]: time="2024-02-09T12:05:00.957957478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:00.958903 env[1559]: time="2024-02-09T12:05:00.958889163Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:00.959380 env[1559]: time="2024-02-09T12:05:00.959325050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 12:05:00.965714 env[1559]: time="2024-02-09T12:05:00.965660597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 12:05:01.567616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522292211.mount: Deactivated successfully. Feb 9 12:05:02.021179 env[1559]: time="2024-02-09T12:05:02.021125951Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:02.021744 env[1559]: time="2024-02-09T12:05:02.021711153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:02.022527 env[1559]: time="2024-02-09T12:05:02.022482021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:02.023461 env[1559]: time="2024-02-09T12:05:02.023427387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:02.023705 env[1559]: time="2024-02-09T12:05:02.023652379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 12:05:02.405101 systemd[1]: Started sshd@8-139.178.89.23:22-45.64.3.61:41826.service. Feb 9 12:05:03.772596 sshd[2120]: Invalid user liuye from 45.64.3.61 port 41826 Feb 9 12:05:03.773894 sshd[2120]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:03.774107 sshd[2120]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:05:03.774123 sshd[2120]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:05:03.774327 sshd[2120]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:03.999084 systemd[1]: Stopped kubelet.service. Feb 9 12:05:04.010690 systemd[1]: Reloading. 
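Interleaved with the image pulls, sshd is logging a small brute-force pattern: the same invalid user "liuye" is tried first from 119.91.207.218 and now from 45.64.3.61. A hedged sketch for tallying those attempts per source address, under the same stdin assumption as above:

# Hypothetical helper: count invalid-user and failed-password entries per remote host.
import re
import sys
from collections import Counter

INVALID_RE = re.compile(r"Invalid user (?P<user>\S+) from (?P<rhost>[0-9a-fA-F:.]+) port (?P<port>\d+)")
FAILED_RE = re.compile(r"Failed password for invalid user (?P<user>\S+) from (?P<rhost>[0-9a-fA-F:.]+)")

def summarize(text: str) -> None:
    attempts = Counter(m.group("rhost") for m in INVALID_RE.finditer(text))
    failures = Counter(m.group("rhost") for m in FAILED_RE.finditer(text))
    for rhost, count in attempts.most_common():
        print(f"{rhost}: {count} invalid-user attempt(s), {failures.get(rhost, 0)} failed password(s)")

if __name__ == "__main__":
    summarize(sys.stdin.read())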
Feb 9 12:05:04.037973 /usr/lib/systemd/system-generators/torcx-generator[2227]: time="2024-02-09T12:05:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 12:05:04.037997 /usr/lib/systemd/system-generators/torcx-generator[2227]: time="2024-02-09T12:05:04Z" level=info msg="torcx already run" Feb 9 12:05:04.094261 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 12:05:04.094269 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 12:05:04.106434 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 12:05:04.158741 systemd[1]: Started kubelet.service. Feb 9 12:05:04.180682 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 12:05:04.180682 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 12:05:04.180682 kubelet[2292]: I0209 12:05:04.180671 2292 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 12:05:04.181415 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 12:05:04.181415 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 12:05:04.425415 kubelet[2292]: I0209 12:05:04.425383 2292 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 12:05:04.425415 kubelet[2292]: I0209 12:05:04.425410 2292 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 12:05:04.425562 kubelet[2292]: I0209 12:05:04.425535 2292 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 12:05:04.426925 kubelet[2292]: I0209 12:05:04.426914 2292 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 12:05:04.427411 kubelet[2292]: E0209 12:05:04.427375 2292 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.89.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.450664 kubelet[2292]: I0209 12:05:04.450627 2292 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 12:05:04.450896 kubelet[2292]: I0209 12:05:04.450862 2292 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 12:05:04.450927 kubelet[2292]: I0209 12:05:04.450913 2292 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 12:05:04.450927 kubelet[2292]: I0209 12:05:04.450921 2292 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 12:05:04.450927 kubelet[2292]: I0209 12:05:04.450928 2292 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 12:05:04.451017 kubelet[2292]: I0209 12:05:04.450973 2292 state_mem.go:36] "Initialized new in-memory state store" Feb 9 12:05:04.452646 kubelet[2292]: I0209 12:05:04.452637 2292 kubelet.go:398] "Attempting to sync node with API server" Feb 9 12:05:04.452682 kubelet[2292]: I0209 12:05:04.452650 2292 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 12:05:04.452682 kubelet[2292]: I0209 12:05:04.452663 2292 kubelet.go:297] "Adding apiserver pod source" Feb 9 12:05:04.452682 kubelet[2292]: I0209 12:05:04.452671 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 12:05:04.452910 kubelet[2292]: I0209 12:05:04.452902 2292 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 12:05:04.452939 kubelet[2292]: W0209 12:05:04.452921 2292 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.89.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.452959 kubelet[2292]: E0209 12:05:04.452952 2292 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.89.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.452979 kubelet[2292]: W0209 12:05:04.452956 2292 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://139.178.89.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b58f4ff548&limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.453014 kubelet[2292]: E0209 12:05:04.452984 2292 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.89.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b58f4ff548&limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.453061 kubelet[2292]: W0209 12:05:04.453055 2292 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 12:05:04.453325 kubelet[2292]: I0209 12:05:04.453296 2292 server.go:1186] "Started kubelet" Feb 9 12:05:04.453387 kubelet[2292]: I0209 12:05:04.453379 2292 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 12:05:04.453566 kubelet[2292]: E0209 12:05:04.453486 2292 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a4fe9ebe8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 453282792, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 453282792, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://139.178.89.23:6443/api/v1/namespaces/default/events": dial tcp 139.178.89.23:6443: connect: connection refused'(may retry after sleeping) Feb 9 12:05:04.453646 kubelet[2292]: E0209 12:05:04.453572 2292 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 12:05:04.453646 kubelet[2292]: E0209 12:05:04.453583 2292 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 12:05:04.453865 kubelet[2292]: I0209 12:05:04.453859 2292 server.go:451] "Adding debug handlers to kubelet server" Feb 9 12:05:04.463253 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 12:05:04.463305 kubelet[2292]: I0209 12:05:04.463299 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 12:05:04.463469 kubelet[2292]: I0209 12:05:04.463410 2292 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 12:05:04.463469 kubelet[2292]: I0209 12:05:04.463459 2292 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 12:05:04.463655 kubelet[2292]: W0209 12:05:04.463628 2292 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.89.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.463688 kubelet[2292]: E0209 12:05:04.463635 2292 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://139.178.89.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b58f4ff548?timeout=10s": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.463688 kubelet[2292]: E0209 12:05:04.463664 2292 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.89.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.482720 kubelet[2292]: I0209 12:05:04.482679 2292 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 12:05:04.493163 kubelet[2292]: I0209 12:05:04.493120 2292 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 12:05:04.493163 kubelet[2292]: I0209 12:05:04.493130 2292 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 12:05:04.493163 kubelet[2292]: I0209 12:05:04.493139 2292 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 12:05:04.493259 kubelet[2292]: E0209 12:05:04.493168 2292 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 12:05:04.493422 kubelet[2292]: W0209 12:05:04.493397 2292 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.89.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.493471 kubelet[2292]: E0209 12:05:04.493429 2292 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.89.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.593451 kubelet[2292]: E0209 12:05:04.593347 2292 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 12:05:04.621380 kubelet[2292]: I0209 12:05:04.621334 2292 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.622022 kubelet[2292]: E0209 12:05:04.621989 2292 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.89.23:6443/api/v1/nodes\": dial tcp 139.178.89.23:6443: connect: connection refused" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.622502 kubelet[2292]: I0209 12:05:04.622464 2292 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 12:05:04.622502 
kubelet[2292]: I0209 12:05:04.622506 2292 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 12:05:04.622759 kubelet[2292]: I0209 12:05:04.622542 2292 state_mem.go:36] "Initialized new in-memory state store" Feb 9 12:05:04.624271 kubelet[2292]: I0209 12:05:04.624235 2292 policy_none.go:49] "None policy: Start" Feb 9 12:05:04.625327 kubelet[2292]: I0209 12:05:04.625287 2292 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 12:05:04.625475 kubelet[2292]: I0209 12:05:04.625338 2292 state_mem.go:35] "Initializing new in-memory state store" Feb 9 12:05:04.636788 kubelet[2292]: I0209 12:05:04.636709 2292 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 12:05:04.637305 kubelet[2292]: I0209 12:05:04.637266 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 12:05:04.638116 kubelet[2292]: E0209 12:05:04.638077 2292 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-b58f4ff548\" not found" Feb 9 12:05:04.665105 kubelet[2292]: E0209 12:05:04.664983 2292 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://139.178.89.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b58f4ff548?timeout=10s": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:04.794284 kubelet[2292]: I0209 12:05:04.794050 2292 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:04.798098 kubelet[2292]: I0209 12:05:04.798025 2292 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:04.803729 kubelet[2292]: I0209 12:05:04.803660 2292 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:04.805271 kubelet[2292]: I0209 12:05:04.805191 2292 status_manager.go:698] "Failed to get status for pod" podUID=8229ca27448aae1601ce8167cbc6a357 pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" err="Get \"https://139.178.89.23:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-b58f4ff548\": dial tcp 139.178.89.23:6443: connect: connection refused" Feb 9 12:05:04.806386 kubelet[2292]: I0209 12:05:04.806353 2292 status_manager.go:698] "Failed to get status for pod" podUID=d2e689ac28a222af744e3ac2040f9341 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" err="Get \"https://139.178.89.23:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-b58f4ff548\": dial tcp 139.178.89.23:6443: connect: connection refused" Feb 9 12:05:04.807201 kubelet[2292]: I0209 12:05:04.807163 2292 status_manager.go:698] "Failed to get status for pod" podUID=7865da3cc9442487a288a285a9056548 pod="kube-system/kube-scheduler-ci-3510.3.2-a-b58f4ff548" err="Get \"https://139.178.89.23:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-b58f4ff548\": dial tcp 139.178.89.23:6443: connect: connection refused" Feb 9 12:05:04.823574 kubelet[2292]: I0209 12:05:04.823533 2292 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.823752 kubelet[2292]: E0209 12:05:04.823713 2292 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.89.23:6443/api/v1/nodes\": dial tcp 139.178.89.23:6443: connect: connection refused" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.865145 kubelet[2292]: I0209 12:05:04.865085 2292 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8229ca27448aae1601ce8167cbc6a357-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b58f4ff548\" (UID: \"8229ca27448aae1601ce8167cbc6a357\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.865291 kubelet[2292]: I0209 12:05:04.865157 2292 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8229ca27448aae1601ce8167cbc6a357-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b58f4ff548\" (UID: \"8229ca27448aae1601ce8167cbc6a357\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.865291 kubelet[2292]: I0209 12:05:04.865194 2292 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.865291 kubelet[2292]: I0209 12:05:04.865240 2292 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.865291 kubelet[2292]: I0209 12:05:04.865274 2292 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.865478 kubelet[2292]: I0209 12:05:04.865317 2292 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.865478 kubelet[2292]: I0209 12:05:04.865347 2292 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7865da3cc9442487a288a285a9056548-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-b58f4ff548\" (UID: \"7865da3cc9442487a288a285a9056548\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.865478 kubelet[2292]: I0209 12:05:04.865387 2292 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8229ca27448aae1601ce8167cbc6a357-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-b58f4ff548\" (UID: \"8229ca27448aae1601ce8167cbc6a357\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:04.865478 kubelet[2292]: I0209 12:05:04.865416 2292 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-ca-certs\") pod 
\"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:05.066984 kubelet[2292]: E0209 12:05:05.066748 2292 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://139.178.89.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b58f4ff548?timeout=10s": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:05.108974 env[1559]: time="2024-02-09T12:05:05.108837535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-b58f4ff548,Uid:8229ca27448aae1601ce8167cbc6a357,Namespace:kube-system,Attempt:0,}" Feb 9 12:05:05.108974 env[1559]: time="2024-02-09T12:05:05.108869067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-b58f4ff548,Uid:d2e689ac28a222af744e3ac2040f9341,Namespace:kube-system,Attempt:0,}" Feb 9 12:05:05.109948 env[1559]: time="2024-02-09T12:05:05.109293770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-b58f4ff548,Uid:7865da3cc9442487a288a285a9056548,Namespace:kube-system,Attempt:0,}" Feb 9 12:05:05.227894 kubelet[2292]: I0209 12:05:05.227808 2292 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:05.228666 kubelet[2292]: E0209 12:05:05.228510 2292 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.89.23:6443/api/v1/nodes\": dial tcp 139.178.89.23:6443: connect: connection refused" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:05.274257 kubelet[2292]: W0209 12:05:05.274105 2292 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.89.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:05.274257 kubelet[2292]: E0209 12:05:05.274244 2292 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.89.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:05.622983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684542100.mount: Deactivated successfully. 
Feb 9 12:05:05.624564 env[1559]: time="2024-02-09T12:05:05.624507002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.625348 env[1559]: time="2024-02-09T12:05:05.625314120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.625938 env[1559]: time="2024-02-09T12:05:05.625903480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.626616 env[1559]: time="2024-02-09T12:05:05.626569473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.626983 env[1559]: time="2024-02-09T12:05:05.626972401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.627733 env[1559]: time="2024-02-09T12:05:05.627720516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.628960 env[1559]: time="2024-02-09T12:05:05.628920829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.630614 env[1559]: time="2024-02-09T12:05:05.630591840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.630977 env[1559]: time="2024-02-09T12:05:05.630965024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.632157 env[1559]: time="2024-02-09T12:05:05.632146282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.632675 env[1559]: time="2024-02-09T12:05:05.632627512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.633181 env[1559]: time="2024-02-09T12:05:05.633168600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:05.638064 env[1559]: time="2024-02-09T12:05:05.638029813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:05:05.638064 env[1559]: time="2024-02-09T12:05:05.638051955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:05:05.638064 env[1559]: time="2024-02-09T12:05:05.638058926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:05:05.638177 env[1559]: time="2024-02-09T12:05:05.638123777Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d69a6cf196490de49fa4bf616ca9ac227ad13b6531337947b07c15e1521cf25 pid=2378 runtime=io.containerd.runc.v2 Feb 9 12:05:05.639961 env[1559]: time="2024-02-09T12:05:05.639912962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:05:05.639961 env[1559]: time="2024-02-09T12:05:05.639940463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:05:05.639961 env[1559]: time="2024-02-09T12:05:05.639947734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:05:05.640108 env[1559]: time="2024-02-09T12:05:05.640021992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ecd4cd45080cc5776327c8ef56ce98289a566d6b3caa52c8c20ba39f72a5f1b pid=2401 runtime=io.containerd.runc.v2 Feb 9 12:05:05.641068 env[1559]: time="2024-02-09T12:05:05.641044226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:05:05.641068 env[1559]: time="2024-02-09T12:05:05.641061774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:05:05.641118 env[1559]: time="2024-02-09T12:05:05.641068386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:05:05.641141 env[1559]: time="2024-02-09T12:05:05.641123872Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8025f8d4d66fe9aa281db6b14b830eed5d01002b47f345b2a573c5756785e46e pid=2411 runtime=io.containerd.runc.v2 Feb 9 12:05:05.680004 env[1559]: time="2024-02-09T12:05:05.679964973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-b58f4ff548,Uid:7865da3cc9442487a288a285a9056548,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d69a6cf196490de49fa4bf616ca9ac227ad13b6531337947b07c15e1521cf25\"" Feb 9 12:05:05.680118 env[1559]: time="2024-02-09T12:05:05.680039276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-b58f4ff548,Uid:8229ca27448aae1601ce8167cbc6a357,Namespace:kube-system,Attempt:0,} returns sandbox id \"8025f8d4d66fe9aa281db6b14b830eed5d01002b47f345b2a573c5756785e46e\"" Feb 9 12:05:05.680118 env[1559]: time="2024-02-09T12:05:05.679994760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-b58f4ff548,Uid:d2e689ac28a222af744e3ac2040f9341,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ecd4cd45080cc5776327c8ef56ce98289a566d6b3caa52c8c20ba39f72a5f1b\"" Feb 9 12:05:05.681509 env[1559]: time="2024-02-09T12:05:05.681496862Z" level=info msg="CreateContainer within sandbox \"8ecd4cd45080cc5776327c8ef56ce98289a566d6b3caa52c8c20ba39f72a5f1b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 12:05:05.681509 env[1559]: time="2024-02-09T12:05:05.681498759Z" level=info msg="CreateContainer within sandbox \"8025f8d4d66fe9aa281db6b14b830eed5d01002b47f345b2a573c5756785e46e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 12:05:05.681566 env[1559]: time="2024-02-09T12:05:05.681504876Z" level=info msg="CreateContainer within sandbox \"7d69a6cf196490de49fa4bf616ca9ac227ad13b6531337947b07c15e1521cf25\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 12:05:05.687842 env[1559]: time="2024-02-09T12:05:05.687798690Z" level=info msg="CreateContainer within sandbox \"8025f8d4d66fe9aa281db6b14b830eed5d01002b47f345b2a573c5756785e46e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1020bce5009ac287d17a6659949f84846282dbdd6594f77376b7a31fc6d06e9c\"" Feb 9 12:05:05.688052 env[1559]: time="2024-02-09T12:05:05.688017242Z" level=info msg="StartContainer for \"1020bce5009ac287d17a6659949f84846282dbdd6594f77376b7a31fc6d06e9c\"" Feb 9 12:05:05.689288 env[1559]: time="2024-02-09T12:05:05.689249303Z" level=info msg="CreateContainer within sandbox \"7d69a6cf196490de49fa4bf616ca9ac227ad13b6531337947b07c15e1521cf25\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"65b3a892c5aef0286526100b9b04a82ca264d022c1d6a72dad6914334a4c4667\"" Feb 9 12:05:05.689729 env[1559]: time="2024-02-09T12:05:05.689703948Z" level=info msg="StartContainer for \"65b3a892c5aef0286526100b9b04a82ca264d022c1d6a72dad6914334a4c4667\"" Feb 9 12:05:05.689931 env[1559]: time="2024-02-09T12:05:05.689763727Z" level=info msg="CreateContainer within sandbox \"8ecd4cd45080cc5776327c8ef56ce98289a566d6b3caa52c8c20ba39f72a5f1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"421264c12facd9dc5e38545e58441bafe1db9586d98e7256e1a140f9decb6979\"" Feb 9 12:05:05.690109 env[1559]: time="2024-02-09T12:05:05.690092924Z" level=info 
msg="StartContainer for \"421264c12facd9dc5e38545e58441bafe1db9586d98e7256e1a140f9decb6979\"" Feb 9 12:05:05.733227 kubelet[2292]: W0209 12:05:05.733098 2292 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.89.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:05.733454 kubelet[2292]: E0209 12:05:05.733272 2292 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.89.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Feb 9 12:05:05.774791 env[1559]: time="2024-02-09T12:05:05.774761110Z" level=info msg="StartContainer for \"65b3a892c5aef0286526100b9b04a82ca264d022c1d6a72dad6914334a4c4667\" returns successfully" Feb 9 12:05:05.775027 env[1559]: time="2024-02-09T12:05:05.775009267Z" level=info msg="StartContainer for \"1020bce5009ac287d17a6659949f84846282dbdd6594f77376b7a31fc6d06e9c\" returns successfully" Feb 9 12:05:05.775185 env[1559]: time="2024-02-09T12:05:05.775165679Z" level=info msg="StartContainer for \"421264c12facd9dc5e38545e58441bafe1db9586d98e7256e1a140f9decb6979\" returns successfully" Feb 9 12:05:06.030103 kubelet[2292]: I0209 12:05:06.030088 2292 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:06.086322 sshd[2120]: Failed password for invalid user liuye from 45.64.3.61 port 41826 ssh2 Feb 9 12:05:06.601002 kubelet[2292]: E0209 12:05:06.600985 2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-b58f4ff548\" not found" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:06.700399 kubelet[2292]: I0209 12:05:06.700332 2292 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:06.836825 sshd[2120]: Received disconnect from 45.64.3.61 port 41826:11: Bye Bye [preauth] Feb 9 12:05:06.836825 sshd[2120]: Disconnected from invalid user liuye 45.64.3.61 port 41826 [preauth] Feb 9 12:05:06.838999 systemd[1]: sshd@8-139.178.89.23:22-45.64.3.61:41826.service: Deactivated successfully. 
Feb 9 12:05:06.963359 kubelet[2292]: E0209 12:05:06.963115 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a4fe9ebe8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 453282792, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 453282792, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 12:05:07.021012 kubelet[2292]: E0209 12:05:07.020795 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a4fee6bca", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 453577674, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 453577674, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 12:05:07.077374 kubelet[2292]: E0209 12:05:07.077126 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a59ecd13a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-b58f4ff548 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621244730, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621244730, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 12:05:07.139445 kubelet[2292]: E0209 12:05:07.139196 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a59ecd13a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-b58f4ff548 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621244730, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621245794, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 12:05:07.196670 kubelet[2292]: E0209 12:05:07.196419 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a59ed1372", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-b58f4ff548 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621261682, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621261682, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 12:05:07.257350 kubelet[2292]: E0209 12:05:07.257030 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a59ed1372", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-b58f4ff548 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621261682, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621271191, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 12:05:07.314980 kubelet[2292]: E0209 12:05:07.314757 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a59ed3d35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-b58f4ff548 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621272373, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621272373, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 12:05:07.376733 kubelet[2292]: E0209 12:05:07.376530 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a59ed3d35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-b58f4ff548 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621272373, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621279161, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 12:05:07.433733 kubelet[2292]: E0209 12:05:07.433505 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a5ae660ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 637599981, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 637599981, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 12:05:07.454997 kubelet[2292]: I0209 12:05:07.454895 2292 apiserver.go:52] "Watching apiserver" Feb 9 12:05:07.664248 kubelet[2292]: I0209 12:05:07.664039 2292 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 12:05:07.684546 kubelet[2292]: I0209 12:05:07.684494 2292 reconciler.go:41] "Reconciler: start to sync state" Feb 9 12:05:07.761378 kubelet[2292]: E0209 12:05:07.761146 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a59ecd13a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-b58f4ff548 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621244730, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 797836368, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
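The repeated "Server rejected event" dumps above all fail for the same reason: the node lifecycle events (Starting, InvalidDiskCapacity, NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeAllocatableEnforced) are posted before the "default" namespace exists, so the apiserver rejects them and the kubelet will not retry. A hedged sketch that condenses each rejection to one line, with the usual stdin assumption:

# Hypothetical helper: summarize rejected kubelet events as reason/count/cause.
import re
import sys

EVENT_RE = re.compile(
    r"Server rejected event .*?Reason:\"(?P<reason>[^\"]+)\".*?"
    r"Count:(?P<count>\d+), Type:\"(?P<type>[^\"]+)\".*?': '(?P<why>[^']+)'"
)

def rejected_events(text):
    for m in EVENT_RE.finditer(text):
        yield m.group("reason"), int(m.group("count")), m.group("type"), m.group("why")

if __name__ == "__main__":
    for reason, count, etype, why in rejected_events(sys.stdin.read()):
        print(f"{etype}/{reason} (count={count}): rejected because {why}")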
Feb 9 12:05:08.162417 kubelet[2292]: E0209 12:05:08.162245 2292 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b58f4ff548.17b2304a59ed1372", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b58f4ff548", UID:"ci-3510.3.2-a-b58f4ff548", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-b58f4ff548 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b58f4ff548"}, FirstTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 621261682, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 12, 5, 4, 797851807, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 12:05:09.833492 systemd[1]: Reloading. Feb 9 12:05:09.885140 /usr/lib/systemd/system-generators/torcx-generator[2666]: time="2024-02-09T12:05:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 12:05:09.885163 /usr/lib/systemd/system-generators/torcx-generator[2666]: time="2024-02-09T12:05:09Z" level=info msg="torcx already run" Feb 9 12:05:09.970166 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 12:05:09.970176 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 12:05:09.985374 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 12:05:10.040957 systemd[1]: Stopping kubelet.service... Feb 9 12:05:10.058677 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 12:05:10.058834 systemd[1]: Stopped kubelet.service. Feb 9 12:05:10.059781 systemd[1]: Started kubelet.service. Feb 9 12:05:10.083236 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 12:05:10.083236 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
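The bracketed PIDs in the kubelet entries trace the restart sequence: kubelet[2058] exits on the flag-validation error, kubelet[2292] runs through the node bootstrap, and after the systemd reload kubelet[2733] takes over. A small, illustrative way to pull that sequence out of the journal text:

# Hypothetical helper: list each kubelet pid in the order it first appears.
import re
import sys

KUBELET_PID_RE = re.compile(r"kubelet\[(?P<pid>\d+)\]")

def kubelet_pids_in_order(text):
    """Return each kubelet pid the first time it appears, preserving order."""
    seen, ordered = set(), []
    for m in KUBELET_PID_RE.finditer(text):
        pid = int(m.group("pid"))
        if pid not in seen:
            seen.add(pid)
            ordered.append(pid)
    return ordered

if __name__ == "__main__":
    print(" -> ".join(str(p) for p in kubelet_pids_in_order(sys.stdin.read())))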
Feb 9 12:05:10.083443 kubelet[2733]: I0209 12:05:10.083241 2733 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 12:05:10.084074 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 12:05:10.084074 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 12:05:10.085900 kubelet[2733]: I0209 12:05:10.085888 2733 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 12:05:10.085900 kubelet[2733]: I0209 12:05:10.085899 2733 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 12:05:10.086024 kubelet[2733]: I0209 12:05:10.086017 2733 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 12:05:10.086742 kubelet[2733]: I0209 12:05:10.086733 2733 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 12:05:10.087217 kubelet[2733]: I0209 12:05:10.087208 2733 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 12:05:10.104168 kubelet[2733]: I0209 12:05:10.104154 2733 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 12:05:10.104400 kubelet[2733]: I0209 12:05:10.104391 2733 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 12:05:10.104453 kubelet[2733]: I0209 12:05:10.104448 2733 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 12:05:10.104529 kubelet[2733]: I0209 12:05:10.104467 2733 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 12:05:10.104529 kubelet[2733]: I0209 12:05:10.104479 2733 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 12:05:10.104529 kubelet[2733]: I0209 12:05:10.104505 2733 state_mem.go:36] "Initialized new in-memory state 
store" Feb 9 12:05:10.106102 kubelet[2733]: I0209 12:05:10.106093 2733 kubelet.go:398] "Attempting to sync node with API server" Feb 9 12:05:10.106145 kubelet[2733]: I0209 12:05:10.106104 2733 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 12:05:10.106145 kubelet[2733]: I0209 12:05:10.106120 2733 kubelet.go:297] "Adding apiserver pod source" Feb 9 12:05:10.106145 kubelet[2733]: I0209 12:05:10.106129 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 12:05:10.106415 kubelet[2733]: I0209 12:05:10.106403 2733 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 12:05:10.106763 kubelet[2733]: I0209 12:05:10.106751 2733 server.go:1186] "Started kubelet" Feb 9 12:05:10.106820 kubelet[2733]: I0209 12:05:10.106809 2733 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 12:05:10.106970 kubelet[2733]: E0209 12:05:10.106957 2733 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 12:05:10.107015 kubelet[2733]: E0209 12:05:10.106980 2733 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 12:05:10.107680 kubelet[2733]: I0209 12:05:10.107672 2733 server.go:451] "Adding debug handlers to kubelet server" Feb 9 12:05:10.108284 kubelet[2733]: I0209 12:05:10.108259 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 12:05:10.108479 kubelet[2733]: I0209 12:05:10.108463 2733 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 12:05:10.108586 kubelet[2733]: I0209 12:05:10.108500 2733 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 12:05:10.120683 kubelet[2733]: I0209 12:05:10.120669 2733 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 12:05:10.127706 kubelet[2733]: I0209 12:05:10.127691 2733 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 12:05:10.127706 kubelet[2733]: I0209 12:05:10.127706 2733 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 12:05:10.127836 kubelet[2733]: I0209 12:05:10.127720 2733 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 12:05:10.127836 kubelet[2733]: E0209 12:05:10.127763 2733 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 12:05:10.143146 kubelet[2733]: I0209 12:05:10.143129 2733 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 12:05:10.143146 kubelet[2733]: I0209 12:05:10.143143 2733 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 12:05:10.143146 kubelet[2733]: I0209 12:05:10.143154 2733 state_mem.go:36] "Initialized new in-memory state store" Feb 9 12:05:10.143275 kubelet[2733]: I0209 12:05:10.143260 2733 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 12:05:10.143275 kubelet[2733]: I0209 12:05:10.143269 2733 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 12:05:10.143275 kubelet[2733]: I0209 12:05:10.143273 2733 policy_none.go:49] "None policy: Start" Feb 9 12:05:10.143575 kubelet[2733]: I0209 12:05:10.143567 2733 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 12:05:10.143614 kubelet[2733]: I0209 12:05:10.143579 2733 state_mem.go:35] "Initializing new in-memory state store" Feb 9 12:05:10.143662 kubelet[2733]: I0209 12:05:10.143656 2733 state_mem.go:75] "Updated machine memory state" Feb 9 12:05:10.144363 kubelet[2733]: I0209 12:05:10.144358 2733 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 12:05:10.144516 kubelet[2733]: I0209 12:05:10.144477 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 12:05:10.157614 sudo[2796]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 12:05:10.157790 sudo[2796]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 12:05:10.210994 kubelet[2733]: I0209 12:05:10.210979 2733 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.216064 kubelet[2733]: I0209 12:05:10.216023 2733 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.216064 kubelet[2733]: I0209 12:05:10.216057 2733 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.228186 kubelet[2733]: I0209 12:05:10.228142 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:10.228249 kubelet[2733]: I0209 12:05:10.228189 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:10.228249 kubelet[2733]: I0209 12:05:10.228214 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:10.231260 kubelet[2733]: E0209 12:05:10.231209 2733 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-b58f4ff548\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.309958 kubelet[2733]: I0209 12:05:10.309907 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.309958 kubelet[2733]: E0209 12:05:10.309911 2733 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.309958 kubelet[2733]: I0209 12:05:10.309928 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.310071 kubelet[2733]: I0209 12:05:10.309978 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7865da3cc9442487a288a285a9056548-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-b58f4ff548\" (UID: \"7865da3cc9442487a288a285a9056548\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.310071 kubelet[2733]: I0209 12:05:10.310005 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.310071 kubelet[2733]: I0209 12:05:10.310028 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8229ca27448aae1601ce8167cbc6a357-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b58f4ff548\" (UID: \"8229ca27448aae1601ce8167cbc6a357\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.310071 kubelet[2733]: I0209 12:05:10.310053 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8229ca27448aae1601ce8167cbc6a357-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-b58f4ff548\" (UID: \"8229ca27448aae1601ce8167cbc6a357\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.310146 kubelet[2733]: I0209 12:05:10.310074 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.310146 kubelet[2733]: I0209 12:05:10.310094 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2e689ac28a222af744e3ac2040f9341-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" (UID: \"d2e689ac28a222af744e3ac2040f9341\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.310146 kubelet[2733]: I0209 12:05:10.310109 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/8229ca27448aae1601ce8167cbc6a357-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b58f4ff548\" (UID: \"8229ca27448aae1601ce8167cbc6a357\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:10.486995 sudo[2796]: pam_unix(sudo:session): session closed for user root Feb 9 12:05:10.511081 kubelet[2733]: E0209 12:05:10.511063 2733 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-b58f4ff548\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:11.107451 kubelet[2733]: I0209 12:05:11.107353 2733 apiserver.go:52] "Watching apiserver" Feb 9 12:05:11.208910 kubelet[2733]: I0209 12:05:11.208845 2733 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 12:05:11.213197 kubelet[2733]: I0209 12:05:11.213085 2733 reconciler.go:41] "Reconciler: start to sync state" Feb 9 12:05:11.453974 sudo[1710]: pam_unix(sudo:session): session closed for user root Feb 9 12:05:11.454865 sshd[1703]: pam_unix(sshd:session): session closed for user core Feb 9 12:05:11.456335 systemd[1]: sshd@4-139.178.89.23:22-147.75.109.163:57560.service: Deactivated successfully. Feb 9 12:05:11.457006 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. Feb 9 12:05:11.457022 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 12:05:11.457707 systemd-logind[1545]: Removed session 7. Feb 9 12:05:11.515596 kubelet[2733]: E0209 12:05:11.515496 2733 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-b58f4ff548\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:11.715680 kubelet[2733]: E0209 12:05:11.715470 2733 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-b58f4ff548\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:11.914604 kubelet[2733]: E0209 12:05:11.914544 2733 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-b58f4ff548\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-b58f4ff548" Feb 9 12:05:12.116969 kubelet[2733]: I0209 12:05:12.116888 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b58f4ff548" podStartSLOduration=5.116864788 pod.CreationTimestamp="2024-02-09 12:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 12:05:12.116821605 +0000 UTC m=+2.055337555" watchObservedRunningTime="2024-02-09 12:05:12.116864788 +0000 UTC m=+2.055380734" Feb 9 12:05:12.914640 kubelet[2733]: I0209 12:05:12.914596 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b58f4ff548" podStartSLOduration=4.914573746 pod.CreationTimestamp="2024-02-09 12:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 12:05:12.515412466 +0000 UTC m=+2.453928415" watchObservedRunningTime="2024-02-09 12:05:12.914573746 +0000 UTC m=+2.853089696" Feb 9 12:05:18.291542 kubelet[2733]: I0209 12:05:18.291526 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-b58f4ff548" podStartSLOduration=10.291496957 pod.CreationTimestamp="2024-02-09 
12:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 12:05:12.914672967 +0000 UTC m=+2.853188917" watchObservedRunningTime="2024-02-09 12:05:18.291496957 +0000 UTC m=+8.230012907" Feb 9 12:05:21.884476 update_engine[1547]: I0209 12:05:21.884373 1547 update_attempter.cc:509] Updating boot flags... Feb 9 12:05:22.261306 systemd[1]: Started sshd@9-139.178.89.23:22-180.107.140.47:36612.service. Feb 9 12:05:22.612998 kubelet[2733]: I0209 12:05:22.612869 2733 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 12:05:22.613920 env[1559]: time="2024-02-09T12:05:22.613422232Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 12:05:22.614698 kubelet[2733]: I0209 12:05:22.613920 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 12:05:23.509469 kubelet[2733]: I0209 12:05:23.509384 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:23.516376 kubelet[2733]: I0209 12:05:23.516286 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:23.596950 kubelet[2733]: I0209 12:05:23.596878 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-lib-modules\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.596950 kubelet[2733]: I0209 12:05:23.596949 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4daafc3d-df67-4944-bd59-55ff6abf45b3-kube-proxy\") pod \"kube-proxy-2rkfx\" (UID: \"4daafc3d-df67-4944-bd59-55ff6abf45b3\") " pod="kube-system/kube-proxy-2rkfx" Feb 9 12:05:23.597312 kubelet[2733]: I0209 12:05:23.596997 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-host-proc-sys-net\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.597312 kubelet[2733]: I0209 12:05:23.597084 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-hubble-tls\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.597312 kubelet[2733]: I0209 12:05:23.597249 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4daafc3d-df67-4944-bd59-55ff6abf45b3-xtables-lock\") pod \"kube-proxy-2rkfx\" (UID: \"4daafc3d-df67-4944-bd59-55ff6abf45b3\") " pod="kube-system/kube-proxy-2rkfx" Feb 9 12:05:23.597312 kubelet[2733]: I0209 12:05:23.597301 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-etc-cni-netd\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.597606 kubelet[2733]: I0209 12:05:23.597423 2733 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84k22\" (UniqueName: \"kubernetes.io/projected/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-kube-api-access-84k22\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.597606 kubelet[2733]: I0209 12:05:23.597493 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4daafc3d-df67-4944-bd59-55ff6abf45b3-lib-modules\") pod \"kube-proxy-2rkfx\" (UID: \"4daafc3d-df67-4944-bd59-55ff6abf45b3\") " pod="kube-system/kube-proxy-2rkfx" Feb 9 12:05:23.597606 kubelet[2733]: I0209 12:05:23.597548 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-bpf-maps\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.597606 kubelet[2733]: I0209 12:05:23.597590 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-hostproc\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.597876 kubelet[2733]: I0209 12:05:23.597631 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cni-path\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.597876 kubelet[2733]: I0209 12:05:23.597678 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtfpz\" (UniqueName: \"kubernetes.io/projected/4daafc3d-df67-4944-bd59-55ff6abf45b3-kube-api-access-rtfpz\") pod \"kube-proxy-2rkfx\" (UID: \"4daafc3d-df67-4944-bd59-55ff6abf45b3\") " pod="kube-system/kube-proxy-2rkfx" Feb 9 12:05:23.597876 kubelet[2733]: I0209 12:05:23.597720 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-clustermesh-secrets\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.597876 kubelet[2733]: I0209 12:05:23.597795 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-xtables-lock\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.597876 kubelet[2733]: I0209 12:05:23.597859 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-cgroup\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.598273 kubelet[2733]: I0209 12:05:23.597997 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-run\") pod 
\"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.598273 kubelet[2733]: I0209 12:05:23.598081 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-config-path\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.598273 kubelet[2733]: I0209 12:05:23.598184 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-host-proc-sys-kernel\") pod \"cilium-2jbsm\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " pod="kube-system/cilium-2jbsm" Feb 9 12:05:23.649498 kubelet[2733]: I0209 12:05:23.649415 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:23.699086 kubelet[2733]: I0209 12:05:23.699045 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7wh8\" (UniqueName: \"kubernetes.io/projected/132653b9-67de-4d54-9702-f91f31f47975-kube-api-access-q7wh8\") pod \"cilium-operator-f59cbd8c6-7p6hl\" (UID: \"132653b9-67de-4d54-9702-f91f31f47975\") " pod="kube-system/cilium-operator-f59cbd8c6-7p6hl" Feb 9 12:05:23.699403 kubelet[2733]: I0209 12:05:23.699373 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/132653b9-67de-4d54-9702-f91f31f47975-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-7p6hl\" (UID: \"132653b9-67de-4d54-9702-f91f31f47975\") " pod="kube-system/cilium-operator-f59cbd8c6-7p6hl" Feb 9 12:05:23.944438 sshd[2917]: Invalid user wangxiao from 180.107.140.47 port 36612 Feb 9 12:05:23.950452 sshd[2917]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:23.951507 sshd[2917]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:05:23.951593 sshd[2917]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.107.140.47 Feb 9 12:05:23.952523 sshd[2917]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:24.116832 env[1559]: time="2024-02-09T12:05:24.116698608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2rkfx,Uid:4daafc3d-df67-4944-bd59-55ff6abf45b3,Namespace:kube-system,Attempt:0,}" Feb 9 12:05:24.133868 env[1559]: time="2024-02-09T12:05:24.133809021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:05:24.133868 env[1559]: time="2024-02-09T12:05:24.133829646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:05:24.133868 env[1559]: time="2024-02-09T12:05:24.133839885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:05:24.133979 env[1559]: time="2024-02-09T12:05:24.133900152Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0e23c304b16bbbf8a2512efb22790767424ae38d83ee67bbe463917551a294f pid=2931 runtime=io.containerd.runc.v2 Feb 9 12:05:24.175268 env[1559]: time="2024-02-09T12:05:24.175210738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2rkfx,Uid:4daafc3d-df67-4944-bd59-55ff6abf45b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0e23c304b16bbbf8a2512efb22790767424ae38d83ee67bbe463917551a294f\"" Feb 9 12:05:24.176449 env[1559]: time="2024-02-09T12:05:24.176432201Z" level=info msg="CreateContainer within sandbox \"c0e23c304b16bbbf8a2512efb22790767424ae38d83ee67bbe463917551a294f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 12:05:24.181514 env[1559]: time="2024-02-09T12:05:24.181467626Z" level=info msg="CreateContainer within sandbox \"c0e23c304b16bbbf8a2512efb22790767424ae38d83ee67bbe463917551a294f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6563e1017a9cf3adb7712a375d5a96a7d02d2ecfd4625062767464c55c226e5d\"" Feb 9 12:05:24.181730 env[1559]: time="2024-02-09T12:05:24.181696871Z" level=info msg="StartContainer for \"6563e1017a9cf3adb7712a375d5a96a7d02d2ecfd4625062767464c55c226e5d\"" Feb 9 12:05:24.245025 env[1559]: time="2024-02-09T12:05:24.244860783Z" level=info msg="StartContainer for \"6563e1017a9cf3adb7712a375d5a96a7d02d2ecfd4625062767464c55c226e5d\" returns successfully" Feb 9 12:05:24.422551 env[1559]: time="2024-02-09T12:05:24.422422690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jbsm,Uid:2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0,Namespace:kube-system,Attempt:0,}" Feb 9 12:05:24.445587 env[1559]: time="2024-02-09T12:05:24.445401122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:05:24.445587 env[1559]: time="2024-02-09T12:05:24.445497577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:05:24.445587 env[1559]: time="2024-02-09T12:05:24.445536734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:05:24.446239 env[1559]: time="2024-02-09T12:05:24.446015979Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d pid=3049 runtime=io.containerd.runc.v2 Feb 9 12:05:24.518833 env[1559]: time="2024-02-09T12:05:24.518754458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jbsm,Uid:2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\"" Feb 9 12:05:24.519836 env[1559]: time="2024-02-09T12:05:24.519813750Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 12:05:24.558971 env[1559]: time="2024-02-09T12:05:24.558933884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-7p6hl,Uid:132653b9-67de-4d54-9702-f91f31f47975,Namespace:kube-system,Attempt:0,}" Feb 9 12:05:24.569531 env[1559]: time="2024-02-09T12:05:24.569441519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:05:24.569531 env[1559]: time="2024-02-09T12:05:24.569482130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:05:24.569531 env[1559]: time="2024-02-09T12:05:24.569496538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:05:24.569752 env[1559]: time="2024-02-09T12:05:24.569628243Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0 pid=3162 runtime=io.containerd.runc.v2 Feb 9 12:05:24.663805 env[1559]: time="2024-02-09T12:05:24.663735857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-7p6hl,Uid:132653b9-67de-4d54-9702-f91f31f47975,Namespace:kube-system,Attempt:0,} returns sandbox id \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\"" Feb 9 12:05:25.181510 kubelet[2733]: I0209 12:05:25.181446 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2rkfx" podStartSLOduration=2.181404986 pod.CreationTimestamp="2024-02-09 12:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 12:05:25.181380594 +0000 UTC m=+15.119896547" watchObservedRunningTime="2024-02-09 12:05:25.181404986 +0000 UTC m=+15.119920933" Feb 9 12:05:26.009451 sshd[2917]: Failed password for invalid user wangxiao from 180.107.140.47 port 36612 ssh2 Feb 9 12:05:27.635976 sshd[2917]: Received disconnect from 180.107.140.47 port 36612:11: Bye Bye [preauth] Feb 9 12:05:27.635976 sshd[2917]: Disconnected from invalid user wangxiao 180.107.140.47 port 36612 [preauth] Feb 9 12:05:27.638998 systemd[1]: sshd@9-139.178.89.23:22-180.107.140.47:36612.service: Deactivated successfully. Feb 9 12:05:29.997992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495153822.mount: Deactivated successfully. 
Feb 9 12:05:32.099116 env[1559]: time="2024-02-09T12:05:32.099043519Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:32.195221 env[1559]: time="2024-02-09T12:05:32.195051137Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:32.241160 env[1559]: time="2024-02-09T12:05:32.241009879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:32.243983 env[1559]: time="2024-02-09T12:05:32.243858535Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 12:05:32.245149 env[1559]: time="2024-02-09T12:05:32.245046633Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 12:05:32.248247 env[1559]: time="2024-02-09T12:05:32.248152412Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 12:05:32.368470 env[1559]: time="2024-02-09T12:05:32.368416211Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\"" Feb 9 12:05:32.368668 env[1559]: time="2024-02-09T12:05:32.368650765Z" level=info msg="StartContainer for \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\"" Feb 9 12:05:32.429595 env[1559]: time="2024-02-09T12:05:32.429525992Z" level=info msg="StartContainer for \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\" returns successfully" Feb 9 12:05:33.353829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602-rootfs.mount: Deactivated successfully. Feb 9 12:05:33.724296 env[1559]: time="2024-02-09T12:05:33.724170513Z" level=info msg="shim disconnected" id=3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602 Feb 9 12:05:33.724296 env[1559]: time="2024-02-09T12:05:33.724267213Z" level=warning msg="cleaning up after shim disconnected" id=3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602 namespace=k8s.io Feb 9 12:05:33.724296 env[1559]: time="2024-02-09T12:05:33.724290371Z" level=info msg="cleaning up dead shim" Feb 9 12:05:33.747257 env[1559]: time="2024-02-09T12:05:33.747206138Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:05:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3245 runtime=io.containerd.runc.v2\n" Feb 9 12:05:34.048207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291049910.mount: Deactivated successfully. 
Feb 9 12:05:34.187514 env[1559]: time="2024-02-09T12:05:34.187488508Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 12:05:34.192522 env[1559]: time="2024-02-09T12:05:34.192466182Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\"" Feb 9 12:05:34.192800 env[1559]: time="2024-02-09T12:05:34.192752767Z" level=info msg="StartContainer for \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\"" Feb 9 12:05:34.251326 env[1559]: time="2024-02-09T12:05:34.251251526Z" level=info msg="StartContainer for \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\" returns successfully" Feb 9 12:05:34.275377 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 12:05:34.276078 systemd[1]: Stopped systemd-sysctl.service. Feb 9 12:05:34.276506 systemd[1]: Stopping systemd-sysctl.service... Feb 9 12:05:34.279843 systemd[1]: Starting systemd-sysctl.service... Feb 9 12:05:34.294530 systemd[1]: Finished systemd-sysctl.service. Feb 9 12:05:34.372522 env[1559]: time="2024-02-09T12:05:34.372303936Z" level=info msg="shim disconnected" id=509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1 Feb 9 12:05:34.372522 env[1559]: time="2024-02-09T12:05:34.372411367Z" level=warning msg="cleaning up after shim disconnected" id=509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1 namespace=k8s.io Feb 9 12:05:34.372522 env[1559]: time="2024-02-09T12:05:34.372442479Z" level=info msg="cleaning up dead shim" Feb 9 12:05:34.397066 env[1559]: time="2024-02-09T12:05:34.396990682Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:05:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3313 runtime=io.containerd.runc.v2\n" Feb 9 12:05:34.800750 env[1559]: time="2024-02-09T12:05:34.800660645Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:34.801821 env[1559]: time="2024-02-09T12:05:34.801751282Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:34.803867 env[1559]: time="2024-02-09T12:05:34.803798673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 12:05:34.804841 env[1559]: time="2024-02-09T12:05:34.804769073Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 12:05:34.806959 env[1559]: time="2024-02-09T12:05:34.806861535Z" level=info msg="CreateContainer within sandbox \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 12:05:34.814272 
env[1559]: time="2024-02-09T12:05:34.814217325Z" level=info msg="CreateContainer within sandbox \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\"" Feb 9 12:05:34.814733 env[1559]: time="2024-02-09T12:05:34.814693609Z" level=info msg="StartContainer for \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\"" Feb 9 12:05:34.849838 env[1559]: time="2024-02-09T12:05:34.849807618Z" level=info msg="StartContainer for \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\" returns successfully" Feb 9 12:05:35.189897 env[1559]: time="2024-02-09T12:05:35.189866914Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 12:05:35.194377 kubelet[2733]: I0209 12:05:35.194358 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-7p6hl" podStartSLOduration=-9.22337202466044e+09 pod.CreationTimestamp="2024-02-09 12:05:23 +0000 UTC" firstStartedPulling="2024-02-09 12:05:24.664696942 +0000 UTC m=+14.603212912" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 12:05:35.19413384 +0000 UTC m=+25.132649793" watchObservedRunningTime="2024-02-09 12:05:35.194334798 +0000 UTC m=+25.132850744" Feb 9 12:05:35.195211 env[1559]: time="2024-02-09T12:05:35.195184910Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\"" Feb 9 12:05:35.195510 env[1559]: time="2024-02-09T12:05:35.195496219Z" level=info msg="StartContainer for \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\"" Feb 9 12:05:35.234525 env[1559]: time="2024-02-09T12:05:35.234497300Z" level=info msg="StartContainer for \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\" returns successfully" Feb 9 12:05:35.383735 env[1559]: time="2024-02-09T12:05:35.383708430Z" level=info msg="shim disconnected" id=077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b Feb 9 12:05:35.383735 env[1559]: time="2024-02-09T12:05:35.383737124Z" level=warning msg="cleaning up after shim disconnected" id=077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b namespace=k8s.io Feb 9 12:05:35.383875 env[1559]: time="2024-02-09T12:05:35.383742923Z" level=info msg="cleaning up dead shim" Feb 9 12:05:35.400260 env[1559]: time="2024-02-09T12:05:35.400193582Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:05:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3418 runtime=io.containerd.runc.v2\n" Feb 9 12:05:36.201447 env[1559]: time="2024-02-09T12:05:36.201337463Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 12:05:36.217019 env[1559]: time="2024-02-09T12:05:36.216986777Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\"" Feb 9 
12:05:36.217277 env[1559]: time="2024-02-09T12:05:36.217260671Z" level=info msg="StartContainer for \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\"" Feb 9 12:05:36.263398 env[1559]: time="2024-02-09T12:05:36.263370388Z" level=info msg="StartContainer for \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\" returns successfully" Feb 9 12:05:36.299715 env[1559]: time="2024-02-09T12:05:36.299664387Z" level=info msg="shim disconnected" id=f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353 Feb 9 12:05:36.299917 env[1559]: time="2024-02-09T12:05:36.299717593Z" level=warning msg="cleaning up after shim disconnected" id=f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353 namespace=k8s.io Feb 9 12:05:36.299917 env[1559]: time="2024-02-09T12:05:36.299734966Z" level=info msg="cleaning up dead shim" Feb 9 12:05:36.308245 env[1559]: time="2024-02-09T12:05:36.308168923Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:05:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3472 runtime=io.containerd.runc.v2\n" Feb 9 12:05:36.353907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353-rootfs.mount: Deactivated successfully. Feb 9 12:05:37.210679 env[1559]: time="2024-02-09T12:05:37.210552534Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 12:05:37.224127 env[1559]: time="2024-02-09T12:05:37.224094679Z" level=info msg="CreateContainer within sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\"" Feb 9 12:05:37.224408 env[1559]: time="2024-02-09T12:05:37.224389635Z" level=info msg="StartContainer for \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\"" Feb 9 12:05:37.259898 env[1559]: time="2024-02-09T12:05:37.259867215Z" level=info msg="StartContainer for \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\" returns successfully" Feb 9 12:05:37.301186 systemd[1]: Started sshd@10-139.178.89.23:22-198.12.118.109:4010.service. Feb 9 12:05:37.314280 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 12:05:37.364522 kubelet[2733]: I0209 12:05:37.364507 2733 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 12:05:37.374973 kubelet[2733]: I0209 12:05:37.374955 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:37.375983 kubelet[2733]: I0209 12:05:37.375971 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:05:37.398601 kubelet[2733]: I0209 12:05:37.398584 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4kvt\" (UniqueName: \"kubernetes.io/projected/4734f6c8-8e3e-489a-b6f6-ba42e83020e6-kube-api-access-h4kvt\") pod \"coredns-787d4945fb-pkgp5\" (UID: \"4734f6c8-8e3e-489a-b6f6-ba42e83020e6\") " pod="kube-system/coredns-787d4945fb-pkgp5" Feb 9 12:05:37.398601 kubelet[2733]: I0209 12:05:37.398606 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5f8bc7a-2fee-4997-85e1-9169196d8e53-config-volume\") pod \"coredns-787d4945fb-d28cx\" (UID: \"b5f8bc7a-2fee-4997-85e1-9169196d8e53\") " pod="kube-system/coredns-787d4945fb-d28cx" Feb 9 12:05:37.398725 kubelet[2733]: I0209 12:05:37.398622 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzhpl\" (UniqueName: \"kubernetes.io/projected/b5f8bc7a-2fee-4997-85e1-9169196d8e53-kube-api-access-dzhpl\") pod \"coredns-787d4945fb-d28cx\" (UID: \"b5f8bc7a-2fee-4997-85e1-9169196d8e53\") " pod="kube-system/coredns-787d4945fb-d28cx" Feb 9 12:05:37.398725 kubelet[2733]: I0209 12:05:37.398642 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4734f6c8-8e3e-489a-b6f6-ba42e83020e6-config-volume\") pod \"coredns-787d4945fb-pkgp5\" (UID: \"4734f6c8-8e3e-489a-b6f6-ba42e83020e6\") " pod="kube-system/coredns-787d4945fb-pkgp5" Feb 9 12:05:37.464251 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
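Editor's note on the two Spectre V2 warnings above: they restate a known caveat, namely that unprivileged eBPF is still enabled while eIBRS is the active Spectre v2 mitigation, and they surface here as the Cilium agent brings up its BPF datapath. Where unprivileged eBPF is not needed, the conventional mitigation is the sysctl below; the file path is illustrative, the setting is not taken from this host, and it generally does not affect Cilium itself, which loads its programs with full privileges.

    # /etc/sysctl.d/90-disable-unpriv-bpf.conf (illustrative)
    kernel.unprivileged_bpf_disabled = 1

Files under /etc/sysctl.d are applied by systemd-sysctl.service (seen finishing earlier in this log) at boot, or manually with sysctl --system; a value of 1 is a one-way latch, so unprivileged BPF stays disabled until the next reboot and cannot be switched back at runtime.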
Feb 9 12:05:37.493094 sshd[3567]: Invalid user yklee from 198.12.118.109 port 4010 Feb 9 12:05:37.494282 sshd[3567]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:37.494467 sshd[3567]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:05:37.494483 sshd[3567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:05:37.494623 sshd[3567]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:37.678637 env[1559]: time="2024-02-09T12:05:37.678550119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-pkgp5,Uid:4734f6c8-8e3e-489a-b6f6-ba42e83020e6,Namespace:kube-system,Attempt:0,}" Feb 9 12:05:37.678637 env[1559]: time="2024-02-09T12:05:37.678551728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-d28cx,Uid:b5f8bc7a-2fee-4997-85e1-9169196d8e53,Namespace:kube-system,Attempt:0,}" Feb 9 12:05:38.248107 kubelet[2733]: I0209 12:05:38.248046 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2jbsm" podStartSLOduration=-9.223372021606817e+09 pod.CreationTimestamp="2024-02-09 12:05:23 +0000 UTC" firstStartedPulling="2024-02-09 12:05:24.51948809 +0000 UTC m=+14.458004044" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 12:05:38.247572786 +0000 UTC m=+28.186088805" watchObservedRunningTime="2024-02-09 12:05:38.247958896 +0000 UTC m=+28.186474895" Feb 9 12:05:39.067481 systemd-networkd[1414]: cilium_host: Link UP Feb 9 12:05:39.067579 systemd-networkd[1414]: cilium_net: Link UP Feb 9 12:05:39.067582 systemd-networkd[1414]: cilium_net: Gained carrier Feb 9 12:05:39.067689 systemd-networkd[1414]: cilium_host: Gained carrier Feb 9 12:05:39.075215 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 12:05:39.075611 systemd-networkd[1414]: cilium_host: Gained IPv6LL Feb 9 12:05:39.119438 systemd-networkd[1414]: cilium_vxlan: Link UP Feb 9 12:05:39.119441 systemd-networkd[1414]: cilium_vxlan: Gained carrier Feb 9 12:05:39.251212 kernel: NET: Registered PF_ALG protocol family Feb 9 12:05:39.342334 systemd-networkd[1414]: cilium_net: Gained IPv6LL Feb 9 12:05:39.791545 systemd-networkd[1414]: lxc_health: Link UP Feb 9 12:05:39.807305 sshd[3567]: Failed password for invalid user yklee from 198.12.118.109 port 4010 ssh2 Feb 9 12:05:39.810989 systemd-networkd[1414]: lxc_health: Gained carrier Feb 9 12:05:39.811255 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 12:05:39.866412 systemd[1]: Started sshd@11-139.178.89.23:22-209.97.179.25:42300.service. 
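Editor's note on the pod startup metrics: two "Observed pod startup duration" entries earlier in this section (cilium-operator-f59cbd8c6-7p6hl at 12:05:35 and cilium-2jbsm at 12:05:38) report podStartSLOduration values around -9.22e+09 seconds. These are not real durations but an int64 overflow artifact that appears when firstStartedPulling is set while lastFinishedPulling is still the zero time (0001-01-01). Assuming the tracker subtracts the image-pull interval from the observed startup time (the exact formula is an assumption here), the cilium-2jbsm numbers reproduce the logged value almost exactly:

    minimum time.Duration (saturated Sub result)    ~  -9,223,372,036.85 s
    observedRunningTime - CreationTimestamp         ~            +15.25 s
    exact difference 15.25 - (-9,223,372,036.85)    ~  +9,223,372,052.10 s  (outside the int64 nanosecond range)
    after wrap-around                               ~  -9,223,372,021.61 s  (the logged -9.223372021606817e+09)

In other words, the metric is a reporting glitch for pods whose image pulls had started but not yet finished, not evidence that these pods took billions of seconds to start.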
Feb 9 12:05:40.237849 systemd-networkd[1414]: lxc083ea12a7420: Link UP Feb 9 12:05:40.280211 kernel: eth0: renamed from tmp7f30c Feb 9 12:05:40.299286 kernel: eth0: renamed from tmpb112b Feb 9 12:05:40.322614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 12:05:40.322662 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9ff3450f9f77: link becomes ready Feb 9 12:05:40.323205 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 12:05:40.336850 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc083ea12a7420: link becomes ready Feb 9 12:05:40.336967 systemd-networkd[1414]: lxc9ff3450f9f77: Link UP Feb 9 12:05:40.337429 systemd-networkd[1414]: lxc9ff3450f9f77: Gained carrier Feb 9 12:05:40.337520 systemd-networkd[1414]: lxc083ea12a7420: Gained carrier Feb 9 12:05:40.689737 sshd[4107]: Invalid user wuchl from 209.97.179.25 port 42300 Feb 9 12:05:40.690866 sshd[4107]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:40.691068 sshd[4107]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:05:40.691085 sshd[4107]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:05:40.691290 sshd[4107]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:40.998345 systemd-networkd[1414]: cilium_vxlan: Gained IPv6LL Feb 9 12:05:41.028400 sshd[3567]: Received disconnect from 198.12.118.109 port 4010:11: Bye Bye [preauth] Feb 9 12:05:41.028400 sshd[3567]: Disconnected from invalid user yklee 198.12.118.109 port 4010 [preauth] Feb 9 12:05:41.029052 systemd[1]: sshd@10-139.178.89.23:22-198.12.118.109:4010.service: Deactivated successfully. Feb 9 12:05:41.534621 systemd[1]: Started sshd@12-139.178.89.23:22-39.109.116.167:59826.service. Feb 9 12:05:41.638367 systemd-networkd[1414]: lxc083ea12a7420: Gained IPv6LL Feb 9 12:05:41.830310 systemd-networkd[1414]: lxc_health: Gained IPv6LL Feb 9 12:05:42.086315 systemd-networkd[1414]: lxc9ff3450f9f77: Gained IPv6LL Feb 9 12:05:42.416915 sshd[4107]: Failed password for invalid user wuchl from 209.97.179.25 port 42300 ssh2 Feb 9 12:05:42.464669 sshd[4146]: Invalid user lll from 39.109.116.167 port 59826 Feb 9 12:05:42.465982 sshd[4146]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:42.466207 sshd[4146]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:05:42.466242 sshd[4146]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:05:42.466500 sshd[4146]: pam_faillock(sshd:auth): User unknown Feb 9 12:05:42.639459 env[1559]: time="2024-02-09T12:05:42.639418006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:05:42.639459 env[1559]: time="2024-02-09T12:05:42.639441128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:05:42.639459 env[1559]: time="2024-02-09T12:05:42.639449131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:05:42.639727 env[1559]: time="2024-02-09T12:05:42.639517222Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b112b0ad669d4c982c870d8c1207df8a7099aeb67aa92da1c851bebc99cbb993 pid=4168 runtime=io.containerd.runc.v2 Feb 9 12:05:42.640230 env[1559]: time="2024-02-09T12:05:42.640197211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:05:42.640230 env[1559]: time="2024-02-09T12:05:42.640222402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:05:42.640322 env[1559]: time="2024-02-09T12:05:42.640232744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:05:42.640357 env[1559]: time="2024-02-09T12:05:42.640322937Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f30ccfa20183ec0383640d59b36dff9137e22011b013cd8dcf3e5579b8ffdb6 pid=4176 runtime=io.containerd.runc.v2 Feb 9 12:05:42.659713 sshd[4107]: Received disconnect from 209.97.179.25 port 42300:11: Bye Bye [preauth] Feb 9 12:05:42.659713 sshd[4107]: Disconnected from invalid user wuchl 209.97.179.25 port 42300 [preauth] Feb 9 12:05:42.660317 systemd[1]: sshd@11-139.178.89.23:22-209.97.179.25:42300.service: Deactivated successfully. Feb 9 12:05:42.682130 env[1559]: time="2024-02-09T12:05:42.682100637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-d28cx,Uid:b5f8bc7a-2fee-4997-85e1-9169196d8e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"b112b0ad669d4c982c870d8c1207df8a7099aeb67aa92da1c851bebc99cbb993\"" Feb 9 12:05:42.682232 env[1559]: time="2024-02-09T12:05:42.682176827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-pkgp5,Uid:4734f6c8-8e3e-489a-b6f6-ba42e83020e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f30ccfa20183ec0383640d59b36dff9137e22011b013cd8dcf3e5579b8ffdb6\"" Feb 9 12:05:42.683388 env[1559]: time="2024-02-09T12:05:42.683372250Z" level=info msg="CreateContainer within sandbox \"7f30ccfa20183ec0383640d59b36dff9137e22011b013cd8dcf3e5579b8ffdb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 12:05:42.683443 env[1559]: time="2024-02-09T12:05:42.683382703Z" level=info msg="CreateContainer within sandbox \"b112b0ad669d4c982c870d8c1207df8a7099aeb67aa92da1c851bebc99cbb993\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 12:05:42.688018 env[1559]: time="2024-02-09T12:05:42.687974087Z" level=info msg="CreateContainer within sandbox \"7f30ccfa20183ec0383640d59b36dff9137e22011b013cd8dcf3e5579b8ffdb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a089b73a0c120c4981e8417dc81d8c5804d5716310de686a797e9fbfe669691d\"" Feb 9 12:05:42.688166 env[1559]: time="2024-02-09T12:05:42.688153803Z" level=info msg="StartContainer for \"a089b73a0c120c4981e8417dc81d8c5804d5716310de686a797e9fbfe669691d\"" Feb 9 12:05:42.688850 env[1559]: time="2024-02-09T12:05:42.688831592Z" level=info msg="CreateContainer within sandbox \"b112b0ad669d4c982c870d8c1207df8a7099aeb67aa92da1c851bebc99cbb993\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6d4dac127f13d8c54652a5ae5dde84c92d5e8e78944c7969f103669f6f53ace\"" Feb 9 12:05:42.689026 env[1559]: time="2024-02-09T12:05:42.689011368Z" level=info msg="StartContainer for \"b6d4dac127f13d8c54652a5ae5dde84c92d5e8e78944c7969f103669f6f53ace\"" Feb 9 12:05:42.728036 env[1559]: time="2024-02-09T12:05:42.728006748Z" level=info msg="StartContainer for \"a089b73a0c120c4981e8417dc81d8c5804d5716310de686a797e9fbfe669691d\" returns successfully" Feb 9 12:05:42.729488 env[1559]: time="2024-02-09T12:05:42.729465137Z" level=info msg="StartContainer for 
\"b6d4dac127f13d8c54652a5ae5dde84c92d5e8e78944c7969f103669f6f53ace\" returns successfully" Feb 9 12:05:43.240074 kubelet[2733]: I0209 12:05:43.240053 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-pkgp5" podStartSLOduration=20.240027008 pod.CreationTimestamp="2024-02-09 12:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 12:05:43.239663625 +0000 UTC m=+33.178179574" watchObservedRunningTime="2024-02-09 12:05:43.240027008 +0000 UTC m=+33.178542954" Feb 9 12:05:43.250029 kubelet[2733]: I0209 12:05:43.250010 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-d28cx" podStartSLOduration=20.24998071 pod.CreationTimestamp="2024-02-09 12:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 12:05:43.249569785 +0000 UTC m=+33.188085739" watchObservedRunningTime="2024-02-09 12:05:43.24998071 +0000 UTC m=+33.188496662" Feb 9 12:05:44.131290 sshd[4146]: Failed password for invalid user lll from 39.109.116.167 port 59826 ssh2 Feb 9 12:05:45.740076 sshd[4146]: Received disconnect from 39.109.116.167 port 59826:11: Bye Bye [preauth] Feb 9 12:05:45.740076 sshd[4146]: Disconnected from invalid user lll 39.109.116.167 port 59826 [preauth] Feb 9 12:05:45.742768 systemd[1]: sshd@12-139.178.89.23:22-39.109.116.167:59826.service: Deactivated successfully. Feb 9 12:06:16.257190 systemd[1]: Started sshd@13-139.178.89.23:22-45.64.3.61:60642.service. Feb 9 12:06:17.320125 sshd[4419]: Invalid user liondark from 45.64.3.61 port 60642 Feb 9 12:06:17.326088 sshd[4419]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:17.327091 sshd[4419]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:06:17.327196 sshd[4419]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:06:17.328177 sshd[4419]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:19.465779 sshd[4419]: Failed password for invalid user liondark from 45.64.3.61 port 60642 ssh2 Feb 9 12:06:19.792902 sshd[4419]: Received disconnect from 45.64.3.61 port 60642:11: Bye Bye [preauth] Feb 9 12:06:19.792902 sshd[4419]: Disconnected from invalid user liondark 45.64.3.61 port 60642 [preauth] Feb 9 12:06:19.795234 systemd[1]: sshd@13-139.178.89.23:22-45.64.3.61:60642.service: Deactivated successfully. Feb 9 12:06:27.853756 systemd[1]: Started sshd@14-139.178.89.23:22-198.12.118.109:22544.service. Feb 9 12:06:28.025747 sshd[4425]: Invalid user mahanmn from 198.12.118.109 port 22544 Feb 9 12:06:28.031748 sshd[4425]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:28.032602 sshd[4425]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:06:28.032642 sshd[4425]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:06:28.032881 sshd[4425]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:30.014287 sshd[4425]: Failed password for invalid user mahanmn from 198.12.118.109 port 22544 ssh2 Feb 9 12:06:30.247537 sshd[4425]: Received disconnect from 198.12.118.109 port 22544:11: Bye Bye [preauth] Feb 9 12:06:30.247537 sshd[4425]: Disconnected from invalid user mahanmn 198.12.118.109 port 22544 [preauth] Feb 9 12:06:30.250049 systemd[1]: sshd@14-139.178.89.23:22-198.12.118.109:22544.service: Deactivated successfully. 
Feb 9 12:06:31.915430 systemd[1]: Started sshd@15-139.178.89.23:22-180.107.140.47:48152.service. Feb 9 12:06:32.787468 sshd[4429]: Invalid user xqy from 180.107.140.47 port 48152 Feb 9 12:06:32.793420 sshd[4429]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:32.794498 sshd[4429]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:06:32.794587 sshd[4429]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.107.140.47 Feb 9 12:06:32.795596 sshd[4429]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:34.993366 sshd[4429]: Failed password for invalid user xqy from 180.107.140.47 port 48152 ssh2 Feb 9 12:06:35.258453 sshd[4429]: Received disconnect from 180.107.140.47 port 48152:11: Bye Bye [preauth] Feb 9 12:06:35.258453 sshd[4429]: Disconnected from invalid user xqy 180.107.140.47 port 48152 [preauth] Feb 9 12:06:35.260807 systemd[1]: sshd@15-139.178.89.23:22-180.107.140.47:48152.service: Deactivated successfully. Feb 9 12:06:35.963919 systemd[1]: Started sshd@16-139.178.89.23:22-209.97.179.25:32936.service. Feb 9 12:06:36.779748 sshd[4433]: Invalid user zamrukbr from 209.97.179.25 port 32936 Feb 9 12:06:36.785679 sshd[4433]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:36.786643 sshd[4433]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:06:36.786733 sshd[4433]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:06:36.787620 sshd[4433]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:38.533718 sshd[4433]: Failed password for invalid user zamrukbr from 209.97.179.25 port 32936 ssh2 Feb 9 12:06:38.888129 sshd[4433]: Received disconnect from 209.97.179.25 port 32936:11: Bye Bye [preauth] Feb 9 12:06:38.888129 sshd[4433]: Disconnected from invalid user zamrukbr 209.97.179.25 port 32936 [preauth] Feb 9 12:06:38.890538 systemd[1]: sshd@16-139.178.89.23:22-209.97.179.25:32936.service: Deactivated successfully. Feb 9 12:06:50.009806 systemd[1]: Started sshd@17-139.178.89.23:22-39.109.116.167:41003.service. Feb 9 12:06:50.925958 sshd[4437]: Invalid user xjwu from 39.109.116.167 port 41003 Feb 9 12:06:50.932134 sshd[4437]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:50.933135 sshd[4437]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:06:50.933251 sshd[4437]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:06:50.934311 sshd[4437]: pam_faillock(sshd:auth): User unknown Feb 9 12:06:52.936408 sshd[4437]: Failed password for invalid user xjwu from 39.109.116.167 port 41003 ssh2 Feb 9 12:06:55.024348 sshd[4437]: Received disconnect from 39.109.116.167 port 41003:11: Bye Bye [preauth] Feb 9 12:06:55.024348 sshd[4437]: Disconnected from invalid user xjwu 39.109.116.167 port 41003 [preauth] Feb 9 12:06:55.026813 systemd[1]: sshd@17-139.178.89.23:22-39.109.116.167:41003.service: Deactivated successfully. 
Feb 9 12:07:03.966374 update_engine[1547]: I0209 12:07:03.966248 1547 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 12:07:03.966374 update_engine[1547]: I0209 12:07:03.966329 1547 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 12:07:03.967586 update_engine[1547]: I0209 12:07:03.967152 1547 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 12:07:03.968113 update_engine[1547]: I0209 12:07:03.968039 1547 omaha_request_params.cc:62] Current group set to lts Feb 9 12:07:03.968376 update_engine[1547]: I0209 12:07:03.968348 1547 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 12:07:03.968376 update_engine[1547]: I0209 12:07:03.968367 1547 update_attempter.cc:643] Scheduling an action processor start. Feb 9 12:07:03.968604 update_engine[1547]: I0209 12:07:03.968400 1547 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 12:07:03.968604 update_engine[1547]: I0209 12:07:03.968469 1547 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 12:07:03.968791 update_engine[1547]: I0209 12:07:03.968613 1547 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 12:07:03.968791 update_engine[1547]: I0209 12:07:03.968631 1547 omaha_request_action.cc:271] Request: Feb 9 12:07:03.968791 update_engine[1547]: [Omaha request XML body omitted] Feb 9 12:07:03.968791 update_engine[1547]: I0209 12:07:03.968640 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 12:07:03.969779 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 12:07:03.971702 update_engine[1547]: I0209 12:07:03.971611 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 12:07:03.971955 update_engine[1547]: E0209 12:07:03.971838 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 12:07:03.972141 update_engine[1547]: I0209 12:07:03.971992 1547 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 12:07:13.885141 update_engine[1547]: I0209 12:07:13.885022 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 12:07:13.886171 update_engine[1547]: I0209 12:07:13.885535 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 12:07:13.886171 update_engine[1547]: E0209 12:07:13.885735 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 12:07:13.886171 update_engine[1547]: I0209 12:07:13.885902 1547 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 12:07:20.552750 systemd[1]: Started sshd@18-139.178.89.23:22-198.12.118.109:41078.service.
Feb 9 12:07:20.747818 sshd[4445]: Invalid user terra from 198.12.118.109 port 41078 Feb 9 12:07:20.753722 sshd[4445]: pam_faillock(sshd:auth): User unknown Feb 9 12:07:20.754872 sshd[4445]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:07:20.754958 sshd[4445]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:07:20.755932 sshd[4445]: pam_faillock(sshd:auth): User unknown Feb 9 12:07:22.541438 sshd[4445]: Failed password for invalid user terra from 198.12.118.109 port 41078 ssh2 Feb 9 12:07:23.706359 sshd[4445]: Received disconnect from 198.12.118.109 port 41078:11: Bye Bye [preauth] Feb 9 12:07:23.706359 sshd[4445]: Disconnected from invalid user terra 198.12.118.109 port 41078 [preauth] Feb 9 12:07:23.708907 systemd[1]: sshd@18-139.178.89.23:22-198.12.118.109:41078.service: Deactivated successfully. Feb 9 12:07:23.885193 update_engine[1547]: I0209 12:07:23.885056 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 12:07:23.886109 update_engine[1547]: I0209 12:07:23.885546 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 12:07:23.886109 update_engine[1547]: E0209 12:07:23.885754 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 12:07:23.886109 update_engine[1547]: I0209 12:07:23.885922 1547 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 12:07:24.728742 systemd[1]: Started sshd@19-139.178.89.23:22-45.64.3.61:51186.service. Feb 9 12:07:25.792106 sshd[4451]: Invalid user frank from 45.64.3.61 port 51186 Feb 9 12:07:25.798232 sshd[4451]: pam_faillock(sshd:auth): User unknown Feb 9 12:07:25.799186 sshd[4451]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:07:25.799303 sshd[4451]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:07:25.800196 sshd[4451]: pam_faillock(sshd:auth): User unknown Feb 9 12:07:27.606866 sshd[4451]: Failed password for invalid user frank from 45.64.3.61 port 51186 ssh2 Feb 9 12:07:28.173803 sshd[4451]: Received disconnect from 45.64.3.61 port 51186:11: Bye Bye [preauth] Feb 9 12:07:28.173803 sshd[4451]: Disconnected from invalid user frank 45.64.3.61 port 51186 [preauth] Feb 9 12:07:28.176246 systemd[1]: sshd@19-139.178.89.23:22-45.64.3.61:51186.service: Deactivated successfully. Feb 9 12:07:29.561634 systemd[1]: Started sshd@20-139.178.89.23:22-209.97.179.25:51798.service. Feb 9 12:07:30.362974 sshd[4455]: Invalid user eguzkine from 209.97.179.25 port 51798 Feb 9 12:07:30.369098 sshd[4455]: pam_faillock(sshd:auth): User unknown Feb 9 12:07:30.370066 sshd[4455]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:07:30.370154 sshd[4455]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:07:30.371088 sshd[4455]: pam_faillock(sshd:auth): User unknown Feb 9 12:07:32.197763 sshd[4455]: Failed password for invalid user eguzkine from 209.97.179.25 port 51798 ssh2 Feb 9 12:07:33.293027 sshd[4455]: Received disconnect from 209.97.179.25 port 51798:11: Bye Bye [preauth] Feb 9 12:07:33.293027 sshd[4455]: Disconnected from invalid user eguzkine 209.97.179.25 port 51798 [preauth] Feb 9 12:07:33.295370 systemd[1]: sshd@20-139.178.89.23:22-209.97.179.25:51798.service: Deactivated successfully. 
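The failed-login churn above (the pam_faillock / pam_unix sequences for users such as wuchl, terra and frank) follows a fixed journal format, so the pressure per source address can be tallied straight from the text. A minimal sketch assuming the line layout shown in this log; the regex and sample lines are illustrative, not part of any sshd tooling:

import re
from collections import Counter

# Matches sshd entries with the shape seen throughout this journal, e.g.
#   sshd[4445]: Invalid user terra from 198.12.118.109 port 41078
INVALID_USER = re.compile(r"sshd\[\d+\]: Invalid user (\S+) from (\d{1,3}(?:\.\d{1,3}){3}) port \d+")

def invalid_users_per_ip(journal_lines):
    """Tally invalid-user attempts per source address."""
    per_ip = Counter()
    for line in journal_lines:
        match = INVALID_USER.search(line)
        if match:
            per_ip[match.group(2)] += 1
    return per_ip

sample = [
    "Feb 9 12:07:20.747818 sshd[4445]: Invalid user terra from 198.12.118.109 port 41078",
    "Feb 9 12:07:25.792106 sshd[4451]: Invalid user frank from 45.64.3.61 port 51186",
]
print(invalid_users_per_ip(sample))  # Counter({'198.12.118.109': 1, '45.64.3.61': 1})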
Feb 9 12:07:33.878456 update_engine[1547]: I0209 12:07:33.878343 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 12:07:33.879356 update_engine[1547]: I0209 12:07:33.878810 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 12:07:33.879356 update_engine[1547]: E0209 12:07:33.879020 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 12:07:33.879356 update_engine[1547]: I0209 12:07:33.879164 1547 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 12:07:33.879356 update_engine[1547]: I0209 12:07:33.879179 1547 omaha_request_action.cc:621] Omaha request response: Feb 9 12:07:33.879356 update_engine[1547]: E0209 12:07:33.879345 1547 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879374 1547 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879384 1547 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879393 1547 update_attempter.cc:306] Processing Done. Feb 9 12:07:33.879936 update_engine[1547]: E0209 12:07:33.879418 1547 update_attempter.cc:619] Update failed. Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879427 1547 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879436 1547 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879446 1547 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879598 1547 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879647 1547 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879658 1547 omaha_request_action.cc:271] Request: Feb 9 12:07:33.879936 update_engine[1547]: [Omaha request XML body omitted] Feb 9 12:07:33.879936 update_engine[1547]: I0209 12:07:33.879668 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 12:07:33.881472 update_engine[1547]: I0209 12:07:33.879977 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 12:07:33.881472 update_engine[1547]: E0209 12:07:33.880137 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 12:07:33.881472 update_engine[1547]: I0209 12:07:33.880287 1547 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 12:07:33.881472 update_engine[1547]: I0209 12:07:33.880303 1547 omaha_request_action.cc:621] Omaha request response: Feb 9 12:07:33.881472 update_engine[1547]: I0209 12:07:33.880313 1547 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 12:07:33.881472 update_engine[1547]: I0209 12:07:33.880322 1547 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 12:07:33.881472 update_engine[1547]: I0209 12:07:33.880329 1547 update_attempter.cc:306] Processing Done. Feb 9 12:07:33.881472 update_engine[1547]: I0209 12:07:33.880336 1547 update_attempter.cc:310] Error event sent. Feb 9 12:07:33.881472 update_engine[1547]: I0209 12:07:33.880363 1547 update_check_scheduler.cc:74] Next update check in 46m42s Feb 9 12:07:33.882287 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 12:07:33.882287 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 12:07:40.809754 systemd[1]: Started sshd@21-139.178.89.23:22-180.107.140.47:59700.service. Feb 9 12:07:57.397970 systemd[1]: Started sshd@22-139.178.89.23:22-119.91.207.218:58152.service. Feb 9 12:07:57.842649 systemd[1]: Started sshd@23-139.178.89.23:22-39.109.116.167:50411.service. Feb 9 12:07:58.765025 sshd[4466]: Invalid user yeo from 39.109.116.167 port 50411 Feb 9 12:07:58.771454 sshd[4466]: pam_faillock(sshd:auth): User unknown Feb 9 12:07:58.772585 sshd[4466]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:07:58.772676 sshd[4466]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:07:58.773568 sshd[4466]: pam_faillock(sshd:auth): User unknown Feb 9 12:08:00.775396 sshd[4466]: Failed password for invalid user yeo from 39.109.116.167 port 50411 ssh2 Feb 9 12:08:02.044302 sshd[4466]: Received disconnect from 39.109.116.167 port 50411:11: Bye Bye [preauth] Feb 9 12:08:02.044302 sshd[4466]: Disconnected from invalid user yeo 39.109.116.167 port 50411 [preauth] Feb 9 12:08:02.046929 systemd[1]: sshd@23-139.178.89.23:22-39.109.116.167:50411.service: Deactivated successfully.
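The update_engine entries above show the Omaha endpoint set to the literal string "disabled", so every fetch fails with "Could not resolve host: disabled"; libcurl_http_fetcher retries a few times before the OmahaRequestAction gives up and the scheduler waits for the next full check. A small sketch using timestamps copied from the log to make the retry spacing explicit (illustrative only):

from datetime import datetime

# The four "Starting/Resuming transfer" attempts logged above; each one fails
# because the host name "disabled" never resolves.
attempts = ["12:07:03.968640", "12:07:13.885022", "12:07:23.885056", "12:07:33.878343"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in attempts]

for prev, cur in zip(times, times[1:]):
    print(f"retried after {(cur - prev).total_seconds():.1f}s")
# Roughly 10s between attempts; after the last one the request is abandoned and
# update_check_scheduler schedules the next check 46m42s later, as logged above.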
Feb 9 12:08:14.713297 systemd[1]: Started sshd@24-139.178.89.23:22-198.12.118.109:59614.service. Feb 9 12:08:14.880841 sshd[4472]: Invalid user slow from 198.12.118.109 port 59614 Feb 9 12:08:14.886764 sshd[4472]: pam_faillock(sshd:auth): User unknown Feb 9 12:08:14.887775 sshd[4472]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:08:14.887864 sshd[4472]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:08:14.888880 sshd[4472]: pam_faillock(sshd:auth): User unknown Feb 9 12:08:16.754596 sshd[4472]: Failed password for invalid user slow from 198.12.118.109 port 59614 ssh2 Feb 9 12:08:17.837600 sshd[4472]: Received disconnect from 198.12.118.109 port 59614:11: Bye Bye [preauth] Feb 9 12:08:17.837600 sshd[4472]: Disconnected from invalid user slow 198.12.118.109 port 59614 [preauth] Feb 9 12:08:17.840003 systemd[1]: sshd@24-139.178.89.23:22-198.12.118.109:59614.service: Deactivated successfully. Feb 9 12:08:26.113401 systemd[1]: Started sshd@25-139.178.89.23:22-209.97.179.25:42442.service. Feb 9 12:08:26.911498 sshd[4478]: Invalid user pdv from 209.97.179.25 port 42442 Feb 9 12:08:26.917698 sshd[4478]: pam_faillock(sshd:auth): User unknown Feb 9 12:08:26.918687 sshd[4478]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:08:26.918777 sshd[4478]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:08:26.919688 sshd[4478]: pam_faillock(sshd:auth): User unknown Feb 9 12:08:29.102295 sshd[4478]: Failed password for invalid user pdv from 209.97.179.25 port 42442 ssh2 Feb 9 12:08:29.246919 sshd[4478]: Received disconnect from 209.97.179.25 port 42442:11: Bye Bye [preauth] Feb 9 12:08:29.246919 sshd[4478]: Disconnected from invalid user pdv 209.97.179.25 port 42442 [preauth] Feb 9 12:08:29.249492 systemd[1]: sshd@25-139.178.89.23:22-209.97.179.25:42442.service: Deactivated successfully. Feb 9 12:08:32.053788 systemd[1]: Started sshd@26-139.178.89.23:22-45.64.3.61:41728.service. Feb 9 12:08:33.547372 sshd[4482]: Invalid user samadad from 45.64.3.61 port 41728 Feb 9 12:08:33.553514 sshd[4482]: pam_faillock(sshd:auth): User unknown Feb 9 12:08:33.554664 sshd[4482]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:08:33.554752 sshd[4482]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:08:33.555642 sshd[4482]: pam_faillock(sshd:auth): User unknown Feb 9 12:08:35.362456 sshd[4482]: Failed password for invalid user samadad from 45.64.3.61 port 41728 ssh2 Feb 9 12:08:37.053616 sshd[4482]: Received disconnect from 45.64.3.61 port 41728:11: Bye Bye [preauth] Feb 9 12:08:37.053616 sshd[4482]: Disconnected from invalid user samadad 45.64.3.61 port 41728 [preauth] Feb 9 12:08:37.054479 systemd[1]: sshd@26-139.178.89.23:22-45.64.3.61:41728.service: Deactivated successfully. Feb 9 12:08:52.985619 systemd[1]: Started sshd@27-139.178.89.23:22-180.107.140.47:43044.service. Feb 9 12:09:04.403526 systemd[1]: Started sshd@28-139.178.89.23:22-39.109.116.167:59819.service. 
Feb 9 12:09:05.696259 sshd[4489]: Invalid user sitarska from 39.109.116.167 port 59819 Feb 9 12:09:05.702235 sshd[4489]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:05.703185 sshd[4489]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:09:05.703300 sshd[4489]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:09:05.704274 sshd[4489]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:08.236133 sshd[4489]: Failed password for invalid user sitarska from 39.109.116.167 port 59819 ssh2 Feb 9 12:09:10.552704 sshd[4489]: Received disconnect from 39.109.116.167 port 59819:11: Bye Bye [preauth] Feb 9 12:09:10.552704 sshd[4489]: Disconnected from invalid user sitarska 39.109.116.167 port 59819 [preauth] Feb 9 12:09:10.553468 systemd[1]: sshd@28-139.178.89.23:22-39.109.116.167:59819.service: Deactivated successfully. Feb 9 12:09:14.546817 systemd[1]: Started sshd@29-139.178.89.23:22-198.12.118.109:17153.service. Feb 9 12:09:14.713943 sshd[4495]: Invalid user hieng from 198.12.118.109 port 17153 Feb 9 12:09:14.719977 sshd[4495]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:14.720974 sshd[4495]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:09:14.721061 sshd[4495]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:09:14.722102 sshd[4495]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:16.493484 sshd[4495]: Failed password for invalid user hieng from 198.12.118.109 port 17153 ssh2 Feb 9 12:09:17.919041 sshd[4495]: Received disconnect from 198.12.118.109 port 17153:11: Bye Bye [preauth] Feb 9 12:09:17.919041 sshd[4495]: Disconnected from invalid user hieng 198.12.118.109 port 17153 [preauth] Feb 9 12:09:17.921452 systemd[1]: sshd@29-139.178.89.23:22-198.12.118.109:17153.service: Deactivated successfully. Feb 9 12:09:23.071020 systemd[1]: Started sshd@30-139.178.89.23:22-209.97.179.25:33082.service. Feb 9 12:09:23.908093 sshd[4499]: Invalid user feiyinai from 209.97.179.25 port 33082 Feb 9 12:09:23.914017 sshd[4499]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:23.915159 sshd[4499]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:09:23.915288 sshd[4499]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:09:23.916143 sshd[4499]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:26.590295 sshd[4499]: Failed password for invalid user feiyinai from 209.97.179.25 port 33082 ssh2 Feb 9 12:09:28.257562 sshd[4499]: Received disconnect from 209.97.179.25 port 33082:11: Bye Bye [preauth] Feb 9 12:09:28.257562 sshd[4499]: Disconnected from invalid user feiyinai 209.97.179.25 port 33082 [preauth] Feb 9 12:09:28.260054 systemd[1]: sshd@30-139.178.89.23:22-209.97.179.25:33082.service: Deactivated successfully. Feb 9 12:09:30.929527 systemd[1]: Started sshd@31-139.178.89.23:22-119.91.207.218:45124.service. 
Feb 9 12:09:32.639520 sshd[4505]: Invalid user msad from 119.91.207.218 port 45124 Feb 9 12:09:32.645639 sshd[4505]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:32.646646 sshd[4505]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:09:32.646734 sshd[4505]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.207.218 Feb 9 12:09:32.647633 sshd[4505]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:34.889459 sshd[4505]: Failed password for invalid user msad from 119.91.207.218 port 45124 ssh2 Feb 9 12:09:37.029319 sshd[4505]: Received disconnect from 119.91.207.218 port 45124:11: Bye Bye [preauth] Feb 9 12:09:37.029319 sshd[4505]: Disconnected from invalid user msad 119.91.207.218 port 45124 [preauth] Feb 9 12:09:37.031820 systemd[1]: sshd@31-139.178.89.23:22-119.91.207.218:45124.service: Deactivated successfully. Feb 9 12:09:39.759062 systemd[1]: Started sshd@32-139.178.89.23:22-45.64.3.61:60504.service. Feb 9 12:09:40.814845 sshd[4459]: Timeout before authentication for 180.107.140.47 port 59700 Feb 9 12:09:40.816298 systemd[1]: sshd@21-139.178.89.23:22-180.107.140.47:59700.service: Deactivated successfully. Feb 9 12:09:41.223993 sshd[4510]: Invalid user yyokoyama from 45.64.3.61 port 60504 Feb 9 12:09:41.230099 sshd[4510]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:41.231285 sshd[4510]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:09:41.231397 sshd[4510]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:09:41.233154 sshd[4510]: pam_faillock(sshd:auth): User unknown Feb 9 12:09:43.375617 sshd[4510]: Failed password for invalid user yyokoyama from 45.64.3.61 port 60504 ssh2 Feb 9 12:09:45.430036 sshd[4510]: Received disconnect from 45.64.3.61 port 60504:11: Bye Bye [preauth] Feb 9 12:09:45.430036 sshd[4510]: Disconnected from invalid user yyokoyama 45.64.3.61 port 60504 [preauth] Feb 9 12:09:45.432652 systemd[1]: sshd@32-139.178.89.23:22-45.64.3.61:60504.service: Deactivated successfully. Feb 9 12:09:57.403196 sshd[4465]: Timeout before authentication for 119.91.207.218 port 58152 Feb 9 12:09:57.404617 systemd[1]: sshd@22-139.178.89.23:22-119.91.207.218:58152.service: Deactivated successfully. Feb 9 12:10:02.537855 systemd[1]: Started sshd@33-139.178.89.23:22-180.107.140.47:54602.service. Feb 9 12:10:09.746528 systemd[1]: Started sshd@34-139.178.89.23:22-39.109.116.167:40994.service. Feb 9 12:10:10.673612 sshd[4525]: Invalid user medicare from 39.109.116.167 port 40994 Feb 9 12:10:10.675427 sshd[4525]: pam_faillock(sshd:auth): User unknown Feb 9 12:10:10.675743 sshd[4525]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:10:10.675767 sshd[4525]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:10:10.676004 sshd[4525]: pam_faillock(sshd:auth): User unknown Feb 9 12:10:12.812236 systemd[1]: Started sshd@35-139.178.89.23:22-198.12.118.109:35691.service. 
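The "Timeout before authentication" entries above each pair with an sshd@... service that was started almost exactly two minutes earlier, which is consistent with OpenSSH's default LoginGraceTime of 120 seconds (the sshd_config for this host is not shown, so treat that as an assumption). A quick check against the 180.107.140.47 connection, with timestamps copied from the log:

from datetime import datetime

# "Started sshd@21-...-180.107.140.47:59700.service" vs the later
# "Timeout before authentication for 180.107.140.47 port 59700" entry.
started = datetime.strptime("12:07:40.809754", "%H:%M:%S.%f")
timed_out = datetime.strptime("12:09:40.814845", "%H:%M:%S.%f")

# The client held the connection open without ever authenticating, so sshd
# dropped it once the grace period expired.
print(f"gap = {(timed_out - started).total_seconds():.1f}s")  # gap = 120.0s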
Feb 9 12:10:12.979118 sshd[4529]: Invalid user farrash from 198.12.118.109 port 35691 Feb 9 12:10:12.985021 sshd[4529]: pam_faillock(sshd:auth): User unknown Feb 9 12:10:12.986166 sshd[4529]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:10:12.986296 sshd[4529]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:10:12.987180 sshd[4529]: pam_faillock(sshd:auth): User unknown Feb 9 12:10:13.133559 sshd[4525]: Failed password for invalid user medicare from 39.109.116.167 port 40994 ssh2 Feb 9 12:10:14.718722 sshd[4529]: Failed password for invalid user farrash from 198.12.118.109 port 35691 ssh2 Feb 9 12:10:14.860445 sshd[4529]: Received disconnect from 198.12.118.109 port 35691:11: Bye Bye [preauth] Feb 9 12:10:14.860445 sshd[4529]: Disconnected from invalid user farrash 198.12.118.109 port 35691 [preauth] Feb 9 12:10:14.863018 systemd[1]: sshd@35-139.178.89.23:22-198.12.118.109:35691.service: Deactivated successfully. Feb 9 12:10:15.303667 sshd[4525]: Received disconnect from 39.109.116.167 port 40994:11: Bye Bye [preauth] Feb 9 12:10:15.303667 sshd[4525]: Disconnected from invalid user medicare 39.109.116.167 port 40994 [preauth] Feb 9 12:10:15.306138 systemd[1]: sshd@34-139.178.89.23:22-39.109.116.167:40994.service: Deactivated successfully. Feb 9 12:10:25.925634 systemd[1]: Started sshd@36-139.178.89.23:22-209.97.179.25:51964.service. Feb 9 12:10:26.733354 sshd[4540]: Invalid user kirin from 209.97.179.25 port 51964 Feb 9 12:10:26.739536 sshd[4540]: pam_faillock(sshd:auth): User unknown Feb 9 12:10:26.740905 sshd[4540]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:10:26.740995 sshd[4540]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:10:26.742048 sshd[4540]: pam_faillock(sshd:auth): User unknown Feb 9 12:10:28.729141 sshd[4540]: Failed password for invalid user kirin from 209.97.179.25 port 51964 ssh2 Feb 9 12:10:30.834430 sshd[4540]: Received disconnect from 209.97.179.25 port 51964:11: Bye Bye [preauth] Feb 9 12:10:30.834430 sshd[4540]: Disconnected from invalid user kirin 209.97.179.25 port 51964 [preauth] Feb 9 12:10:30.836947 systemd[1]: sshd@36-139.178.89.23:22-209.97.179.25:51964.service: Deactivated successfully. Feb 9 12:10:51.051135 systemd[1]: Started sshd@37-139.178.89.23:22-45.64.3.61:51070.service. Feb 9 12:10:52.086617 sshd[4545]: Invalid user taiwa from 45.64.3.61 port 51070 Feb 9 12:10:52.087930 sshd[4545]: pam_faillock(sshd:auth): User unknown Feb 9 12:10:52.088168 sshd[4545]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:10:52.088188 sshd[4545]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:10:52.088381 sshd[4545]: pam_faillock(sshd:auth): User unknown Feb 9 12:10:52.990936 sshd[4486]: Timeout before authentication for 180.107.140.47 port 43044 Feb 9 12:10:52.992406 systemd[1]: sshd@27-139.178.89.23:22-180.107.140.47:43044.service: Deactivated successfully. Feb 9 12:10:54.646940 sshd[4545]: Failed password for invalid user taiwa from 45.64.3.61 port 51070 ssh2 Feb 9 12:10:55.075473 sshd[4545]: Received disconnect from 45.64.3.61 port 51070:11: Bye Bye [preauth] Feb 9 12:10:55.075473 sshd[4545]: Disconnected from invalid user taiwa 45.64.3.61 port 51070 [preauth] Feb 9 12:10:55.076081 systemd[1]: sshd@37-139.178.89.23:22-45.64.3.61:51070.service: Deactivated successfully. 
Feb 9 12:11:03.179428 systemd[1]: Started sshd@38-139.178.89.23:22-147.75.109.163:46374.service. Feb 9 12:11:03.217759 sshd[4553]: Accepted publickey for core from 147.75.109.163 port 46374 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:03.218632 sshd[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:03.221990 systemd-logind[1545]: New session 8 of user core. Feb 9 12:11:03.222554 systemd[1]: Started session-8.scope. Feb 9 12:11:03.319253 sshd[4553]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:03.320896 systemd[1]: sshd@38-139.178.89.23:22-147.75.109.163:46374.service: Deactivated successfully. Feb 9 12:11:03.321718 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 12:11:03.321763 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. Feb 9 12:11:03.322442 systemd-logind[1545]: Removed session 8. Feb 9 12:11:03.656387 systemd[1]: Started sshd@39-139.178.89.23:22-119.91.207.218:60328.service. Feb 9 12:11:04.091567 systemd[1]: Started sshd@40-139.178.89.23:22-198.12.118.109:54225.service. Feb 9 12:11:04.274251 sshd[4581]: Invalid user huangguoli from 198.12.118.109 port 54225 Feb 9 12:11:04.280262 sshd[4581]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:04.281192 sshd[4581]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:11:04.281312 sshd[4581]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:11:04.282271 sshd[4581]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:05.366799 sshd[4580]: Invalid user dgk from 119.91.207.218 port 60328 Feb 9 12:11:05.371635 sshd[4580]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:05.372575 sshd[4580]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:11:05.372663 sshd[4580]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.207.218 Feb 9 12:11:05.373576 sshd[4580]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:06.821055 sshd[4581]: Failed password for invalid user huangguoli from 198.12.118.109 port 54225 ssh2 Feb 9 12:11:07.716478 sshd[4580]: Failed password for invalid user dgk from 119.91.207.218 port 60328 ssh2 Feb 9 12:11:08.287999 sshd[4581]: Received disconnect from 198.12.118.109 port 54225:11: Bye Bye [preauth] Feb 9 12:11:08.287999 sshd[4581]: Disconnected from invalid user huangguoli 198.12.118.109 port 54225 [preauth] Feb 9 12:11:08.290413 systemd[1]: sshd@40-139.178.89.23:22-198.12.118.109:54225.service: Deactivated successfully. Feb 9 12:11:08.321741 systemd[1]: Started sshd@41-139.178.89.23:22-147.75.109.163:33490.service. Feb 9 12:11:08.379117 sshd[4586]: Accepted publickey for core from 147.75.109.163 port 33490 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:08.380538 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:08.385706 systemd-logind[1545]: New session 9 of user core. Feb 9 12:11:08.386744 systemd[1]: Started session-9.scope. Feb 9 12:11:08.530463 sshd[4586]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:08.532724 systemd[1]: sshd@41-139.178.89.23:22-147.75.109.163:33490.service: Deactivated successfully. Feb 9 12:11:08.533788 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 12:11:08.533831 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. Feb 9 12:11:08.534826 systemd-logind[1545]: Removed session 9. 
Feb 9 12:11:09.332709 sshd[4580]: Received disconnect from 119.91.207.218 port 60328:11: Bye Bye [preauth] Feb 9 12:11:09.332709 sshd[4580]: Disconnected from invalid user dgk 119.91.207.218 port 60328 [preauth] Feb 9 12:11:09.335100 systemd[1]: sshd@39-139.178.89.23:22-119.91.207.218:60328.service: Deactivated successfully. Feb 9 12:11:12.968357 systemd[1]: Started sshd@42-139.178.89.23:22-180.107.140.47:37932.service. Feb 9 12:11:13.537343 systemd[1]: Started sshd@43-139.178.89.23:22-147.75.109.163:33502.service. Feb 9 12:11:13.573745 sshd[4619]: Accepted publickey for core from 147.75.109.163 port 33502 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:13.574646 sshd[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:13.578012 systemd-logind[1545]: New session 10 of user core. Feb 9 12:11:13.578802 systemd[1]: Started session-10.scope. Feb 9 12:11:13.665886 sshd[4619]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:13.667283 systemd[1]: sshd@43-139.178.89.23:22-147.75.109.163:33502.service: Deactivated successfully. Feb 9 12:11:13.667906 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 12:11:13.667942 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. Feb 9 12:11:13.668485 systemd-logind[1545]: Removed session 10. Feb 9 12:11:13.805609 sshd[4617]: Invalid user djibo from 180.107.140.47 port 37932 Feb 9 12:11:13.811772 sshd[4617]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:13.812763 sshd[4617]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:11:13.812854 sshd[4617]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.107.140.47 Feb 9 12:11:13.813903 sshd[4617]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:15.921454 sshd[4617]: Failed password for invalid user djibo from 180.107.140.47 port 37932 ssh2 Feb 9 12:11:16.324960 sshd[4617]: Received disconnect from 180.107.140.47 port 37932:11: Bye Bye [preauth] Feb 9 12:11:16.324960 sshd[4617]: Disconnected from invalid user djibo 180.107.140.47 port 37932 [preauth] Feb 9 12:11:16.327489 systemd[1]: sshd@42-139.178.89.23:22-180.107.140.47:37932.service: Deactivated successfully. Feb 9 12:11:16.457388 systemd[1]: Started sshd@44-139.178.89.23:22-39.109.116.167:50405.service. Feb 9 12:11:17.377832 sshd[4648]: Invalid user efthym from 39.109.116.167 port 50405 Feb 9 12:11:17.379370 sshd[4648]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:17.379650 sshd[4648]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:11:17.379675 sshd[4648]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:11:17.379922 sshd[4648]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:18.674425 systemd[1]: Started sshd@45-139.178.89.23:22-147.75.109.163:41736.service. Feb 9 12:11:18.714869 sshd[4650]: Accepted publickey for core from 147.75.109.163 port 41736 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:18.715700 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:18.718732 systemd-logind[1545]: New session 11 of user core. Feb 9 12:11:18.719275 systemd[1]: Started session-11.scope. Feb 9 12:11:18.807128 sshd[4650]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:18.808634 systemd[1]: Started sshd@46-139.178.89.23:22-147.75.109.163:41748.service. 
Feb 9 12:11:18.808932 systemd[1]: sshd@45-139.178.89.23:22-147.75.109.163:41736.service: Deactivated successfully. Feb 9 12:11:18.809481 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 12:11:18.809489 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. Feb 9 12:11:18.809982 systemd-logind[1545]: Removed session 11. Feb 9 12:11:18.871587 sshd[4674]: Accepted publickey for core from 147.75.109.163 port 41748 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:18.874956 sshd[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:18.885509 systemd-logind[1545]: New session 12 of user core. Feb 9 12:11:18.887906 systemd[1]: Started session-12.scope. Feb 9 12:11:19.366873 sshd[4674]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:19.368477 systemd[1]: Started sshd@47-139.178.89.23:22-147.75.109.163:41750.service. Feb 9 12:11:19.368793 systemd[1]: sshd@46-139.178.89.23:22-147.75.109.163:41748.service: Deactivated successfully. Feb 9 12:11:19.369402 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 12:11:19.369434 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. Feb 9 12:11:19.369886 systemd-logind[1545]: Removed session 12. Feb 9 12:11:19.405340 sshd[4701]: Accepted publickey for core from 147.75.109.163 port 41750 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:19.406218 sshd[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:19.408829 systemd-logind[1545]: New session 13 of user core. Feb 9 12:11:19.409301 systemd[1]: Started session-13.scope. Feb 9 12:11:19.540319 sshd[4701]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:19.542124 systemd[1]: sshd@47-139.178.89.23:22-147.75.109.163:41750.service: Deactivated successfully. Feb 9 12:11:19.542925 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 12:11:19.542957 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. Feb 9 12:11:19.543647 systemd-logind[1545]: Removed session 13. Feb 9 12:11:19.703077 sshd[4648]: Failed password for invalid user efthym from 39.109.116.167 port 50405 ssh2 Feb 9 12:11:20.122738 sshd[4648]: Received disconnect from 39.109.116.167 port 50405:11: Bye Bye [preauth] Feb 9 12:11:20.122738 sshd[4648]: Disconnected from invalid user efthym 39.109.116.167 port 50405 [preauth] Feb 9 12:11:20.125082 systemd[1]: sshd@44-139.178.89.23:22-39.109.116.167:50405.service: Deactivated successfully. Feb 9 12:11:21.817594 systemd[1]: Started sshd@48-139.178.89.23:22-209.97.179.25:42598.service. Feb 9 12:11:22.616766 sshd[4733]: Invalid user jordan from 209.97.179.25 port 42598 Feb 9 12:11:22.622656 sshd[4733]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:22.623790 sshd[4733]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:11:22.623878 sshd[4733]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:11:22.624928 sshd[4733]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:24.300905 sshd[4733]: Failed password for invalid user jordan from 209.97.179.25 port 42598 ssh2 Feb 9 12:11:24.546708 systemd[1]: Started sshd@49-139.178.89.23:22-147.75.109.163:51340.service. 
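The interactive logins from 147.75.109.163 appear as matched systemd-logind pairs ("New session N of user core" / "Removed session N"), so per-session durations can be recovered from the journal text alone. A minimal sketch assuming the timestamp and message layout shown here; the parsing helpers are illustrative, not a logind API:

import re
from datetime import datetime

# Pairs entries like the ones above:
#   systemd-logind[1545]: New session 12 of user core.
#   systemd-logind[1545]: Removed session 12.
NEW = re.compile(r"^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user \S+")
REMOVED = re.compile(r"^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)\.")

def session_durations(lines, year=2024):
    opened, durations = {}, {}
    for line in lines:
        if m := NEW.match(line):
            opened[m.group(2)] = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
        elif (m := REMOVED.match(line)) and m.group(2) in opened:
            closed = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
            durations[m.group(2)] = (closed - opened.pop(m.group(2))).total_seconds()
    return durations

lines = [
    "Feb 9 12:11:18.885509 systemd-logind[1545]: New session 12 of user core.",
    "Feb 9 12:11:19.369886 systemd-logind[1545]: Removed session 12.",
]
print(session_durations(lines))  # {'12': 0.484377}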
Feb 9 12:11:24.592771 sshd[4737]: Accepted publickey for core from 147.75.109.163 port 51340 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:24.595805 sshd[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:24.606494 systemd-logind[1545]: New session 14 of user core. Feb 9 12:11:24.609373 systemd[1]: Started session-14.scope. Feb 9 12:11:24.703257 sshd[4737]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:24.704758 systemd[1]: sshd@49-139.178.89.23:22-147.75.109.163:51340.service: Deactivated successfully. Feb 9 12:11:24.705437 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 12:11:24.705477 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. Feb 9 12:11:24.706043 systemd-logind[1545]: Removed session 14. Feb 9 12:11:25.805422 sshd[4733]: Received disconnect from 209.97.179.25 port 42598:11: Bye Bye [preauth] Feb 9 12:11:25.805422 sshd[4733]: Disconnected from invalid user jordan 209.97.179.25 port 42598 [preauth] Feb 9 12:11:25.807935 systemd[1]: sshd@48-139.178.89.23:22-209.97.179.25:42598.service: Deactivated successfully. Feb 9 12:11:29.710491 systemd[1]: Started sshd@50-139.178.89.23:22-147.75.109.163:51356.service. Feb 9 12:11:29.747027 sshd[4768]: Accepted publickey for core from 147.75.109.163 port 51356 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:29.747940 sshd[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:29.751052 systemd-logind[1545]: New session 15 of user core. Feb 9 12:11:29.751673 systemd[1]: Started session-15.scope. Feb 9 12:11:29.842414 sshd[4768]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:29.843769 systemd[1]: sshd@50-139.178.89.23:22-147.75.109.163:51356.service: Deactivated successfully. Feb 9 12:11:29.844408 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. Feb 9 12:11:29.844416 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 12:11:29.844966 systemd-logind[1545]: Removed session 15. Feb 9 12:11:34.848663 systemd[1]: Started sshd@51-139.178.89.23:22-147.75.109.163:60252.service. Feb 9 12:11:34.885402 sshd[4794]: Accepted publickey for core from 147.75.109.163 port 60252 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:34.886454 sshd[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:34.889997 systemd-logind[1545]: New session 16 of user core. Feb 9 12:11:34.890815 systemd[1]: Started session-16.scope. Feb 9 12:11:34.983389 sshd[4794]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:34.984903 systemd[1]: sshd@51-139.178.89.23:22-147.75.109.163:60252.service: Deactivated successfully. Feb 9 12:11:34.985517 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 12:11:34.985556 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. Feb 9 12:11:34.986010 systemd-logind[1545]: Removed session 16. Feb 9 12:11:39.989676 systemd[1]: Started sshd@52-139.178.89.23:22-147.75.109.163:60254.service. Feb 9 12:11:40.026124 sshd[4820]: Accepted publickey for core from 147.75.109.163 port 60254 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:40.027072 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:40.030105 systemd-logind[1545]: New session 17 of user core. Feb 9 12:11:40.030755 systemd[1]: Started session-17.scope. 
Feb 9 12:11:40.116507 sshd[4820]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:40.117975 systemd[1]: sshd@52-139.178.89.23:22-147.75.109.163:60254.service: Deactivated successfully. Feb 9 12:11:40.118674 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 12:11:40.118714 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. Feb 9 12:11:40.119178 systemd-logind[1545]: Removed session 17. Feb 9 12:11:44.021418 systemd[1]: Started sshd@53-139.178.89.23:22-141.98.11.11:55682.service. Feb 9 12:11:45.122375 systemd[1]: Started sshd@54-139.178.89.23:22-147.75.109.163:32978.service. Feb 9 12:11:45.158838 sshd[4849]: Accepted publickey for core from 147.75.109.163 port 32978 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:45.160051 sshd[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:45.163385 systemd-logind[1545]: New session 18 of user core. Feb 9 12:11:45.163969 systemd[1]: Started session-18.scope. Feb 9 12:11:45.253220 sshd[4849]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:45.254818 systemd[1]: sshd@54-139.178.89.23:22-147.75.109.163:32978.service: Deactivated successfully. Feb 9 12:11:45.255529 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 12:11:45.255573 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. Feb 9 12:11:45.256109 systemd-logind[1545]: Removed session 18. Feb 9 12:11:45.358043 sshd[4847]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.11 user=root Feb 9 12:11:46.858587 sshd[4847]: Failed password for root from 141.98.11.11 port 55682 ssh2 Feb 9 12:11:47.602453 sshd[4847]: Connection closed by authenticating user root 141.98.11.11 port 55682 [preauth] Feb 9 12:11:47.604922 systemd[1]: sshd@53-139.178.89.23:22-141.98.11.11:55682.service: Deactivated successfully. Feb 9 12:11:50.261322 systemd[1]: Started sshd@55-139.178.89.23:22-147.75.109.163:32988.service. Feb 9 12:11:50.324952 sshd[4877]: Accepted publickey for core from 147.75.109.163 port 32988 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:50.325610 sshd[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:50.327933 systemd-logind[1545]: New session 19 of user core. Feb 9 12:11:50.328380 systemd[1]: Started session-19.scope. Feb 9 12:11:50.442231 sshd[4877]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:50.444116 systemd[1]: sshd@55-139.178.89.23:22-147.75.109.163:32988.service: Deactivated successfully. Feb 9 12:11:50.445041 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 12:11:50.445058 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. Feb 9 12:11:50.445827 systemd-logind[1545]: Removed session 19. Feb 9 12:11:55.449558 systemd[1]: Started sshd@56-139.178.89.23:22-147.75.109.163:58456.service. Feb 9 12:11:55.486011 sshd[4906]: Accepted publickey for core from 147.75.109.163 port 58456 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:11:55.486661 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:11:55.489035 systemd-logind[1545]: New session 20 of user core. Feb 9 12:11:55.489532 systemd[1]: Started session-20.scope. 
Feb 9 12:11:55.576515 sshd[4906]: pam_unix(sshd:session): session closed for user core Feb 9 12:11:55.577837 systemd[1]: sshd@56-139.178.89.23:22-147.75.109.163:58456.service: Deactivated successfully. Feb 9 12:11:55.578439 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. Feb 9 12:11:55.578452 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 12:11:55.578960 systemd-logind[1545]: Removed session 20. Feb 9 12:11:56.684957 systemd[1]: Started sshd@57-139.178.89.23:22-198.12.118.109:11756.service. Feb 9 12:11:56.849426 sshd[4932]: Invalid user weibanghe from 198.12.118.109 port 11756 Feb 9 12:11:56.855692 sshd[4932]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:56.856664 sshd[4932]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:11:56.856750 sshd[4932]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:11:56.857640 sshd[4932]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:57.480774 systemd[1]: Started sshd@58-139.178.89.23:22-45.64.3.61:41614.service. Feb 9 12:11:58.533533 sshd[4932]: Failed password for invalid user weibanghe from 198.12.118.109 port 11756 ssh2 Feb 9 12:11:58.926242 sshd[4934]: Invalid user niyazi from 45.64.3.61 port 41614 Feb 9 12:11:58.932592 sshd[4934]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:58.933938 sshd[4934]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:11:58.934055 sshd[4934]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:11:58.935188 sshd[4934]: pam_faillock(sshd:auth): User unknown Feb 9 12:11:59.048744 sshd[4932]: Received disconnect from 198.12.118.109 port 11756:11: Bye Bye [preauth] Feb 9 12:11:59.048744 sshd[4932]: Disconnected from invalid user weibanghe 198.12.118.109 port 11756 [preauth] Feb 9 12:11:59.051422 systemd[1]: sshd@57-139.178.89.23:22-198.12.118.109:11756.service: Deactivated successfully. Feb 9 12:12:00.583414 systemd[1]: Started sshd@59-139.178.89.23:22-147.75.109.163:58466.service. Feb 9 12:12:00.619921 sshd[4938]: Accepted publickey for core from 147.75.109.163 port 58466 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:00.620875 sshd[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:00.624014 systemd-logind[1545]: New session 21 of user core. Feb 9 12:12:00.624648 systemd[1]: Started session-21.scope. Feb 9 12:12:00.715358 sshd[4938]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:00.716946 systemd[1]: sshd@59-139.178.89.23:22-147.75.109.163:58466.service: Deactivated successfully. Feb 9 12:12:00.717663 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 12:12:00.717708 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. Feb 9 12:12:00.718181 systemd-logind[1545]: Removed session 21. Feb 9 12:12:00.887302 sshd[4934]: Failed password for invalid user niyazi from 45.64.3.61 port 41614 ssh2 Feb 9 12:12:01.366441 sshd[4934]: Received disconnect from 45.64.3.61 port 41614:11: Bye Bye [preauth] Feb 9 12:12:01.366441 sshd[4934]: Disconnected from invalid user niyazi 45.64.3.61 port 41614 [preauth] Feb 9 12:12:01.368999 systemd[1]: sshd@58-139.178.89.23:22-45.64.3.61:41614.service: Deactivated successfully. Feb 9 12:12:02.542889 sshd[4524]: Timeout before authentication for 180.107.140.47 port 54602 Feb 9 12:12:02.544288 systemd[1]: sshd@33-139.178.89.23:22-180.107.140.47:54602.service: Deactivated successfully. 
Feb 9 12:12:05.722821 systemd[1]: Started sshd@60-139.178.89.23:22-147.75.109.163:39586.service. Feb 9 12:12:05.768255 sshd[4971]: Accepted publickey for core from 147.75.109.163 port 39586 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:05.768991 sshd[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:05.771586 systemd-logind[1545]: New session 22 of user core. Feb 9 12:12:05.772001 systemd[1]: Started session-22.scope. Feb 9 12:12:05.860831 sshd[4971]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:05.862330 systemd[1]: sshd@60-139.178.89.23:22-147.75.109.163:39586.service: Deactivated successfully. Feb 9 12:12:05.862972 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. Feb 9 12:12:05.863026 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 12:12:05.863570 systemd-logind[1545]: Removed session 22. Feb 9 12:12:10.867374 systemd[1]: Started sshd@61-139.178.89.23:22-147.75.109.163:39594.service. Feb 9 12:12:10.903986 sshd[4999]: Accepted publickey for core from 147.75.109.163 port 39594 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:10.904659 sshd[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:10.907093 systemd-logind[1545]: New session 23 of user core. Feb 9 12:12:10.907608 systemd[1]: Started session-23.scope. Feb 9 12:12:10.996074 sshd[4999]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:10.997525 systemd[1]: sshd@61-139.178.89.23:22-147.75.109.163:39594.service: Deactivated successfully. Feb 9 12:12:10.998152 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. Feb 9 12:12:10.998165 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 12:12:10.998778 systemd-logind[1545]: Removed session 23. Feb 9 12:12:16.002920 systemd[1]: Started sshd@62-139.178.89.23:22-147.75.109.163:41238.service. Feb 9 12:12:16.039622 sshd[5025]: Accepted publickey for core from 147.75.109.163 port 41238 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:16.040547 sshd[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:16.043780 systemd-logind[1545]: New session 24 of user core. Feb 9 12:12:16.044443 systemd[1]: Started session-24.scope. Feb 9 12:12:16.137471 sshd[5025]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:16.139983 systemd[1]: sshd@62-139.178.89.23:22-147.75.109.163:41238.service: Deactivated successfully. Feb 9 12:12:16.141183 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit. Feb 9 12:12:16.141237 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 12:12:16.142318 systemd-logind[1545]: Removed session 24. Feb 9 12:12:16.645587 systemd[1]: Started sshd@63-139.178.89.23:22-209.97.179.25:33242.service. Feb 9 12:12:17.506796 sshd[5051]: Invalid user wangbh from 209.97.179.25 port 33242 Feb 9 12:12:17.513011 sshd[5051]: pam_faillock(sshd:auth): User unknown Feb 9 12:12:17.513966 sshd[5051]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:12:17.514048 sshd[5051]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:12:17.514934 sshd[5051]: pam_faillock(sshd:auth): User unknown Feb 9 12:12:19.742450 sshd[5051]: Failed password for invalid user wangbh from 209.97.179.25 port 33242 ssh2 Feb 9 12:12:21.139301 systemd[1]: Started sshd@64-139.178.89.23:22-147.75.109.163:41252.service. 
Feb 9 12:12:21.176301 sshd[5053]: Accepted publickey for core from 147.75.109.163 port 41252 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:21.177303 sshd[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:21.180760 systemd-logind[1545]: New session 25 of user core. Feb 9 12:12:21.181439 systemd[1]: Started session-25.scope. Feb 9 12:12:21.271581 sshd[5053]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:21.273130 systemd[1]: sshd@64-139.178.89.23:22-147.75.109.163:41252.service: Deactivated successfully. Feb 9 12:12:21.273861 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit. Feb 9 12:12:21.273885 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 12:12:21.274552 systemd-logind[1545]: Removed session 25. Feb 9 12:12:21.616169 sshd[5051]: Received disconnect from 209.97.179.25 port 33242:11: Bye Bye [preauth] Feb 9 12:12:21.616169 sshd[5051]: Disconnected from invalid user wangbh 209.97.179.25 port 33242 [preauth] Feb 9 12:12:21.618695 systemd[1]: sshd@63-139.178.89.23:22-209.97.179.25:33242.service: Deactivated successfully. Feb 9 12:12:22.592340 systemd[1]: Started sshd@65-139.178.89.23:22-39.109.116.167:59814.service. Feb 9 12:12:22.678745 systemd[1]: Started sshd@66-139.178.89.23:22-180.107.140.47:49496.service. Feb 9 12:12:23.534672 sshd[5081]: Invalid user rkaneko from 39.109.116.167 port 59814 Feb 9 12:12:23.540713 sshd[5081]: pam_faillock(sshd:auth): User unknown Feb 9 12:12:23.541702 sshd[5081]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:12:23.541791 sshd[5081]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:12:23.542709 sshd[5081]: pam_faillock(sshd:auth): User unknown Feb 9 12:12:25.926235 sshd[5081]: Failed password for invalid user rkaneko from 39.109.116.167 port 59814 ssh2 Feb 9 12:12:26.256488 systemd[1]: Started sshd@67-139.178.89.23:22-119.91.207.218:47296.service. Feb 9 12:12:26.273713 systemd[1]: Started sshd@68-139.178.89.23:22-147.75.109.163:56788.service. Feb 9 12:12:26.311019 sshd[5088]: Accepted publickey for core from 147.75.109.163 port 56788 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:26.311903 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:26.314915 systemd-logind[1545]: New session 26 of user core. Feb 9 12:12:26.315637 systemd[1]: Started session-26.scope. Feb 9 12:12:26.401312 sshd[5088]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:26.402910 systemd[1]: sshd@68-139.178.89.23:22-147.75.109.163:56788.service: Deactivated successfully. Feb 9 12:12:26.403648 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 12:12:26.403661 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit. Feb 9 12:12:26.404152 systemd-logind[1545]: Removed session 26. 
Feb 9 12:12:27.148432 sshd[5086]: Invalid user highhome from 119.91.207.218 port 47296 Feb 9 12:12:27.154635 sshd[5086]: pam_faillock(sshd:auth): User unknown Feb 9 12:12:27.155633 sshd[5086]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:12:27.155725 sshd[5086]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.207.218 Feb 9 12:12:27.156746 sshd[5086]: pam_faillock(sshd:auth): User unknown Feb 9 12:12:28.209961 sshd[5081]: Received disconnect from 39.109.116.167 port 59814:11: Bye Bye [preauth] Feb 9 12:12:28.209961 sshd[5081]: Disconnected from invalid user rkaneko 39.109.116.167 port 59814 [preauth] Feb 9 12:12:28.212538 systemd[1]: sshd@65-139.178.89.23:22-39.109.116.167:59814.service: Deactivated successfully. Feb 9 12:12:29.755580 sshd[5086]: Failed password for invalid user highhome from 119.91.207.218 port 47296 ssh2 Feb 9 12:12:31.408708 systemd[1]: Started sshd@69-139.178.89.23:22-147.75.109.163:56796.service. Feb 9 12:12:31.445709 sshd[5115]: Accepted publickey for core from 147.75.109.163 port 56796 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:31.446388 sshd[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:31.448868 systemd-logind[1545]: New session 27 of user core. Feb 9 12:12:31.449462 systemd[1]: Started session-27.scope. Feb 9 12:12:31.574327 sshd[5115]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:31.575703 systemd[1]: sshd@69-139.178.89.23:22-147.75.109.163:56796.service: Deactivated successfully. Feb 9 12:12:31.576433 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 12:12:31.576457 systemd-logind[1545]: Session 27 logged out. Waiting for processes to exit. Feb 9 12:12:31.576971 systemd-logind[1545]: Removed session 27. Feb 9 12:12:31.814412 sshd[5086]: Received disconnect from 119.91.207.218 port 47296:11: Bye Bye [preauth] Feb 9 12:12:31.814412 sshd[5086]: Disconnected from invalid user highhome 119.91.207.218 port 47296 [preauth] Feb 9 12:12:31.816870 systemd[1]: sshd@67-139.178.89.23:22-119.91.207.218:47296.service: Deactivated successfully. Feb 9 12:12:36.581264 systemd[1]: Started sshd@70-139.178.89.23:22-147.75.109.163:45166.service. Feb 9 12:12:36.618481 sshd[5143]: Accepted publickey for core from 147.75.109.163 port 45166 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:36.621726 sshd[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:36.632185 systemd-logind[1545]: New session 28 of user core. Feb 9 12:12:36.635233 systemd[1]: Started session-28.scope. Feb 9 12:12:36.756476 sshd[5143]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:36.758819 systemd[1]: sshd@70-139.178.89.23:22-147.75.109.163:45166.service: Deactivated successfully. Feb 9 12:12:36.759810 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 12:12:36.759860 systemd-logind[1545]: Session 28 logged out. Waiting for processes to exit. Feb 9 12:12:36.760809 systemd-logind[1545]: Removed session 28. Feb 9 12:12:41.762599 systemd[1]: Started sshd@71-139.178.89.23:22-147.75.109.163:45172.service. Feb 9 12:12:41.800390 sshd[5169]: Accepted publickey for core from 147.75.109.163 port 45172 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:41.803590 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:41.814501 systemd-logind[1545]: New session 29 of user core. 
Feb 9 12:12:41.817322 systemd[1]: Started session-29.scope. Feb 9 12:12:41.906202 sshd[5169]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:41.907997 systemd[1]: sshd@71-139.178.89.23:22-147.75.109.163:45172.service: Deactivated successfully. Feb 9 12:12:41.908913 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 12:12:41.908916 systemd-logind[1545]: Session 29 logged out. Waiting for processes to exit. Feb 9 12:12:41.909684 systemd-logind[1545]: Removed session 29. Feb 9 12:12:46.913453 systemd[1]: Started sshd@72-139.178.89.23:22-147.75.109.163:57518.service. Feb 9 12:12:46.949952 sshd[5198]: Accepted publickey for core from 147.75.109.163 port 57518 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:46.950895 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:46.954227 systemd-logind[1545]: New session 30 of user core. Feb 9 12:12:46.955015 systemd[1]: Started session-30.scope. Feb 9 12:12:47.043350 sshd[5198]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:47.044851 systemd[1]: sshd@72-139.178.89.23:22-147.75.109.163:57518.service: Deactivated successfully. Feb 9 12:12:47.045527 systemd[1]: session-30.scope: Deactivated successfully. Feb 9 12:12:47.045569 systemd-logind[1545]: Session 30 logged out. Waiting for processes to exit. Feb 9 12:12:47.046100 systemd-logind[1545]: Removed session 30. Feb 9 12:12:47.128883 systemd[1]: Started sshd@73-139.178.89.23:22-198.12.118.109:30290.service. Feb 9 12:12:47.330322 sshd[5225]: Invalid user nhosseini from 198.12.118.109 port 30290 Feb 9 12:12:47.336544 sshd[5225]: pam_faillock(sshd:auth): User unknown Feb 9 12:12:47.337577 sshd[5225]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:12:47.337668 sshd[5225]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:12:47.338595 sshd[5225]: pam_faillock(sshd:auth): User unknown Feb 9 12:12:49.682190 sshd[5225]: Failed password for invalid user nhosseini from 198.12.118.109 port 30290 ssh2 Feb 9 12:12:51.961461 sshd[5225]: Received disconnect from 198.12.118.109 port 30290:11: Bye Bye [preauth] Feb 9 12:12:51.961461 sshd[5225]: Disconnected from invalid user nhosseini 198.12.118.109 port 30290 [preauth] Feb 9 12:12:51.963975 systemd[1]: sshd@73-139.178.89.23:22-198.12.118.109:30290.service: Deactivated successfully. Feb 9 12:12:52.050385 systemd[1]: Started sshd@74-139.178.89.23:22-147.75.109.163:57530.service. Feb 9 12:12:52.113510 sshd[5229]: Accepted publickey for core from 147.75.109.163 port 57530 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:52.116739 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:52.128097 systemd-logind[1545]: New session 31 of user core. Feb 9 12:12:52.131222 systemd[1]: Started session-31.scope. Feb 9 12:12:52.224422 sshd[5229]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:52.225818 systemd[1]: sshd@74-139.178.89.23:22-147.75.109.163:57530.service: Deactivated successfully. Feb 9 12:12:52.226434 systemd-logind[1545]: Session 31 logged out. Waiting for processes to exit. Feb 9 12:12:52.226447 systemd[1]: session-31.scope: Deactivated successfully. Feb 9 12:12:52.226966 systemd-logind[1545]: Removed session 31. Feb 9 12:12:57.231246 systemd[1]: Started sshd@75-139.178.89.23:22-147.75.109.163:38556.service. 
Feb 9 12:12:57.267759 sshd[5255]: Accepted publickey for core from 147.75.109.163 port 38556 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:12:57.268640 sshd[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:12:57.271704 systemd-logind[1545]: New session 32 of user core. Feb 9 12:12:57.272329 systemd[1]: Started session-32.scope. Feb 9 12:12:57.361361 sshd[5255]: pam_unix(sshd:session): session closed for user core Feb 9 12:12:57.362855 systemd[1]: sshd@75-139.178.89.23:22-147.75.109.163:38556.service: Deactivated successfully. Feb 9 12:12:57.363543 systemd[1]: session-32.scope: Deactivated successfully. Feb 9 12:12:57.363559 systemd-logind[1545]: Session 32 logged out. Waiting for processes to exit. Feb 9 12:12:57.364103 systemd-logind[1545]: Removed session 32. Feb 9 12:13:02.368299 systemd[1]: Started sshd@76-139.178.89.23:22-147.75.109.163:38570.service. Feb 9 12:13:02.404829 sshd[5280]: Accepted publickey for core from 147.75.109.163 port 38570 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:02.405731 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:02.409000 systemd-logind[1545]: New session 33 of user core. Feb 9 12:13:02.409832 systemd[1]: Started session-33.scope. Feb 9 12:13:02.501024 sshd[5280]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:02.502437 systemd[1]: sshd@76-139.178.89.23:22-147.75.109.163:38570.service: Deactivated successfully. Feb 9 12:13:02.503003 systemd-logind[1545]: Session 33 logged out. Waiting for processes to exit. Feb 9 12:13:02.503043 systemd[1]: session-33.scope: Deactivated successfully. Feb 9 12:13:02.503570 systemd-logind[1545]: Removed session 33. Feb 9 12:13:02.874404 systemd[1]: Started sshd@77-139.178.89.23:22-45.64.3.61:60380.service. Feb 9 12:13:06.458825 sshd[5306]: Invalid user soheyl from 45.64.3.61 port 60380 Feb 9 12:13:06.464989 sshd[5306]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:06.466019 sshd[5306]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:13:06.466108 sshd[5306]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:13:06.467036 sshd[5306]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:07.508192 systemd[1]: Started sshd@78-139.178.89.23:22-147.75.109.163:59478.service. Feb 9 12:13:07.545089 sshd[5308]: Accepted publickey for core from 147.75.109.163 port 59478 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:07.546041 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:07.549099 systemd-logind[1545]: New session 34 of user core. Feb 9 12:13:07.550036 systemd[1]: Started session-34.scope. Feb 9 12:13:07.636567 sshd[5308]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:07.637956 systemd[1]: sshd@78-139.178.89.23:22-147.75.109.163:59478.service: Deactivated successfully. Feb 9 12:13:07.638583 systemd[1]: session-34.scope: Deactivated successfully. Feb 9 12:13:07.638600 systemd-logind[1545]: Session 34 logged out. Waiting for processes to exit. Feb 9 12:13:07.639069 systemd-logind[1545]: Removed session 34. 
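Every accepted login from 147.75.109.163 produces a pair of pam_unix(sshd:session) entries from the same sshd PID, "session opened for user core" followed shortly afterwards by "session closed for user core". Below is a hedged sketch of pairing those entries to measure how long each session stayed open, assuming the timestamp format shown; the journal excerpt omits the year, so one is supplied here as an assumption.

```python
# Illustrative sketch: pair "session opened"/"session closed" pam_unix entries by
# sshd PID and report how long each session lasted. Timestamps look like
# "Feb 9 12:12:05.768991" with no year, so a year (2024) is assumed below.
import re
import sys
from datetime import datetime

ENTRY_RE = re.compile(
    r"(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) sshd\[(\d+)\]: "
    r"pam_unix\(sshd:session\): session (opened|closed) for user core"
)

def parse_ts(text, year=2024):
    # Assumed year; the journal lines above carry only month, day and time.
    return datetime.strptime(f"{year} {text}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened = {}  # sshd PID -> timestamp of the matching "session opened" entry
    for line in lines:
        for ts, pid, event in ENTRY_RE.findall(line):
            if event == "opened":
                opened[pid] = parse_ts(ts)
            elif pid in opened:
                yield pid, (parse_ts(ts) - opened.pop(pid)).total_seconds()

if __name__ == "__main__":
    for pid, seconds in session_durations(sys.stdin):
        print(f"sshd[{pid}] session lasted {seconds:.3f}s")
```

Against the entries above this would show very short-lived sessions (for example, session 22 from sshd[4971] opens at 12:12:05.768991 and closes at 12:12:05.860831, roughly 90 ms later).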
Feb 9 12:13:07.751468 sshd[5306]: Failed password for invalid user soheyl from 45.64.3.61 port 60380 ssh2 Feb 9 12:13:08.798579 sshd[5306]: Received disconnect from 45.64.3.61 port 60380:11: Bye Bye [preauth] Feb 9 12:13:08.798579 sshd[5306]: Disconnected from invalid user soheyl 45.64.3.61 port 60380 [preauth] Feb 9 12:13:08.801011 systemd[1]: sshd@77-139.178.89.23:22-45.64.3.61:60380.service: Deactivated successfully. Feb 9 12:13:12.092875 systemd[1]: Started sshd@79-139.178.89.23:22-209.97.179.25:52110.service. Feb 9 12:13:12.643813 systemd[1]: Started sshd@80-139.178.89.23:22-147.75.109.163:59480.service. Feb 9 12:13:12.679938 sshd[5340]: Accepted publickey for core from 147.75.109.163 port 59480 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:12.680778 sshd[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:12.683907 systemd-logind[1545]: New session 35 of user core. Feb 9 12:13:12.684699 systemd[1]: Started session-35.scope. Feb 9 12:13:12.773367 sshd[5340]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:12.774761 systemd[1]: sshd@80-139.178.89.23:22-147.75.109.163:59480.service: Deactivated successfully. Feb 9 12:13:12.775426 systemd[1]: session-35.scope: Deactivated successfully. Feb 9 12:13:12.775474 systemd-logind[1545]: Session 35 logged out. Waiting for processes to exit. Feb 9 12:13:12.776024 systemd-logind[1545]: Removed session 35. Feb 9 12:13:12.933363 sshd[5338]: Invalid user mr from 209.97.179.25 port 52110 Feb 9 12:13:12.939411 sshd[5338]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:12.940550 sshd[5338]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:13:12.940640 sshd[5338]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:13:12.941707 sshd[5338]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:14.718252 sshd[5338]: Failed password for invalid user mr from 209.97.179.25 port 52110 ssh2 Feb 9 12:13:16.282537 sshd[5338]: Received disconnect from 209.97.179.25 port 52110:11: Bye Bye [preauth] Feb 9 12:13:16.282537 sshd[5338]: Disconnected from invalid user mr 209.97.179.25 port 52110 [preauth] Feb 9 12:13:16.284981 systemd[1]: sshd@79-139.178.89.23:22-209.97.179.25:52110.service: Deactivated successfully. Feb 9 12:13:17.779980 systemd[1]: Started sshd@81-139.178.89.23:22-147.75.109.163:45188.service. Feb 9 12:13:17.843512 sshd[5368]: Accepted publickey for core from 147.75.109.163 port 45188 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:17.846945 sshd[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:17.857775 systemd-logind[1545]: New session 36 of user core. Feb 9 12:13:17.860294 systemd[1]: Started session-36.scope. Feb 9 12:13:17.950434 sshd[5368]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:17.951880 systemd[1]: sshd@81-139.178.89.23:22-147.75.109.163:45188.service: Deactivated successfully. Feb 9 12:13:17.952537 systemd-logind[1545]: Session 36 logged out. Waiting for processes to exit. Feb 9 12:13:17.952545 systemd[1]: session-36.scope: Deactivated successfully. Feb 9 12:13:17.953050 systemd-logind[1545]: Removed session 36. Feb 9 12:13:22.956821 systemd[1]: Started sshd@82-139.178.89.23:22-147.75.109.163:45196.service. 
Feb 9 12:13:22.993142 sshd[5394]: Accepted publickey for core from 147.75.109.163 port 45196 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:22.993905 sshd[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:22.996108 systemd-logind[1545]: New session 37 of user core. Feb 9 12:13:22.996629 systemd[1]: Started session-37.scope. Feb 9 12:13:23.082760 sshd[5394]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:23.084055 systemd[1]: sshd@82-139.178.89.23:22-147.75.109.163:45196.service: Deactivated successfully. Feb 9 12:13:23.084666 systemd-logind[1545]: Session 37 logged out. Waiting for processes to exit. Feb 9 12:13:23.084677 systemd[1]: session-37.scope: Deactivated successfully. Feb 9 12:13:23.085108 systemd-logind[1545]: Removed session 37. Feb 9 12:13:27.728753 systemd[1]: Started sshd@83-139.178.89.23:22-39.109.116.167:40990.service. Feb 9 12:13:28.085531 systemd[1]: Started sshd@84-139.178.89.23:22-147.75.109.163:40566.service. Feb 9 12:13:28.123548 sshd[5424]: Accepted publickey for core from 147.75.109.163 port 40566 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:28.126739 sshd[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:28.137531 systemd-logind[1545]: New session 38 of user core. Feb 9 12:13:28.140739 systemd[1]: Started session-38.scope. Feb 9 12:13:28.230532 sshd[5424]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:28.231947 systemd[1]: sshd@84-139.178.89.23:22-147.75.109.163:40566.service: Deactivated successfully. Feb 9 12:13:28.232667 systemd[1]: session-38.scope: Deactivated successfully. Feb 9 12:13:28.232681 systemd-logind[1545]: Session 38 logged out. Waiting for processes to exit. Feb 9 12:13:28.233146 systemd-logind[1545]: Removed session 38. Feb 9 12:13:28.657987 sshd[5422]: Invalid user dain from 39.109.116.167 port 40990 Feb 9 12:13:28.664132 sshd[5422]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:28.665139 sshd[5422]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:13:28.665264 sshd[5422]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:13:28.666151 sshd[5422]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:30.605864 systemd[1]: Started sshd@85-139.178.89.23:22-180.107.140.47:32798.service. Feb 9 12:13:30.637468 sshd[5422]: Failed password for invalid user dain from 39.109.116.167 port 40990 ssh2 Feb 9 12:13:30.843367 sshd[5422]: Received disconnect from 39.109.116.167 port 40990:11: Bye Bye [preauth] Feb 9 12:13:30.843367 sshd[5422]: Disconnected from invalid user dain 39.109.116.167 port 40990 [preauth] Feb 9 12:13:30.845924 systemd[1]: sshd@83-139.178.89.23:22-39.109.116.167:40990.service: Deactivated successfully. Feb 9 12:13:32.676086 sshd[5450]: Invalid user mehrnoush from 180.107.140.47 port 32798 Feb 9 12:13:32.682379 sshd[5450]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:32.683356 sshd[5450]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:13:32.683442 sshd[5450]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.107.140.47 Feb 9 12:13:32.684289 sshd[5450]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:33.236828 systemd[1]: Started sshd@86-139.178.89.23:22-147.75.109.163:40572.service. 
Feb 9 12:13:33.273366 sshd[5454]: Accepted publickey for core from 147.75.109.163 port 40572 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:33.274268 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:33.277409 systemd-logind[1545]: New session 39 of user core. Feb 9 12:13:33.278183 systemd[1]: Started session-39.scope. Feb 9 12:13:33.366970 sshd[5454]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:33.368437 systemd[1]: sshd@86-139.178.89.23:22-147.75.109.163:40572.service: Deactivated successfully. Feb 9 12:13:33.369073 systemd-logind[1545]: Session 39 logged out. Waiting for processes to exit. Feb 9 12:13:33.369076 systemd[1]: session-39.scope: Deactivated successfully. Feb 9 12:13:33.369709 systemd-logind[1545]: Removed session 39. Feb 9 12:13:34.204797 sshd[5450]: Failed password for invalid user mehrnoush from 180.107.140.47 port 32798 ssh2 Feb 9 12:13:35.036715 sshd[5450]: Received disconnect from 180.107.140.47 port 32798:11: Bye Bye [preauth] Feb 9 12:13:35.036715 sshd[5450]: Disconnected from invalid user mehrnoush 180.107.140.47 port 32798 [preauth] Feb 9 12:13:35.039249 systemd[1]: sshd@85-139.178.89.23:22-180.107.140.47:32798.service: Deactivated successfully. Feb 9 12:13:36.278297 systemd[1]: Started sshd@87-139.178.89.23:22-198.12.118.109:48822.service. Feb 9 12:13:36.499842 sshd[5484]: Invalid user yashar from 198.12.118.109 port 48822 Feb 9 12:13:36.505972 sshd[5484]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:36.506969 sshd[5484]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:13:36.507057 sshd[5484]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:13:36.508002 sshd[5484]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:37.913076 sshd[5484]: Failed password for invalid user yashar from 198.12.118.109 port 48822 ssh2 Feb 9 12:13:38.373598 systemd[1]: Started sshd@88-139.178.89.23:22-147.75.109.163:45338.service. Feb 9 12:13:38.435791 sshd[5484]: Received disconnect from 198.12.118.109 port 48822:11: Bye Bye [preauth] Feb 9 12:13:38.435791 sshd[5484]: Disconnected from invalid user yashar 198.12.118.109 port 48822 [preauth] Feb 9 12:13:38.436470 sshd[5486]: Accepted publickey for core from 147.75.109.163 port 45338 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:38.438764 systemd[1]: sshd@87-139.178.89.23:22-198.12.118.109:48822.service: Deactivated successfully. Feb 9 12:13:38.440175 sshd[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:38.445160 systemd-logind[1545]: New session 40 of user core. Feb 9 12:13:38.445758 systemd[1]: Started session-40.scope. Feb 9 12:13:38.552834 sshd[5486]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:38.555380 systemd[1]: sshd@88-139.178.89.23:22-147.75.109.163:45338.service: Deactivated successfully. Feb 9 12:13:38.556695 systemd[1]: session-40.scope: Deactivated successfully. Feb 9 12:13:38.556739 systemd-logind[1545]: Session 40 logged out. Waiting for processes to exit. Feb 9 12:13:38.557896 systemd-logind[1545]: Removed session 40. Feb 9 12:13:43.559457 systemd[1]: Started sshd@89-139.178.89.23:22-147.75.109.163:45350.service. 
Feb 9 12:13:43.595773 sshd[5514]: Accepted publickey for core from 147.75.109.163 port 45350 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:43.596693 sshd[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:43.599975 systemd-logind[1545]: New session 41 of user core. Feb 9 12:13:43.600704 systemd[1]: Started session-41.scope. Feb 9 12:13:43.692776 sshd[5514]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:43.694174 systemd[1]: sshd@89-139.178.89.23:22-147.75.109.163:45350.service: Deactivated successfully. Feb 9 12:13:43.694859 systemd[1]: session-41.scope: Deactivated successfully. Feb 9 12:13:43.694890 systemd-logind[1545]: Session 41 logged out. Waiting for processes to exit. Feb 9 12:13:43.695375 systemd-logind[1545]: Removed session 41. Feb 9 12:13:44.668720 systemd[1]: Started sshd@90-139.178.89.23:22-119.91.207.218:34280.service. Feb 9 12:13:46.390433 sshd[5540]: Invalid user soheyl from 119.91.207.218 port 34280 Feb 9 12:13:46.396634 sshd[5540]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:46.397659 sshd[5540]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:13:46.397748 sshd[5540]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.207.218 Feb 9 12:13:46.398678 sshd[5540]: pam_faillock(sshd:auth): User unknown Feb 9 12:13:48.511068 sshd[5540]: Failed password for invalid user soheyl from 119.91.207.218 port 34280 ssh2 Feb 9 12:13:48.700272 systemd[1]: Started sshd@91-139.178.89.23:22-147.75.109.163:59592.service. Feb 9 12:13:48.736744 sshd[5542]: Accepted publickey for core from 147.75.109.163 port 59592 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:48.737441 sshd[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:48.740146 systemd-logind[1545]: New session 42 of user core. Feb 9 12:13:48.740573 systemd[1]: Started session-42.scope. Feb 9 12:13:48.828729 sshd[5542]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:48.829992 systemd[1]: sshd@91-139.178.89.23:22-147.75.109.163:59592.service: Deactivated successfully. Feb 9 12:13:48.830673 systemd[1]: session-42.scope: Deactivated successfully. Feb 9 12:13:48.830713 systemd-logind[1545]: Session 42 logged out. Waiting for processes to exit. Feb 9 12:13:48.831290 systemd-logind[1545]: Removed session 42. Feb 9 12:13:50.698857 sshd[5540]: Received disconnect from 119.91.207.218 port 34280:11: Bye Bye [preauth] Feb 9 12:13:50.698857 sshd[5540]: Disconnected from invalid user soheyl 119.91.207.218 port 34280 [preauth] Feb 9 12:13:50.701357 systemd[1]: sshd@90-139.178.89.23:22-119.91.207.218:34280.service: Deactivated successfully. Feb 9 12:13:53.835124 systemd[1]: Started sshd@92-139.178.89.23:22-147.75.109.163:59602.service. Feb 9 12:13:53.871733 sshd[5570]: Accepted publickey for core from 147.75.109.163 port 59602 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:53.872644 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:53.875939 systemd-logind[1545]: New session 43 of user core. Feb 9 12:13:53.876531 systemd[1]: Started session-43.scope. Feb 9 12:13:53.968755 sshd[5570]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:53.970441 systemd[1]: sshd@92-139.178.89.23:22-147.75.109.163:59602.service: Deactivated successfully. Feb 9 12:13:53.971190 systemd-logind[1545]: Session 43 logged out. Waiting for processes to exit. 
Feb 9 12:13:53.971218 systemd[1]: session-43.scope: Deactivated successfully. Feb 9 12:13:53.971967 systemd-logind[1545]: Removed session 43. Feb 9 12:13:58.975221 systemd[1]: Started sshd@93-139.178.89.23:22-147.75.109.163:59562.service. Feb 9 12:13:59.012020 sshd[5600]: Accepted publickey for core from 147.75.109.163 port 59562 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:13:59.012954 sshd[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:13:59.016018 systemd-logind[1545]: New session 44 of user core. Feb 9 12:13:59.016658 systemd[1]: Started session-44.scope. Feb 9 12:13:59.106414 sshd[5600]: pam_unix(sshd:session): session closed for user core Feb 9 12:13:59.107881 systemd[1]: sshd@93-139.178.89.23:22-147.75.109.163:59562.service: Deactivated successfully. Feb 9 12:13:59.108515 systemd[1]: session-44.scope: Deactivated successfully. Feb 9 12:13:59.108555 systemd-logind[1545]: Session 44 logged out. Waiting for processes to exit. Feb 9 12:13:59.109066 systemd-logind[1545]: Removed session 44. Feb 9 12:14:04.113523 systemd[1]: Started sshd@94-139.178.89.23:22-147.75.109.163:59576.service. Feb 9 12:14:04.150128 sshd[5626]: Accepted publickey for core from 147.75.109.163 port 59576 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:04.151063 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:04.154144 systemd-logind[1545]: New session 45 of user core. Feb 9 12:14:04.154796 systemd[1]: Started session-45.scope. Feb 9 12:14:04.243883 sshd[5626]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:04.245372 systemd[1]: sshd@94-139.178.89.23:22-147.75.109.163:59576.service: Deactivated successfully. Feb 9 12:14:04.246043 systemd-logind[1545]: Session 45 logged out. Waiting for processes to exit. Feb 9 12:14:04.246065 systemd[1]: session-45.scope: Deactivated successfully. Feb 9 12:14:04.246678 systemd-logind[1545]: Removed session 45. Feb 9 12:14:08.000042 systemd[1]: Started sshd@95-139.178.89.23:22-45.64.3.61:50912.service. Feb 9 12:14:08.148124 systemd[1]: Started sshd@96-139.178.89.23:22-209.97.179.25:42756.service. Feb 9 12:14:08.978399 sshd[5654]: Invalid user xuyunxia from 209.97.179.25 port 42756 Feb 9 12:14:08.984598 sshd[5654]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:08.985592 sshd[5654]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:14:08.985682 sshd[5654]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:14:08.986615 sshd[5654]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:09.250236 systemd[1]: Started sshd@97-139.178.89.23:22-147.75.109.163:49790.service. Feb 9 12:14:09.286750 sshd[5656]: Accepted publickey for core from 147.75.109.163 port 49790 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:09.287690 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:09.291008 systemd-logind[1545]: New session 46 of user core. Feb 9 12:14:09.291652 systemd[1]: Started session-46.scope. Feb 9 12:14:09.381180 sshd[5656]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:09.387186 systemd[1]: sshd@97-139.178.89.23:22-147.75.109.163:49790.service: Deactivated successfully. Feb 9 12:14:09.390111 systemd-logind[1545]: Session 46 logged out. Waiting for processes to exit. Feb 9 12:14:09.390194 systemd[1]: session-46.scope: Deactivated successfully. 
Feb 9 12:14:09.392905 sshd[5652]: Invalid user florence from 45.64.3.61 port 50912 Feb 9 12:14:09.393309 systemd-logind[1545]: Removed session 46. Feb 9 12:14:09.399335 sshd[5652]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:09.400461 sshd[5652]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:14:09.400550 sshd[5652]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:14:09.401539 sshd[5652]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:11.119012 sshd[5654]: Failed password for invalid user xuyunxia from 209.97.179.25 port 42756 ssh2 Feb 9 12:14:11.668749 sshd[5652]: Failed password for invalid user florence from 45.64.3.61 port 50912 ssh2 Feb 9 12:14:13.322861 sshd[5654]: Received disconnect from 209.97.179.25 port 42756:11: Bye Bye [preauth] Feb 9 12:14:13.322861 sshd[5654]: Disconnected from invalid user xuyunxia 209.97.179.25 port 42756 [preauth] Feb 9 12:14:13.325382 systemd[1]: sshd@96-139.178.89.23:22-209.97.179.25:42756.service: Deactivated successfully. Feb 9 12:14:13.379602 sshd[5652]: Received disconnect from 45.64.3.61 port 50912:11: Bye Bye [preauth] Feb 9 12:14:13.379602 sshd[5652]: Disconnected from invalid user florence 45.64.3.61 port 50912 [preauth] Feb 9 12:14:13.380822 systemd[1]: sshd@95-139.178.89.23:22-45.64.3.61:50912.service: Deactivated successfully. Feb 9 12:14:14.387509 systemd[1]: Started sshd@98-139.178.89.23:22-147.75.109.163:49794.service. Feb 9 12:14:14.424619 sshd[5689]: Accepted publickey for core from 147.75.109.163 port 49794 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:14.427818 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:14.438394 systemd-logind[1545]: New session 47 of user core. Feb 9 12:14:14.440892 systemd[1]: Started session-47.scope. Feb 9 12:14:14.544205 sshd[5689]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:14.545726 systemd[1]: sshd@98-139.178.89.23:22-147.75.109.163:49794.service: Deactivated successfully. Feb 9 12:14:14.546442 systemd[1]: session-47.scope: Deactivated successfully. Feb 9 12:14:14.546482 systemd-logind[1545]: Session 47 logged out. Waiting for processes to exit. Feb 9 12:14:14.547043 systemd-logind[1545]: Removed session 47. Feb 9 12:14:19.550869 systemd[1]: Started sshd@99-139.178.89.23:22-147.75.109.163:56266.service. Feb 9 12:14:19.587765 sshd[5715]: Accepted publickey for core from 147.75.109.163 port 56266 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:19.588707 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:19.591855 systemd-logind[1545]: New session 48 of user core. Feb 9 12:14:19.592472 systemd[1]: Started session-48.scope. Feb 9 12:14:19.680531 sshd[5715]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:19.682027 systemd[1]: Started sshd@100-139.178.89.23:22-147.75.109.163:56282.service. Feb 9 12:14:19.682340 systemd[1]: sshd@99-139.178.89.23:22-147.75.109.163:56266.service: Deactivated successfully. Feb 9 12:14:19.682839 systemd-logind[1545]: Session 48 logged out. Waiting for processes to exit. Feb 9 12:14:19.682880 systemd[1]: session-48.scope: Deactivated successfully. Feb 9 12:14:19.683260 systemd-logind[1545]: Removed session 48. 
Feb 9 12:14:19.718796 sshd[5738]: Accepted publickey for core from 147.75.109.163 port 56282 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:19.719675 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:19.722945 systemd-logind[1545]: New session 49 of user core. Feb 9 12:14:19.723589 systemd[1]: Started session-49.scope. Feb 9 12:14:20.850382 sshd[5738]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:20.851830 systemd[1]: Started sshd@101-139.178.89.23:22-147.75.109.163:56286.service. Feb 9 12:14:20.852109 systemd[1]: sshd@100-139.178.89.23:22-147.75.109.163:56282.service: Deactivated successfully. Feb 9 12:14:20.852669 systemd-logind[1545]: Session 49 logged out. Waiting for processes to exit. Feb 9 12:14:20.852711 systemd[1]: session-49.scope: Deactivated successfully. Feb 9 12:14:20.853112 systemd-logind[1545]: Removed session 49. Feb 9 12:14:20.888398 sshd[5762]: Accepted publickey for core from 147.75.109.163 port 56286 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:20.889257 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:20.892425 systemd-logind[1545]: New session 50 of user core. Feb 9 12:14:20.893097 systemd[1]: Started session-50.scope. Feb 9 12:14:21.692580 sshd[5762]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:21.694763 systemd[1]: Started sshd@102-139.178.89.23:22-147.75.109.163:56302.service. Feb 9 12:14:21.695296 systemd[1]: sshd@101-139.178.89.23:22-147.75.109.163:56286.service: Deactivated successfully. Feb 9 12:14:21.696151 systemd-logind[1545]: Session 50 logged out. Waiting for processes to exit. Feb 9 12:14:21.696160 systemd[1]: session-50.scope: Deactivated successfully. Feb 9 12:14:21.696878 systemd-logind[1545]: Removed session 50. Feb 9 12:14:21.737337 sshd[5808]: Accepted publickey for core from 147.75.109.163 port 56302 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:21.738395 sshd[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:21.741590 systemd-logind[1545]: New session 51 of user core. Feb 9 12:14:21.742335 systemd[1]: Started session-51.scope. Feb 9 12:14:21.967060 sshd[5808]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:21.973462 systemd[1]: Started sshd@103-139.178.89.23:22-147.75.109.163:56306.service. Feb 9 12:14:21.974883 systemd[1]: sshd@102-139.178.89.23:22-147.75.109.163:56302.service: Deactivated successfully. Feb 9 12:14:21.977881 systemd-logind[1545]: Session 51 logged out. Waiting for processes to exit. Feb 9 12:14:21.977998 systemd[1]: session-51.scope: Deactivated successfully. Feb 9 12:14:21.980292 systemd-logind[1545]: Removed session 51. Feb 9 12:14:22.052085 sshd[5867]: Accepted publickey for core from 147.75.109.163 port 56306 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:22.053542 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:22.058138 systemd-logind[1545]: New session 52 of user core. Feb 9 12:14:22.059170 systemd[1]: Started session-52.scope. Feb 9 12:14:22.193446 sshd[5867]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:22.195232 systemd[1]: sshd@103-139.178.89.23:22-147.75.109.163:56306.service: Deactivated successfully. Feb 9 12:14:22.196045 systemd[1]: session-52.scope: Deactivated successfully. Feb 9 12:14:22.196056 systemd-logind[1545]: Session 52 logged out. 
Waiting for processes to exit. Feb 9 12:14:22.196922 systemd-logind[1545]: Removed session 52. Feb 9 12:14:22.687383 sshd[5083]: Timeout before authentication for 180.107.140.47 port 49496 Feb 9 12:14:22.688813 systemd[1]: sshd@66-139.178.89.23:22-180.107.140.47:49496.service: Deactivated successfully. Feb 9 12:14:24.959840 systemd[1]: Started sshd@104-139.178.89.23:22-198.12.118.109:6353.service. Feb 9 12:14:25.142617 sshd[5899]: Invalid user santhosh from 198.12.118.109 port 6353 Feb 9 12:14:25.148658 sshd[5899]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:25.149634 sshd[5899]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:14:25.149725 sshd[5899]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:14:25.150631 sshd[5899]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:26.946448 sshd[5899]: Failed password for invalid user santhosh from 198.12.118.109 port 6353 ssh2 Feb 9 12:14:26.985811 sshd[5899]: Received disconnect from 198.12.118.109 port 6353:11: Bye Bye [preauth] Feb 9 12:14:26.985811 sshd[5899]: Disconnected from invalid user santhosh 198.12.118.109 port 6353 [preauth] Feb 9 12:14:26.988390 systemd[1]: sshd@104-139.178.89.23:22-198.12.118.109:6353.service: Deactivated successfully. Feb 9 12:14:27.200814 systemd[1]: Started sshd@105-139.178.89.23:22-147.75.109.163:56184.service. Feb 9 12:14:27.237391 sshd[5903]: Accepted publickey for core from 147.75.109.163 port 56184 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:27.238043 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:27.240649 systemd-logind[1545]: New session 53 of user core. Feb 9 12:14:27.241028 systemd[1]: Started session-53.scope. Feb 9 12:14:27.326029 sshd[5903]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:27.327456 systemd[1]: sshd@105-139.178.89.23:22-147.75.109.163:56184.service: Deactivated successfully. Feb 9 12:14:27.328018 systemd-logind[1545]: Session 53 logged out. Waiting for processes to exit. Feb 9 12:14:27.328066 systemd[1]: session-53.scope: Deactivated successfully. Feb 9 12:14:27.328576 systemd-logind[1545]: Removed session 53. Feb 9 12:14:32.332318 systemd[1]: Started sshd@106-139.178.89.23:22-147.75.109.163:56192.service. Feb 9 12:14:32.369024 sshd[5928]: Accepted publickey for core from 147.75.109.163 port 56192 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:32.369932 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:32.373075 systemd-logind[1545]: New session 54 of user core. Feb 9 12:14:32.373682 systemd[1]: Started session-54.scope. Feb 9 12:14:32.457582 sshd[5928]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:32.458962 systemd[1]: sshd@106-139.178.89.23:22-147.75.109.163:56192.service: Deactivated successfully. Feb 9 12:14:32.459631 systemd[1]: session-54.scope: Deactivated successfully. Feb 9 12:14:32.459672 systemd-logind[1545]: Session 54 logged out. Waiting for processes to exit. Feb 9 12:14:32.460169 systemd-logind[1545]: Removed session 54. Feb 9 12:14:34.316584 systemd[1]: Started sshd@107-139.178.89.23:22-39.109.116.167:50398.service. 
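The service names systemd starts for each incoming connection, such as sshd@66-139.178.89.23:22-180.107.140.47:49496.service (the one torn down above after "Timeout before authentication"), appear to encode a connection counter plus the local and remote endpoints. Below is a small sketch of recovering those endpoints from the "Started sshd@..." entries; the naming pattern is inferred from this excerpt itself, so treat the regex as an assumption rather than a documented format.

```python
# Illustrative sketch: decode per-connection sshd@... unit names seen above, e.g.
# "sshd@66-139.178.89.23:22-180.107.140.47:49496.service", into a connection
# counter, local endpoint and remote endpoint. Pattern inferred from the excerpt.
import re
import sys

UNIT_RE = re.compile(r"Started sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service")

def connections(lines):
    for line in lines:
        for counter, laddr, lport, raddr, rport in UNIT_RE.findall(line):
            yield int(counter), f"{laddr}:{lport}", f"{raddr}:{rport}"

if __name__ == "__main__":
    for counter, local, remote in connections(sys.stdin):
        print(f"connection #{counter}: {remote} -> {local}")
```

The two minutes between that unit being started (12:12:22) and the timeout entry (12:14:22) would be consistent with OpenSSH's default LoginGraceTime of 120 seconds, although the excerpt does not show the sshd configuration in use.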
Feb 9 12:14:35.230520 sshd[5954]: Invalid user xcwang from 39.109.116.167 port 50398 Feb 9 12:14:35.236673 sshd[5954]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:35.237466 sshd[5954]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:14:35.237484 sshd[5954]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:14:35.237710 sshd[5954]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:37.073383 sshd[5954]: Failed password for invalid user xcwang from 39.109.116.167 port 50398 ssh2 Feb 9 12:14:37.464873 systemd[1]: Started sshd@108-139.178.89.23:22-147.75.109.163:49378.service. Feb 9 12:14:37.501405 sshd[5959]: Accepted publickey for core from 147.75.109.163 port 49378 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:37.502079 sshd[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:37.504639 systemd-logind[1545]: New session 55 of user core. Feb 9 12:14:37.505208 systemd[1]: Started session-55.scope. Feb 9 12:14:37.588990 sshd[5959]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:37.590522 systemd[1]: sshd@108-139.178.89.23:22-147.75.109.163:49378.service: Deactivated successfully. Feb 9 12:14:37.591113 systemd-logind[1545]: Session 55 logged out. Waiting for processes to exit. Feb 9 12:14:37.591123 systemd[1]: session-55.scope: Deactivated successfully. Feb 9 12:14:37.591669 systemd-logind[1545]: Removed session 55. Feb 9 12:14:39.029376 sshd[5954]: Received disconnect from 39.109.116.167 port 50398:11: Bye Bye [preauth] Feb 9 12:14:39.029376 sshd[5954]: Disconnected from invalid user xcwang 39.109.116.167 port 50398 [preauth] Feb 9 12:14:39.031840 systemd[1]: sshd@107-139.178.89.23:22-39.109.116.167:50398.service: Deactivated successfully. Feb 9 12:14:39.124156 systemd[1]: Started sshd@109-139.178.89.23:22-180.107.140.47:44380.service. Feb 9 12:14:41.083472 sshd[5987]: Invalid user mmessmer from 180.107.140.47 port 44380 Feb 9 12:14:41.089672 sshd[5987]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:41.090835 sshd[5987]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:14:41.090925 sshd[5987]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.107.140.47 Feb 9 12:14:41.091947 sshd[5987]: pam_faillock(sshd:auth): User unknown Feb 9 12:14:42.596411 systemd[1]: Started sshd@110-139.178.89.23:22-147.75.109.163:49394.service. Feb 9 12:14:42.633019 sshd[5989]: Accepted publickey for core from 147.75.109.163 port 49394 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:42.633929 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:42.637100 systemd-logind[1545]: New session 56 of user core. Feb 9 12:14:42.638184 systemd[1]: Started session-56.scope. Feb 9 12:14:42.727396 sshd[5989]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:42.728854 systemd[1]: sshd@110-139.178.89.23:22-147.75.109.163:49394.service: Deactivated successfully. Feb 9 12:14:42.729485 systemd[1]: session-56.scope: Deactivated successfully. Feb 9 12:14:42.729530 systemd-logind[1545]: Session 56 logged out. Waiting for processes to exit. Feb 9 12:14:42.730097 systemd-logind[1545]: Removed session 56. 
Feb 9 12:14:43.420106 sshd[5987]: Failed password for invalid user mmessmer from 180.107.140.47 port 44380 ssh2 Feb 9 12:14:43.897438 sshd[5987]: Received disconnect from 180.107.140.47 port 44380:11: Bye Bye [preauth] Feb 9 12:14:43.897438 sshd[5987]: Disconnected from invalid user mmessmer 180.107.140.47 port 44380 [preauth] Feb 9 12:14:43.899929 systemd[1]: sshd@109-139.178.89.23:22-180.107.140.47:44380.service: Deactivated successfully. Feb 9 12:14:47.734241 systemd[1]: Started sshd@111-139.178.89.23:22-147.75.109.163:48450.service. Feb 9 12:14:47.770423 sshd[6018]: Accepted publickey for core from 147.75.109.163 port 48450 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:47.771174 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:47.773861 systemd-logind[1545]: New session 57 of user core. Feb 9 12:14:47.774376 systemd[1]: Started session-57.scope. Feb 9 12:14:47.858890 sshd[6018]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:47.860523 systemd[1]: sshd@111-139.178.89.23:22-147.75.109.163:48450.service: Deactivated successfully. Feb 9 12:14:47.861190 systemd-logind[1545]: Session 57 logged out. Waiting for processes to exit. Feb 9 12:14:47.861213 systemd[1]: session-57.scope: Deactivated successfully. Feb 9 12:14:47.861920 systemd-logind[1545]: Removed session 57. Feb 9 12:14:52.865650 systemd[1]: Started sshd@112-139.178.89.23:22-147.75.109.163:48462.service. Feb 9 12:14:52.902409 sshd[6043]: Accepted publickey for core from 147.75.109.163 port 48462 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:52.903351 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:52.906739 systemd-logind[1545]: New session 58 of user core. Feb 9 12:14:52.907349 systemd[1]: Started session-58.scope. Feb 9 12:14:52.999391 sshd[6043]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:53.001266 systemd[1]: sshd@112-139.178.89.23:22-147.75.109.163:48462.service: Deactivated successfully. Feb 9 12:14:53.002115 systemd-logind[1545]: Session 58 logged out. Waiting for processes to exit. Feb 9 12:14:53.002189 systemd[1]: session-58.scope: Deactivated successfully. Feb 9 12:14:53.002860 systemd-logind[1545]: Removed session 58. Feb 9 12:14:58.005999 systemd[1]: Started sshd@113-139.178.89.23:22-147.75.109.163:47334.service. Feb 9 12:14:58.042514 sshd[6071]: Accepted publickey for core from 147.75.109.163 port 47334 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:14:58.043394 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:14:58.046473 systemd-logind[1545]: New session 59 of user core. Feb 9 12:14:58.047506 systemd[1]: Started session-59.scope. Feb 9 12:14:58.135073 sshd[6071]: pam_unix(sshd:session): session closed for user core Feb 9 12:14:58.136355 systemd[1]: sshd@113-139.178.89.23:22-147.75.109.163:47334.service: Deactivated successfully. Feb 9 12:14:58.136999 systemd[1]: session-59.scope: Deactivated successfully. Feb 9 12:14:58.137027 systemd-logind[1545]: Session 59 logged out. Waiting for processes to exit. Feb 9 12:14:58.137640 systemd-logind[1545]: Removed session 59. Feb 9 12:15:01.661519 systemd[1]: Started sshd@114-139.178.89.23:22-209.97.179.25:33400.service. 
Feb 9 12:15:02.502827 sshd[6097]: Invalid user anulkhajadoor from 209.97.179.25 port 33400 Feb 9 12:15:02.508789 sshd[6097]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:02.509962 sshd[6097]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:15:02.510052 sshd[6097]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:15:02.510973 sshd[6097]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:03.141836 systemd[1]: Started sshd@115-139.178.89.23:22-147.75.109.163:47344.service. Feb 9 12:15:03.178315 sshd[6099]: Accepted publickey for core from 147.75.109.163 port 47344 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:03.179225 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:03.182423 systemd-logind[1545]: New session 60 of user core. Feb 9 12:15:03.183240 systemd[1]: Started session-60.scope. Feb 9 12:15:03.271627 sshd[6099]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:03.273139 systemd[1]: sshd@115-139.178.89.23:22-147.75.109.163:47344.service: Deactivated successfully. Feb 9 12:15:03.273887 systemd[1]: session-60.scope: Deactivated successfully. Feb 9 12:15:03.273925 systemd-logind[1545]: Session 60 logged out. Waiting for processes to exit. Feb 9 12:15:03.274584 systemd-logind[1545]: Removed session 60. Feb 9 12:15:04.723430 sshd[6097]: Failed password for invalid user anulkhajadoor from 209.97.179.25 port 33400 ssh2 Feb 9 12:15:06.035831 sshd[6097]: Received disconnect from 209.97.179.25 port 33400:11: Bye Bye [preauth] Feb 9 12:15:06.035831 sshd[6097]: Disconnected from invalid user anulkhajadoor 209.97.179.25 port 33400 [preauth] Feb 9 12:15:06.038342 systemd[1]: sshd@114-139.178.89.23:22-209.97.179.25:33400.service: Deactivated successfully. Feb 9 12:15:08.277997 systemd[1]: Started sshd@116-139.178.89.23:22-147.75.109.163:35666.service. Feb 9 12:15:08.314776 sshd[6127]: Accepted publickey for core from 147.75.109.163 port 35666 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:08.315454 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:08.317914 systemd-logind[1545]: New session 61 of user core. Feb 9 12:15:08.318550 systemd[1]: Started session-61.scope. Feb 9 12:15:08.441434 sshd[6127]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:08.442834 systemd[1]: sshd@116-139.178.89.23:22-147.75.109.163:35666.service: Deactivated successfully. Feb 9 12:15:08.443420 systemd-logind[1545]: Session 61 logged out. Waiting for processes to exit. Feb 9 12:15:08.443475 systemd[1]: session-61.scope: Deactivated successfully. Feb 9 12:15:08.443911 systemd-logind[1545]: Removed session 61. Feb 9 12:15:13.448488 systemd[1]: Started sshd@117-139.178.89.23:22-147.75.109.163:35670.service. Feb 9 12:15:13.485608 sshd[6155]: Accepted publickey for core from 147.75.109.163 port 35670 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:13.486302 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:13.488647 systemd-logind[1545]: New session 62 of user core. Feb 9 12:15:13.489116 systemd[1]: Started session-62.scope. Feb 9 12:15:13.576361 sshd[6155]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:13.577885 systemd[1]: sshd@117-139.178.89.23:22-147.75.109.163:35670.service: Deactivated successfully. Feb 9 12:15:13.578551 systemd[1]: session-62.scope: Deactivated successfully. 
Feb 9 12:15:13.578591 systemd-logind[1545]: Session 62 logged out. Waiting for processes to exit. Feb 9 12:15:13.579141 systemd-logind[1545]: Removed session 62. Feb 9 12:15:13.695215 systemd[1]: Started sshd@118-139.178.89.23:22-45.64.3.61:41464.service. Feb 9 12:15:15.142708 sshd[6180]: Invalid user eritr from 45.64.3.61 port 41464 Feb 9 12:15:15.148833 sshd[6180]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:15.149873 sshd[6180]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:15:15.149963 sshd[6180]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:15:15.150945 sshd[6180]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:15.175935 systemd[1]: Started sshd@119-139.178.89.23:22-198.12.118.109:24885.service. Feb 9 12:15:15.334184 sshd[6182]: Invalid user todoroki from 198.12.118.109 port 24885 Feb 9 12:15:15.340524 sshd[6182]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:15.341556 sshd[6182]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:15:15.341645 sshd[6182]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:15:15.342584 sshd[6182]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:16.827625 systemd[1]: Started sshd@120-139.178.89.23:22-119.91.207.218:49508.service. Feb 9 12:15:17.147803 sshd[6180]: Failed password for invalid user eritr from 45.64.3.61 port 41464 ssh2 Feb 9 12:15:17.339645 sshd[6182]: Failed password for invalid user todoroki from 198.12.118.109 port 24885 ssh2 Feb 9 12:15:17.459674 sshd[6180]: Received disconnect from 45.64.3.61 port 41464:11: Bye Bye [preauth] Feb 9 12:15:17.459674 sshd[6180]: Disconnected from invalid user eritr 45.64.3.61 port 41464 [preauth] Feb 9 12:15:17.462257 systemd[1]: sshd@118-139.178.89.23:22-45.64.3.61:41464.service: Deactivated successfully. Feb 9 12:15:18.563900 sshd[6184]: Invalid user tamx from 119.91.207.218 port 49508 Feb 9 12:15:18.569979 sshd[6184]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:18.570997 sshd[6184]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:15:18.571087 sshd[6184]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.207.218 Feb 9 12:15:18.572022 sshd[6184]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:18.584491 systemd[1]: Started sshd@121-139.178.89.23:22-147.75.109.163:55564.service. Feb 9 12:15:18.624216 sshd[6188]: Accepted publickey for core from 147.75.109.163 port 55564 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:18.625008 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:18.628001 systemd-logind[1545]: New session 63 of user core. Feb 9 12:15:18.628681 systemd[1]: Started session-63.scope. Feb 9 12:15:18.715045 sshd[6188]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:18.716530 systemd[1]: sshd@121-139.178.89.23:22-147.75.109.163:55564.service: Deactivated successfully. Feb 9 12:15:18.717116 systemd-logind[1545]: Session 63 logged out. Waiting for processes to exit. Feb 9 12:15:18.717165 systemd[1]: session-63.scope: Deactivated successfully. Feb 9 12:15:18.717845 systemd-logind[1545]: Removed session 63. 
Feb 9 12:15:19.285496 sshd[6182]: Received disconnect from 198.12.118.109 port 24885:11: Bye Bye [preauth] Feb 9 12:15:19.285496 sshd[6182]: Disconnected from invalid user todoroki 198.12.118.109 port 24885 [preauth] Feb 9 12:15:19.287962 systemd[1]: sshd@119-139.178.89.23:22-198.12.118.109:24885.service: Deactivated successfully. Feb 9 12:15:20.313367 sshd[6184]: Failed password for invalid user tamx from 119.91.207.218 port 49508 ssh2 Feb 9 12:15:20.526788 sshd[6184]: Received disconnect from 119.91.207.218 port 49508:11: Bye Bye [preauth] Feb 9 12:15:20.526788 sshd[6184]: Disconnected from invalid user tamx 119.91.207.218 port 49508 [preauth] Feb 9 12:15:20.529265 systemd[1]: sshd@120-139.178.89.23:22-119.91.207.218:49508.service: Deactivated successfully. Feb 9 12:15:23.722502 systemd[1]: Started sshd@122-139.178.89.23:22-147.75.109.163:55576.service. Feb 9 12:15:23.759114 sshd[6219]: Accepted publickey for core from 147.75.109.163 port 55576 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:23.760028 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:23.763071 systemd-logind[1545]: New session 64 of user core. Feb 9 12:15:23.763938 systemd[1]: Started session-64.scope. Feb 9 12:15:23.849274 sshd[6219]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:23.850778 systemd[1]: sshd@122-139.178.89.23:22-147.75.109.163:55576.service: Deactivated successfully. Feb 9 12:15:23.851512 systemd[1]: session-64.scope: Deactivated successfully. Feb 9 12:15:23.851535 systemd-logind[1545]: Session 64 logged out. Waiting for processes to exit. Feb 9 12:15:23.852061 systemd-logind[1545]: Removed session 64. Feb 9 12:15:28.854769 systemd[1]: Started sshd@123-139.178.89.23:22-147.75.109.163:39416.service. Feb 9 12:15:28.891391 sshd[6247]: Accepted publickey for core from 147.75.109.163 port 39416 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:28.892311 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:28.895376 systemd-logind[1545]: New session 65 of user core. Feb 9 12:15:28.896212 systemd[1]: Started session-65.scope. Feb 9 12:15:28.983836 sshd[6247]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:28.985255 systemd[1]: sshd@123-139.178.89.23:22-147.75.109.163:39416.service: Deactivated successfully. Feb 9 12:15:28.985925 systemd[1]: session-65.scope: Deactivated successfully. Feb 9 12:15:28.985959 systemd-logind[1545]: Session 65 logged out. Waiting for processes to exit. Feb 9 12:15:28.986563 systemd-logind[1545]: Removed session 65. Feb 9 12:15:33.990340 systemd[1]: Started sshd@124-139.178.89.23:22-147.75.109.163:39430.service. Feb 9 12:15:34.026497 sshd[6273]: Accepted publickey for core from 147.75.109.163 port 39430 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:34.027230 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:34.029854 systemd-logind[1545]: New session 66 of user core. Feb 9 12:15:34.030239 systemd[1]: Started session-66.scope. Feb 9 12:15:34.114904 sshd[6273]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:34.116605 systemd[1]: sshd@124-139.178.89.23:22-147.75.109.163:39430.service: Deactivated successfully. Feb 9 12:15:34.117296 systemd-logind[1545]: Session 66 logged out. Waiting for processes to exit. Feb 9 12:15:34.117311 systemd[1]: session-66.scope: Deactivated successfully. 
Feb 9 12:15:34.117887 systemd-logind[1545]: Removed session 66. Feb 9 12:15:39.121045 systemd[1]: Started sshd@125-139.178.89.23:22-147.75.109.163:39830.service. Feb 9 12:15:39.157657 sshd[6298]: Accepted publickey for core from 147.75.109.163 port 39830 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:39.158680 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:39.162152 systemd-logind[1545]: New session 67 of user core. Feb 9 12:15:39.163022 systemd[1]: Started session-67.scope. Feb 9 12:15:39.251120 sshd[6298]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:39.252951 systemd[1]: sshd@125-139.178.89.23:22-147.75.109.163:39830.service: Deactivated successfully. Feb 9 12:15:39.253777 systemd[1]: session-67.scope: Deactivated successfully. Feb 9 12:15:39.253788 systemd-logind[1545]: Session 67 logged out. Waiting for processes to exit. Feb 9 12:15:39.254455 systemd-logind[1545]: Removed session 67. Feb 9 12:15:39.773004 systemd[1]: Started sshd@126-139.178.89.23:22-180.107.140.47:55936.service. Feb 9 12:15:40.657444 sshd[6324]: Invalid user samiul from 180.107.140.47 port 55936 Feb 9 12:15:40.663391 sshd[6324]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:40.664340 sshd[6324]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:15:40.664427 sshd[6324]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.107.140.47 Feb 9 12:15:40.665332 sshd[6324]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:40.809179 systemd[1]: Started sshd@127-139.178.89.23:22-39.109.116.167:59807.service. Feb 9 12:15:41.760458 sshd[6326]: Invalid user jihoon from 39.109.116.167 port 59807 Feb 9 12:15:41.766626 sshd[6326]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:41.767634 sshd[6326]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:15:41.767724 sshd[6326]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:15:41.768643 sshd[6326]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:43.093765 sshd[6324]: Failed password for invalid user samiul from 180.107.140.47 port 55936 ssh2 Feb 9 12:15:44.258155 systemd[1]: Started sshd@128-139.178.89.23:22-147.75.109.163:39838.service. Feb 9 12:15:44.294304 sshd[6328]: Accepted publickey for core from 147.75.109.163 port 39838 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:44.295212 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:44.298439 systemd-logind[1545]: New session 68 of user core. Feb 9 12:15:44.299093 systemd[1]: Started session-68.scope. Feb 9 12:15:44.332487 sshd[6326]: Failed password for invalid user jihoon from 39.109.116.167 port 59807 ssh2 Feb 9 12:15:44.384205 sshd[6328]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:44.385859 systemd[1]: sshd@128-139.178.89.23:22-147.75.109.163:39838.service: Deactivated successfully. Feb 9 12:15:44.386623 systemd[1]: session-68.scope: Deactivated successfully. Feb 9 12:15:44.386637 systemd-logind[1545]: Session 68 logged out. Waiting for processes to exit. Feb 9 12:15:44.387178 systemd-logind[1545]: Removed session 68. 
Feb 9 12:15:45.144851 sshd[6324]: Received disconnect from 180.107.140.47 port 55936:11: Bye Bye [preauth] Feb 9 12:15:45.144851 sshd[6324]: Disconnected from invalid user samiul 180.107.140.47 port 55936 [preauth] Feb 9 12:15:45.147323 systemd[1]: sshd@126-139.178.89.23:22-180.107.140.47:55936.service: Deactivated successfully. Feb 9 12:15:46.226755 sshd[6326]: Received disconnect from 39.109.116.167 port 59807:11: Bye Bye [preauth] Feb 9 12:15:46.226755 sshd[6326]: Disconnected from invalid user jihoon 39.109.116.167 port 59807 [preauth] Feb 9 12:15:46.229274 systemd[1]: sshd@127-139.178.89.23:22-39.109.116.167:59807.service: Deactivated successfully. Feb 9 12:15:49.390460 systemd[1]: Started sshd@129-139.178.89.23:22-147.75.109.163:42636.service. Feb 9 12:15:49.426771 sshd[6358]: Accepted publickey for core from 147.75.109.163 port 42636 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:49.427429 sshd[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:49.429825 systemd-logind[1545]: New session 69 of user core. Feb 9 12:15:49.430261 systemd[1]: Started session-69.scope. Feb 9 12:15:49.516081 sshd[6358]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:49.517600 systemd[1]: sshd@129-139.178.89.23:22-147.75.109.163:42636.service: Deactivated successfully. Feb 9 12:15:49.518161 systemd-logind[1545]: Session 69 logged out. Waiting for processes to exit. Feb 9 12:15:49.518195 systemd[1]: session-69.scope: Deactivated successfully. Feb 9 12:15:49.518811 systemd-logind[1545]: Removed session 69. Feb 9 12:15:54.523739 systemd[1]: Started sshd@130-139.178.89.23:22-147.75.109.163:42038.service. Feb 9 12:15:54.616413 sshd[6386]: Accepted publickey for core from 147.75.109.163 port 42038 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:54.619829 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:54.630862 systemd-logind[1545]: New session 70 of user core. Feb 9 12:15:54.633272 systemd[1]: Started session-70.scope. Feb 9 12:15:54.727111 sshd[6386]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:54.728885 systemd[1]: sshd@130-139.178.89.23:22-147.75.109.163:42038.service: Deactivated successfully. Feb 9 12:15:54.729663 systemd[1]: session-70.scope: Deactivated successfully. Feb 9 12:15:54.729711 systemd-logind[1545]: Session 70 logged out. Waiting for processes to exit. Feb 9 12:15:54.730184 systemd-logind[1545]: Removed session 70. Feb 9 12:15:56.479838 systemd[1]: Started sshd@131-139.178.89.23:22-209.97.179.25:52282.service. Feb 9 12:15:57.278559 sshd[6411]: Invalid user jbreb from 209.97.179.25 port 52282 Feb 9 12:15:57.284723 sshd[6411]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:57.285729 sshd[6411]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:15:57.285818 sshd[6411]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:15:57.286869 sshd[6411]: pam_faillock(sshd:auth): User unknown Feb 9 12:15:59.715379 sshd[6411]: Failed password for invalid user jbreb from 209.97.179.25 port 52282 ssh2 Feb 9 12:15:59.733867 systemd[1]: Started sshd@132-139.178.89.23:22-147.75.109.163:42054.service. 
Feb 9 12:15:59.770386 sshd[6413]: Accepted publickey for core from 147.75.109.163 port 42054 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:15:59.771351 sshd[6413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:15:59.774687 systemd-logind[1545]: New session 71 of user core. Feb 9 12:15:59.775415 systemd[1]: Started session-71.scope. Feb 9 12:15:59.861707 sshd[6413]: pam_unix(sshd:session): session closed for user core Feb 9 12:15:59.863233 systemd[1]: sshd@132-139.178.89.23:22-147.75.109.163:42054.service: Deactivated successfully. Feb 9 12:15:59.863920 systemd-logind[1545]: Session 71 logged out. Waiting for processes to exit. Feb 9 12:15:59.863922 systemd[1]: session-71.scope: Deactivated successfully. Feb 9 12:15:59.864413 systemd-logind[1545]: Removed session 71. Feb 9 12:16:00.755846 sshd[6411]: Received disconnect from 209.97.179.25 port 52282:11: Bye Bye [preauth] Feb 9 12:16:00.755846 sshd[6411]: Disconnected from invalid user jbreb 209.97.179.25 port 52282 [preauth] Feb 9 12:16:00.758433 systemd[1]: sshd@131-139.178.89.23:22-209.97.179.25:52282.service: Deactivated successfully. Feb 9 12:16:04.867999 systemd[1]: Started sshd@133-139.178.89.23:22-147.75.109.163:34652.service. Feb 9 12:16:04.904963 sshd[6442]: Accepted publickey for core from 147.75.109.163 port 34652 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:04.905829 sshd[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:04.908947 systemd-logind[1545]: New session 72 of user core. Feb 9 12:16:04.909568 systemd[1]: Started session-72.scope. Feb 9 12:16:04.994359 sshd[6442]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:04.995782 systemd[1]: sshd@133-139.178.89.23:22-147.75.109.163:34652.service: Deactivated successfully. Feb 9 12:16:04.996429 systemd[1]: session-72.scope: Deactivated successfully. Feb 9 12:16:04.996474 systemd-logind[1545]: Session 72 logged out. Waiting for processes to exit. Feb 9 12:16:04.996993 systemd-logind[1545]: Removed session 72. Feb 9 12:16:06.659926 systemd[1]: Started sshd@134-139.178.89.23:22-198.12.118.109:43419.service. Feb 9 12:16:06.828870 sshd[6468]: Invalid user bahmanghiasi from 198.12.118.109 port 43419 Feb 9 12:16:06.835137 sshd[6468]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:06.836111 sshd[6468]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:16:06.836227 sshd[6468]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:16:06.837148 sshd[6468]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:08.167100 sshd[6468]: Failed password for invalid user bahmanghiasi from 198.12.118.109 port 43419 ssh2 Feb 9 12:16:08.476349 sshd[6468]: Received disconnect from 198.12.118.109 port 43419:11: Bye Bye [preauth] Feb 9 12:16:08.476349 sshd[6468]: Disconnected from invalid user bahmanghiasi 198.12.118.109 port 43419 [preauth] Feb 9 12:16:08.478889 systemd[1]: sshd@134-139.178.89.23:22-198.12.118.109:43419.service: Deactivated successfully. Feb 9 12:16:10.001715 systemd[1]: Started sshd@135-139.178.89.23:22-147.75.109.163:34664.service. Feb 9 12:16:10.047442 sshd[6472]: Accepted publickey for core from 147.75.109.163 port 34664 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:10.048111 sshd[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:10.050721 systemd-logind[1545]: New session 73 of user core. 
Feb 9 12:16:10.051294 systemd[1]: Started session-73.scope. Feb 9 12:16:10.138084 sshd[6472]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:10.139312 systemd[1]: sshd@135-139.178.89.23:22-147.75.109.163:34664.service: Deactivated successfully. Feb 9 12:16:10.139866 systemd-logind[1545]: Session 73 logged out. Waiting for processes to exit. Feb 9 12:16:10.139874 systemd[1]: session-73.scope: Deactivated successfully. Feb 9 12:16:10.140286 systemd-logind[1545]: Removed session 73. Feb 9 12:16:15.144185 systemd[1]: Started sshd@136-139.178.89.23:22-147.75.109.163:45566.service. Feb 9 12:16:15.180921 sshd[6500]: Accepted publickey for core from 147.75.109.163 port 45566 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:15.181864 sshd[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:15.185015 systemd-logind[1545]: New session 74 of user core. Feb 9 12:16:15.185600 systemd[1]: Started session-74.scope. Feb 9 12:16:15.274604 sshd[6500]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:15.275987 systemd[1]: sshd@136-139.178.89.23:22-147.75.109.163:45566.service: Deactivated successfully. Feb 9 12:16:15.276558 systemd-logind[1545]: Session 74 logged out. Waiting for processes to exit. Feb 9 12:16:15.276603 systemd[1]: session-74.scope: Deactivated successfully. Feb 9 12:16:15.277066 systemd-logind[1545]: Removed session 74. Feb 9 12:16:20.281167 systemd[1]: Started sshd@137-139.178.89.23:22-147.75.109.163:45572.service. Feb 9 12:16:20.317625 sshd[6526]: Accepted publickey for core from 147.75.109.163 port 45572 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:20.318555 sshd[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:20.322010 systemd-logind[1545]: New session 75 of user core. Feb 9 12:16:20.322643 systemd[1]: Started session-75.scope. Feb 9 12:16:20.409358 sshd[6526]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:20.410832 systemd[1]: sshd@137-139.178.89.23:22-147.75.109.163:45572.service: Deactivated successfully. Feb 9 12:16:20.411502 systemd[1]: session-75.scope: Deactivated successfully. Feb 9 12:16:20.411543 systemd-logind[1545]: Session 75 logged out. Waiting for processes to exit. Feb 9 12:16:20.412024 systemd-logind[1545]: Removed session 75. Feb 9 12:16:20.435381 systemd[1]: Started sshd@138-139.178.89.23:22-45.64.3.61:60228.service. Feb 9 12:16:21.497541 sshd[6550]: Invalid user amazon from 45.64.3.61 port 60228 Feb 9 12:16:21.504457 sshd[6550]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:21.505478 sshd[6550]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:16:21.505569 sshd[6550]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:16:21.506540 sshd[6550]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:23.563602 sshd[6550]: Failed password for invalid user amazon from 45.64.3.61 port 60228 ssh2 Feb 9 12:16:25.053658 sshd[6550]: Received disconnect from 45.64.3.61 port 60228:11: Bye Bye [preauth] Feb 9 12:16:25.053658 sshd[6550]: Disconnected from invalid user amazon 45.64.3.61 port 60228 [preauth] Feb 9 12:16:25.056106 systemd[1]: sshd@138-139.178.89.23:22-45.64.3.61:60228.service: Deactivated successfully. Feb 9 12:16:25.415976 systemd[1]: Started sshd@139-139.178.89.23:22-147.75.109.163:58466.service. 
Feb 9 12:16:25.452510 sshd[6557]: Accepted publickey for core from 147.75.109.163 port 58466 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:25.453424 sshd[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:25.456799 systemd-logind[1545]: New session 76 of user core. Feb 9 12:16:25.457410 systemd[1]: Started session-76.scope. Feb 9 12:16:25.544368 sshd[6557]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:25.545997 systemd[1]: sshd@139-139.178.89.23:22-147.75.109.163:58466.service: Deactivated successfully. Feb 9 12:16:25.546729 systemd[1]: session-76.scope: Deactivated successfully. Feb 9 12:16:25.546770 systemd-logind[1545]: Session 76 logged out. Waiting for processes to exit. Feb 9 12:16:25.547324 systemd-logind[1545]: Removed session 76. Feb 9 12:16:30.550910 systemd[1]: Started sshd@140-139.178.89.23:22-147.75.109.163:58474.service. Feb 9 12:16:30.613726 sshd[6582]: Accepted publickey for core from 147.75.109.163 port 58474 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:30.616913 sshd[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:30.627905 systemd-logind[1545]: New session 77 of user core. Feb 9 12:16:30.630825 systemd[1]: Started session-77.scope. Feb 9 12:16:30.721984 sshd[6582]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:30.723426 systemd[1]: sshd@140-139.178.89.23:22-147.75.109.163:58474.service: Deactivated successfully. Feb 9 12:16:30.724068 systemd[1]: session-77.scope: Deactivated successfully. Feb 9 12:16:30.724107 systemd-logind[1545]: Session 77 logged out. Waiting for processes to exit. Feb 9 12:16:30.724762 systemd-logind[1545]: Removed session 77. Feb 9 12:16:35.729460 systemd[1]: Started sshd@141-139.178.89.23:22-147.75.109.163:58322.service. Feb 9 12:16:35.775394 sshd[6608]: Accepted publickey for core from 147.75.109.163 port 58322 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:35.778657 sshd[6608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:35.789112 systemd-logind[1545]: New session 78 of user core. Feb 9 12:16:35.791971 systemd[1]: Started session-78.scope. Feb 9 12:16:35.882319 sshd[6608]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:35.883909 systemd[1]: sshd@141-139.178.89.23:22-147.75.109.163:58322.service: Deactivated successfully. Feb 9 12:16:35.884636 systemd[1]: session-78.scope: Deactivated successfully. Feb 9 12:16:35.884676 systemd-logind[1545]: Session 78 logged out. Waiting for processes to exit. Feb 9 12:16:35.885212 systemd-logind[1545]: Removed session 78. Feb 9 12:16:40.613922 systemd[1]: Started sshd@142-139.178.89.23:22-180.107.140.47:39270.service. Feb 9 12:16:40.890428 systemd[1]: Started sshd@143-139.178.89.23:22-147.75.109.163:58338.service. Feb 9 12:16:40.930325 sshd[6635]: Accepted publickey for core from 147.75.109.163 port 58338 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:40.931109 sshd[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:40.933790 systemd-logind[1545]: New session 79 of user core. Feb 9 12:16:40.934396 systemd[1]: Started session-79.scope. Feb 9 12:16:41.022924 sshd[6635]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:41.024361 systemd[1]: sshd@143-139.178.89.23:22-147.75.109.163:58338.service: Deactivated successfully. 
Feb 9 12:16:41.024994 systemd[1]: session-79.scope: Deactivated successfully. Feb 9 12:16:41.025037 systemd-logind[1545]: Session 79 logged out. Waiting for processes to exit. Feb 9 12:16:41.025509 systemd-logind[1545]: Removed session 79. Feb 9 12:16:42.931418 sshd[6634]: Invalid user ryjxiao from 180.107.140.47 port 39270 Feb 9 12:16:42.937448 sshd[6634]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:42.938742 sshd[6634]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:16:42.938861 sshd[6634]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.107.140.47 Feb 9 12:16:42.940089 sshd[6634]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:45.213056 sshd[6634]: Failed password for invalid user ryjxiao from 180.107.140.47 port 39270 ssh2 Feb 9 12:16:45.920090 sshd[6634]: Received disconnect from 180.107.140.47 port 39270:11: Bye Bye [preauth] Feb 9 12:16:45.920090 sshd[6634]: Disconnected from invalid user ryjxiao 180.107.140.47 port 39270 [preauth] Feb 9 12:16:45.922596 systemd[1]: sshd@142-139.178.89.23:22-180.107.140.47:39270.service: Deactivated successfully. Feb 9 12:16:46.029012 systemd[1]: Started sshd@144-139.178.89.23:22-147.75.109.163:59824.service. Feb 9 12:16:46.065776 sshd[6662]: Accepted publickey for core from 147.75.109.163 port 59824 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:46.066443 sshd[6662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:46.068941 systemd-logind[1545]: New session 80 of user core. Feb 9 12:16:46.069443 systemd[1]: Started session-80.scope. Feb 9 12:16:46.156695 sshd[6662]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:46.158196 systemd[1]: sshd@144-139.178.89.23:22-147.75.109.163:59824.service: Deactivated successfully. Feb 9 12:16:46.158816 systemd[1]: session-80.scope: Deactivated successfully. Feb 9 12:16:46.158820 systemd-logind[1545]: Session 80 logged out. Waiting for processes to exit. Feb 9 12:16:46.159321 systemd-logind[1545]: Removed session 80. Feb 9 12:16:47.700625 systemd[1]: Started sshd@145-139.178.89.23:22-39.109.116.167:40983.service. Feb 9 12:16:48.826572 systemd[1]: Started sshd@146-139.178.89.23:22-119.91.207.218:36502.service. Feb 9 12:16:48.987006 sshd[6690]: Invalid user jesus from 39.109.116.167 port 40983 Feb 9 12:16:48.993178 sshd[6690]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:48.994186 sshd[6690]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:16:48.994302 sshd[6690]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167 Feb 9 12:16:48.995312 sshd[6690]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:50.756606 sshd[6690]: Failed password for invalid user jesus from 39.109.116.167 port 40983 ssh2 Feb 9 12:16:51.123085 sshd[6690]: Received disconnect from 39.109.116.167 port 40983:11: Bye Bye [preauth] Feb 9 12:16:51.123085 sshd[6690]: Disconnected from invalid user jesus 39.109.116.167 port 40983 [preauth] Feb 9 12:16:51.124493 systemd[1]: sshd@145-139.178.89.23:22-39.109.116.167:40983.service: Deactivated successfully. Feb 9 12:16:51.164081 systemd[1]: Started sshd@147-139.178.89.23:22-147.75.109.163:59838.service. 
Feb 9 12:16:51.204390 sshd[6695]: Accepted publickey for core from 147.75.109.163 port 59838 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:51.205161 sshd[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:51.207959 systemd-logind[1545]: New session 81 of user core. Feb 9 12:16:51.208492 systemd[1]: Started session-81.scope. Feb 9 12:16:51.295220 sshd[6695]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:51.296786 systemd[1]: sshd@147-139.178.89.23:22-147.75.109.163:59838.service: Deactivated successfully. Feb 9 12:16:51.297478 systemd[1]: session-81.scope: Deactivated successfully. Feb 9 12:16:51.297522 systemd-logind[1545]: Session 81 logged out. Waiting for processes to exit. Feb 9 12:16:51.298075 systemd-logind[1545]: Removed session 81. Feb 9 12:16:55.764600 systemd[1]: Started sshd@148-139.178.89.23:22-209.97.179.25:42924.service. Feb 9 12:16:56.302535 systemd[1]: Started sshd@149-139.178.89.23:22-147.75.109.163:60164.service. Feb 9 12:16:56.339024 sshd[6722]: Accepted publickey for core from 147.75.109.163 port 60164 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:16:56.339693 sshd[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:16:56.342440 systemd-logind[1545]: New session 82 of user core. Feb 9 12:16:56.342823 systemd[1]: Started session-82.scope. Feb 9 12:16:56.428715 sshd[6722]: pam_unix(sshd:session): session closed for user core Feb 9 12:16:56.429957 systemd[1]: sshd@149-139.178.89.23:22-147.75.109.163:60164.service: Deactivated successfully. Feb 9 12:16:56.430593 systemd[1]: session-82.scope: Deactivated successfully. Feb 9 12:16:56.430640 systemd-logind[1545]: Session 82 logged out. Waiting for processes to exit. Feb 9 12:16:56.431100 systemd-logind[1545]: Removed session 82. Feb 9 12:16:56.591880 sshd[6720]: Invalid user sanazvpn from 209.97.179.25 port 42924 Feb 9 12:16:56.597782 sshd[6720]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:56.598908 sshd[6720]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:16:56.599001 sshd[6720]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:16:56.600031 sshd[6720]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:57.294443 systemd[1]: Started sshd@150-139.178.89.23:22-198.12.118.109:61949.service. Feb 9 12:16:57.479741 sshd[6747]: Invalid user masi from 198.12.118.109 port 61949 Feb 9 12:16:57.485732 sshd[6747]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:57.486757 sshd[6747]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:16:57.486845 sshd[6747]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:16:57.487843 sshd[6747]: pam_faillock(sshd:auth): User unknown Feb 9 12:16:58.461436 sshd[6720]: Failed password for invalid user sanazvpn from 209.97.179.25 port 42924 ssh2 Feb 9 12:16:59.715646 sshd[6720]: Received disconnect from 209.97.179.25 port 42924:11: Bye Bye [preauth] Feb 9 12:16:59.715646 sshd[6720]: Disconnected from invalid user sanazvpn 209.97.179.25 port 42924 [preauth] Feb 9 12:16:59.718101 systemd[1]: sshd@148-139.178.89.23:22-209.97.179.25:42924.service: Deactivated successfully. Feb 9 12:16:59.820846 sshd[6747]: Failed password for invalid user masi from 198.12.118.109 port 61949 ssh2 Feb 9 12:17:01.435330 systemd[1]: Started sshd@151-139.178.89.23:22-147.75.109.163:60172.service. 
Feb 9 12:17:01.472024 sshd[6751]: Accepted publickey for core from 147.75.109.163 port 60172 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:17:01.472929 sshd[6751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:17:01.475981 systemd-logind[1545]: New session 83 of user core. Feb 9 12:17:01.476816 systemd[1]: Started session-83.scope. Feb 9 12:17:01.563028 sshd[6751]: pam_unix(sshd:session): session closed for user core Feb 9 12:17:01.564609 systemd[1]: sshd@151-139.178.89.23:22-147.75.109.163:60172.service: Deactivated successfully. Feb 9 12:17:01.565273 systemd[1]: session-83.scope: Deactivated successfully. Feb 9 12:17:01.565289 systemd-logind[1545]: Session 83 logged out. Waiting for processes to exit. Feb 9 12:17:01.566021 systemd-logind[1545]: Removed session 83. Feb 9 12:17:01.983891 sshd[6747]: Received disconnect from 198.12.118.109 port 61949:11: Bye Bye [preauth] Feb 9 12:17:01.983891 sshd[6747]: Disconnected from invalid user masi 198.12.118.109 port 61949 [preauth] Feb 9 12:17:01.986399 systemd[1]: sshd@150-139.178.89.23:22-198.12.118.109:61949.service: Deactivated successfully. Feb 9 12:17:06.569525 systemd[1]: Started sshd@152-139.178.89.23:22-147.75.109.163:46534.service. Feb 9 12:17:06.657573 sshd[6778]: Accepted publickey for core from 147.75.109.163 port 46534 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:17:06.658480 sshd[6778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:17:06.661329 systemd-logind[1545]: New session 84 of user core. Feb 9 12:17:06.662093 systemd[1]: Started session-84.scope. Feb 9 12:17:06.743443 sshd[6778]: pam_unix(sshd:session): session closed for user core Feb 9 12:17:06.744910 systemd[1]: sshd@152-139.178.89.23:22-147.75.109.163:46534.service: Deactivated successfully. Feb 9 12:17:06.745558 systemd[1]: session-84.scope: Deactivated successfully. Feb 9 12:17:06.745599 systemd-logind[1545]: Session 84 logged out. Waiting for processes to exit. Feb 9 12:17:06.746114 systemd-logind[1545]: Removed session 84. Feb 9 12:17:11.750145 systemd[1]: Started sshd@153-139.178.89.23:22-147.75.109.163:46550.service. Feb 9 12:17:11.786876 sshd[6807]: Accepted publickey for core from 147.75.109.163 port 46550 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:17:11.787523 sshd[6807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:17:11.789909 systemd-logind[1545]: New session 85 of user core. Feb 9 12:17:11.790359 systemd[1]: Started session-85.scope. Feb 9 12:17:11.876440 sshd[6807]: pam_unix(sshd:session): session closed for user core Feb 9 12:17:11.878057 systemd[1]: sshd@153-139.178.89.23:22-147.75.109.163:46550.service: Deactivated successfully. Feb 9 12:17:11.878802 systemd[1]: session-85.scope: Deactivated successfully. Feb 9 12:17:11.878844 systemd-logind[1545]: Session 85 logged out. Waiting for processes to exit. Feb 9 12:17:11.879454 systemd-logind[1545]: Removed session 85. Feb 9 12:17:16.883607 systemd[1]: Started sshd@154-139.178.89.23:22-147.75.109.163:41560.service. Feb 9 12:17:16.920220 sshd[6833]: Accepted publickey for core from 147.75.109.163 port 41560 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:17:16.921166 sshd[6833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:17:16.924328 systemd-logind[1545]: New session 86 of user core. Feb 9 12:17:16.925030 systemd[1]: Started session-86.scope. 
Feb 9 12:17:17.013415 sshd[6833]: pam_unix(sshd:session): session closed for user core Feb 9 12:17:17.014920 systemd[1]: sshd@154-139.178.89.23:22-147.75.109.163:41560.service: Deactivated successfully. Feb 9 12:17:17.015531 systemd[1]: session-86.scope: Deactivated successfully. Feb 9 12:17:17.015547 systemd-logind[1545]: Session 86 logged out. Waiting for processes to exit. Feb 9 12:17:17.015985 systemd-logind[1545]: Removed session 86. Feb 9 12:17:22.020446 systemd[1]: Started sshd@155-139.178.89.23:22-147.75.109.163:41574.service. Feb 9 12:17:22.056869 sshd[6857]: Accepted publickey for core from 147.75.109.163 port 41574 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:17:22.057645 sshd[6857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:17:22.060013 systemd-logind[1545]: New session 87 of user core. Feb 9 12:17:22.060621 systemd[1]: Started session-87.scope. Feb 9 12:17:22.146602 sshd[6857]: pam_unix(sshd:session): session closed for user core Feb 9 12:17:22.148062 systemd[1]: Started sshd@156-139.178.89.23:22-147.75.109.163:41586.service. Feb 9 12:17:22.148362 systemd[1]: sshd@155-139.178.89.23:22-147.75.109.163:41574.service: Deactivated successfully. Feb 9 12:17:22.148909 systemd-logind[1545]: Session 87 logged out. Waiting for processes to exit. Feb 9 12:17:22.148946 systemd[1]: session-87.scope: Deactivated successfully. Feb 9 12:17:22.149353 systemd-logind[1545]: Removed session 87. Feb 9 12:17:22.184945 sshd[6881]: Accepted publickey for core from 147.75.109.163 port 41586 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:17:22.185855 sshd[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:17:22.188931 systemd-logind[1545]: New session 88 of user core. Feb 9 12:17:22.189531 systemd[1]: Started session-88.scope. 
Feb 9 12:17:23.553953 env[1559]: time="2024-02-09T12:17:23.553925951Z" level=info msg="StopContainer for \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\" with timeout 30 (s)" Feb 9 12:17:23.554241 env[1559]: time="2024-02-09T12:17:23.554151482Z" level=info msg="Stop container \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\" with signal terminated" Feb 9 12:17:23.575981 env[1559]: time="2024-02-09T12:17:23.575941157Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 12:17:23.578454 env[1559]: time="2024-02-09T12:17:23.578441012Z" level=info msg="StopContainer for \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\" with timeout 1 (s)" Feb 9 12:17:23.578551 env[1559]: time="2024-02-09T12:17:23.578539633Z" level=info msg="Stop container \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\" with signal terminated" Feb 9 12:17:23.599479 systemd-networkd[1414]: lxc_health: Link DOWN Feb 9 12:17:23.599499 systemd-networkd[1414]: lxc_health: Lost carrier Feb 9 12:17:23.601132 env[1559]: time="2024-02-09T12:17:23.601005487Z" level=info msg="shim disconnected" id=e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f Feb 9 12:17:23.601507 env[1559]: time="2024-02-09T12:17:23.601129803Z" level=warning msg="cleaning up after shim disconnected" id=e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f namespace=k8s.io Feb 9 12:17:23.601507 env[1559]: time="2024-02-09T12:17:23.601169741Z" level=info msg="cleaning up dead shim" Feb 9 12:17:23.601617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f-rootfs.mount: Deactivated successfully. Feb 9 12:17:23.617476 env[1559]: time="2024-02-09T12:17:23.617387135Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6947 runtime=io.containerd.runc.v2\n" Feb 9 12:17:23.619523 env[1559]: time="2024-02-09T12:17:23.619418210Z" level=info msg="StopContainer for \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\" returns successfully" Feb 9 12:17:23.620531 env[1559]: time="2024-02-09T12:17:23.620463030Z" level=info msg="StopPodSandbox for \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\"" Feb 9 12:17:23.620781 env[1559]: time="2024-02-09T12:17:23.620619702Z" level=info msg="Container to stop \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 12:17:23.626218 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0-shm.mount: Deactivated successfully. 
Feb 9 12:17:23.666709 env[1559]: time="2024-02-09T12:17:23.666629574Z" level=info msg="shim disconnected" id=b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0 Feb 9 12:17:23.666979 env[1559]: time="2024-02-09T12:17:23.666710254Z" level=warning msg="cleaning up after shim disconnected" id=b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0 namespace=k8s.io Feb 9 12:17:23.666979 env[1559]: time="2024-02-09T12:17:23.666728118Z" level=info msg="cleaning up dead shim" Feb 9 12:17:23.666831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0-rootfs.mount: Deactivated successfully. Feb 9 12:17:23.675463 env[1559]: time="2024-02-09T12:17:23.675390071Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6990 runtime=io.containerd.runc.v2\n" Feb 9 12:17:23.675831 env[1559]: time="2024-02-09T12:17:23.675769231Z" level=info msg="TearDown network for sandbox \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\" successfully" Feb 9 12:17:23.675831 env[1559]: time="2024-02-09T12:17:23.675799851Z" level=info msg="StopPodSandbox for \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\" returns successfully" Feb 9 12:17:23.692332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c-rootfs.mount: Deactivated successfully. Feb 9 12:17:23.692612 env[1559]: time="2024-02-09T12:17:23.692372884Z" level=info msg="shim disconnected" id=538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c Feb 9 12:17:23.692612 env[1559]: time="2024-02-09T12:17:23.692427015Z" level=warning msg="cleaning up after shim disconnected" id=538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c namespace=k8s.io Feb 9 12:17:23.692612 env[1559]: time="2024-02-09T12:17:23.692442384Z" level=info msg="cleaning up dead shim" Feb 9 12:17:23.701983 env[1559]: time="2024-02-09T12:17:23.701905553Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7008 runtime=io.containerd.runc.v2\n" Feb 9 12:17:23.703075 env[1559]: time="2024-02-09T12:17:23.703010164Z" level=info msg="StopContainer for \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\" returns successfully" Feb 9 12:17:23.703576 env[1559]: time="2024-02-09T12:17:23.703513242Z" level=info msg="StopPodSandbox for \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\"" Feb 9 12:17:23.703683 env[1559]: time="2024-02-09T12:17:23.703588855Z" level=info msg="Container to stop \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 12:17:23.703683 env[1559]: time="2024-02-09T12:17:23.703613528Z" level=info msg="Container to stop \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 12:17:23.703683 env[1559]: time="2024-02-09T12:17:23.703630401Z" level=info msg="Container to stop \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 12:17:23.703683 env[1559]: time="2024-02-09T12:17:23.703645901Z" level=info msg="Container to stop \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Feb 9 12:17:23.703683 env[1559]: time="2024-02-09T12:17:23.703660253Z" level=info msg="Container to stop \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 12:17:23.743161 env[1559]: time="2024-02-09T12:17:23.743075568Z" level=info msg="shim disconnected" id=fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d Feb 9 12:17:23.743543 env[1559]: time="2024-02-09T12:17:23.743159181Z" level=warning msg="cleaning up after shim disconnected" id=fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d namespace=k8s.io Feb 9 12:17:23.743543 env[1559]: time="2024-02-09T12:17:23.743187301Z" level=info msg="cleaning up dead shim" Feb 9 12:17:23.753984 env[1559]: time="2024-02-09T12:17:23.753906482Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7040 runtime=io.containerd.runc.v2\n" Feb 9 12:17:23.754391 env[1559]: time="2024-02-09T12:17:23.754318853Z" level=info msg="TearDown network for sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" successfully" Feb 9 12:17:23.754391 env[1559]: time="2024-02-09T12:17:23.754355527Z" level=info msg="StopPodSandbox for \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" returns successfully" Feb 9 12:17:23.865761 kubelet[2733]: I0209 12:17:23.865556 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-hubble-tls\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.865761 kubelet[2733]: I0209 12:17:23.865653 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-etc-cni-netd\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.865761 kubelet[2733]: I0209 12:17:23.865711 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-bpf-maps\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.867326 kubelet[2733]: I0209 12:17:23.865756 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.867326 kubelet[2733]: I0209 12:17:23.865790 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cni-path\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.867326 kubelet[2733]: I0209 12:17:23.865847 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cni-path" (OuterVolumeSpecName: "cni-path") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.867326 kubelet[2733]: I0209 12:17:23.865839 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.867326 kubelet[2733]: I0209 12:17:23.865947 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-clustermesh-secrets\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.867885 kubelet[2733]: I0209 12:17:23.866016 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-xtables-lock\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.867885 kubelet[2733]: I0209 12:17:23.866073 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-lib-modules\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.867885 kubelet[2733]: I0209 12:17:23.866099 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.867885 kubelet[2733]: I0209 12:17:23.866139 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-config-path\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.867885 kubelet[2733]: I0209 12:17:23.866231 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-hostproc\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.867885 kubelet[2733]: I0209 12:17:23.866233 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.868578 kubelet[2733]: I0209 12:17:23.866303 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7wh8\" (UniqueName: \"kubernetes.io/projected/132653b9-67de-4d54-9702-f91f31f47975-kube-api-access-q7wh8\") pod \"132653b9-67de-4d54-9702-f91f31f47975\" (UID: \"132653b9-67de-4d54-9702-f91f31f47975\") " Feb 9 12:17:23.868578 kubelet[2733]: I0209 12:17:23.866283 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-hostproc" (OuterVolumeSpecName: "hostproc") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.868578 kubelet[2733]: I0209 12:17:23.866362 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-host-proc-sys-net\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.868578 kubelet[2733]: I0209 12:17:23.866415 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-cgroup\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.868578 kubelet[2733]: I0209 12:17:23.866477 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84k22\" (UniqueName: \"kubernetes.io/projected/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-kube-api-access-84k22\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.869100 kubelet[2733]: I0209 12:17:23.866487 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.869100 kubelet[2733]: I0209 12:17:23.866530 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-run\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.869100 kubelet[2733]: I0209 12:17:23.866587 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-host-proc-sys-kernel\") pod \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\" (UID: \"2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0\") " Feb 9 12:17:23.869100 kubelet[2733]: I0209 12:17:23.866598 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.869100 kubelet[2733]: I0209 12:17:23.866687 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/132653b9-67de-4d54-9702-f91f31f47975-cilium-config-path\") pod \"132653b9-67de-4d54-9702-f91f31f47975\" (UID: \"132653b9-67de-4d54-9702-f91f31f47975\") " Feb 9 12:17:23.869666 kubelet[2733]: I0209 12:17:23.866722 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.869666 kubelet[2733]: I0209 12:17:23.866654 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:23.869666 kubelet[2733]: W0209 12:17:23.866802 2733 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 12:17:23.869666 kubelet[2733]: I0209 12:17:23.866841 2733 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-hostproc\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.869666 kubelet[2733]: I0209 12:17:23.866891 2733 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-host-proc-sys-net\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.869666 kubelet[2733]: I0209 12:17:23.866952 2733 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-cgroup\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.869666 kubelet[2733]: I0209 12:17:23.866994 2733 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-xtables-lock\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.870368 kubelet[2733]: I0209 12:17:23.867044 2733 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-etc-cni-netd\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.870368 kubelet[2733]: I0209 12:17:23.867086 2733 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-bpf-maps\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.870368 kubelet[2733]: I0209 12:17:23.867117 2733 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cni-path\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.870368 kubelet[2733]: 
I0209 12:17:23.867161 2733 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-lib-modules\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.870368 kubelet[2733]: W0209 12:17:23.867198 2733 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/132653b9-67de-4d54-9702-f91f31f47975/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 12:17:23.872826 kubelet[2733]: I0209 12:17:23.872736 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 12:17:23.873090 kubelet[2733]: I0209 12:17:23.872883 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 12:17:23.873648 kubelet[2733]: I0209 12:17:23.873542 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-kube-api-access-84k22" (OuterVolumeSpecName: "kube-api-access-84k22") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "kube-api-access-84k22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 12:17:23.873858 kubelet[2733]: I0209 12:17:23.873702 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132653b9-67de-4d54-9702-f91f31f47975-kube-api-access-q7wh8" (OuterVolumeSpecName: "kube-api-access-q7wh8") pod "132653b9-67de-4d54-9702-f91f31f47975" (UID: "132653b9-67de-4d54-9702-f91f31f47975"). InnerVolumeSpecName "kube-api-access-q7wh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 12:17:23.875519 kubelet[2733]: I0209 12:17:23.875391 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" (UID: "2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 12:17:23.875739 kubelet[2733]: I0209 12:17:23.875637 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/132653b9-67de-4d54-9702-f91f31f47975-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "132653b9-67de-4d54-9702-f91f31f47975" (UID: "132653b9-67de-4d54-9702-f91f31f47975"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 12:17:23.967496 kubelet[2733]: I0209 12:17:23.967377 2733 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-clustermesh-secrets\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.967496 kubelet[2733]: I0209 12:17:23.967453 2733 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-config-path\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.967496 kubelet[2733]: I0209 12:17:23.967497 2733 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-q7wh8\" (UniqueName: \"kubernetes.io/projected/132653b9-67de-4d54-9702-f91f31f47975-kube-api-access-q7wh8\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.967999 kubelet[2733]: I0209 12:17:23.967532 2733 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-84k22\" (UniqueName: \"kubernetes.io/projected/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-kube-api-access-84k22\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.967999 kubelet[2733]: I0209 12:17:23.967571 2733 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-cilium-run\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.967999 kubelet[2733]: I0209 12:17:23.967612 2733 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.967999 kubelet[2733]: I0209 12:17:23.967645 2733 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/132653b9-67de-4d54-9702-f91f31f47975-cilium-config-path\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:23.967999 kubelet[2733]: I0209 12:17:23.967677 2733 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0-hubble-tls\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:24.181714 kubelet[2733]: I0209 12:17:24.181663 2733 scope.go:115] "RemoveContainer" containerID="e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f" Feb 9 12:17:24.184482 env[1559]: time="2024-02-09T12:17:24.184410321Z" level=info msg="RemoveContainer for \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\"" Feb 9 12:17:24.190506 env[1559]: time="2024-02-09T12:17:24.190394109Z" level=info msg="RemoveContainer for \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\" returns successfully" Feb 9 12:17:24.190951 kubelet[2733]: I0209 12:17:24.190885 2733 scope.go:115] "RemoveContainer" containerID="e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f" Feb 9 12:17:24.191595 env[1559]: time="2024-02-09T12:17:24.191375442Z" level=error msg="ContainerStatus for \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\": not found" Feb 9 12:17:24.191901 kubelet[2733]: E0209 12:17:24.191837 2733 remote_runtime.go:415] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\": not found" containerID="e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f" Feb 9 12:17:24.192051 kubelet[2733]: I0209 12:17:24.191923 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f} err="failed to get container status \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4cceadafaa1f904a837013a1504f1779d160df9cdddc735d7163e73c786721f\": not found" Feb 9 12:17:24.192051 kubelet[2733]: I0209 12:17:24.191973 2733 scope.go:115] "RemoveContainer" containerID="538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c" Feb 9 12:17:24.194425 env[1559]: time="2024-02-09T12:17:24.194342552Z" level=info msg="RemoveContainer for \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\"" Feb 9 12:17:24.198333 env[1559]: time="2024-02-09T12:17:24.198228596Z" level=info msg="RemoveContainer for \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\" returns successfully" Feb 9 12:17:24.198699 kubelet[2733]: I0209 12:17:24.198624 2733 scope.go:115] "RemoveContainer" containerID="f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353" Feb 9 12:17:24.201143 env[1559]: time="2024-02-09T12:17:24.201040711Z" level=info msg="RemoveContainer for \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\"" Feb 9 12:17:24.204938 env[1559]: time="2024-02-09T12:17:24.204868590Z" level=info msg="RemoveContainer for \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\" returns successfully" Feb 9 12:17:24.205332 kubelet[2733]: I0209 12:17:24.205276 2733 scope.go:115] "RemoveContainer" containerID="077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b" Feb 9 12:17:24.207931 env[1559]: time="2024-02-09T12:17:24.207829335Z" level=info msg="RemoveContainer for \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\"" Feb 9 12:17:24.212176 env[1559]: time="2024-02-09T12:17:24.212070801Z" level=info msg="RemoveContainer for \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\" returns successfully" Feb 9 12:17:24.212549 kubelet[2733]: I0209 12:17:24.212461 2733 scope.go:115] "RemoveContainer" containerID="509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1" Feb 9 12:17:24.214825 env[1559]: time="2024-02-09T12:17:24.214750665Z" level=info msg="RemoveContainer for \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\"" Feb 9 12:17:24.218082 env[1559]: time="2024-02-09T12:17:24.218032640Z" level=info msg="RemoveContainer for \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\" returns successfully" Feb 9 12:17:24.218358 kubelet[2733]: I0209 12:17:24.218291 2733 scope.go:115] "RemoveContainer" containerID="3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602" Feb 9 12:17:24.219660 env[1559]: time="2024-02-09T12:17:24.219615630Z" level=info msg="RemoveContainer for \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\"" Feb 9 12:17:24.222338 env[1559]: time="2024-02-09T12:17:24.222295895Z" level=info msg="RemoveContainer for \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\" returns successfully" Feb 9 12:17:24.222577 
kubelet[2733]: I0209 12:17:24.222554 2733 scope.go:115] "RemoveContainer" containerID="538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c" Feb 9 12:17:24.222901 env[1559]: time="2024-02-09T12:17:24.222818978Z" level=error msg="ContainerStatus for \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\": not found" Feb 9 12:17:24.223087 kubelet[2733]: E0209 12:17:24.223055 2733 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\": not found" containerID="538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c" Feb 9 12:17:24.223195 kubelet[2733]: I0209 12:17:24.223141 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c} err="failed to get container status \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c\": not found" Feb 9 12:17:24.223195 kubelet[2733]: I0209 12:17:24.223167 2733 scope.go:115] "RemoveContainer" containerID="f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353" Feb 9 12:17:24.223478 env[1559]: time="2024-02-09T12:17:24.223414635Z" level=error msg="ContainerStatus for \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\": not found" Feb 9 12:17:24.223626 kubelet[2733]: E0209 12:17:24.223610 2733 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\": not found" containerID="f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353" Feb 9 12:17:24.223691 kubelet[2733]: I0209 12:17:24.223643 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353} err="failed to get container status \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\": rpc error: code = NotFound desc = an error occurred when try to find container \"f39201629ceb8a723bda3ff62fe7fa7836ccf95f6083984c13f16c9268265353\": not found" Feb 9 12:17:24.223691 kubelet[2733]: I0209 12:17:24.223656 2733 scope.go:115] "RemoveContainer" containerID="077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b" Feb 9 12:17:24.223880 env[1559]: time="2024-02-09T12:17:24.223824804Z" level=error msg="ContainerStatus for \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\": not found" Feb 9 12:17:24.223999 kubelet[2733]: E0209 12:17:24.223983 2733 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\": not found" containerID="077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b" Feb 9 12:17:24.224056 kubelet[2733]: I0209 12:17:24.224022 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b} err="failed to get container status \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"077c1eb0c1de2a36e33b231a1bdeacf50f595a664c38b81a61e7d74b2cc8cf6b\": not found" Feb 9 12:17:24.224056 kubelet[2733]: I0209 12:17:24.224036 2733 scope.go:115] "RemoveContainer" containerID="509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1" Feb 9 12:17:24.224273 env[1559]: time="2024-02-09T12:17:24.224218981Z" level=error msg="ContainerStatus for \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\": not found" Feb 9 12:17:24.224434 kubelet[2733]: E0209 12:17:24.224419 2733 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\": not found" containerID="509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1" Feb 9 12:17:24.224493 kubelet[2733]: I0209 12:17:24.224453 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1} err="failed to get container status \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"509e5dc27a605570d1b16dd2fa4d043e3f9948afffa758eeba2aeaee836039c1\": not found" Feb 9 12:17:24.224493 kubelet[2733]: I0209 12:17:24.224466 2733 scope.go:115] "RemoveContainer" containerID="3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602" Feb 9 12:17:24.224657 env[1559]: time="2024-02-09T12:17:24.224608650Z" level=error msg="ContainerStatus for \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\": not found" Feb 9 12:17:24.224780 kubelet[2733]: E0209 12:17:24.224763 2733 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\": not found" containerID="3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602" Feb 9 12:17:24.224834 kubelet[2733]: I0209 12:17:24.224805 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602} err="failed to get container status \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\": rpc error: code = NotFound desc = an error occurred when try to find container \"3813df3a45bd369b81957286b754572a7f0eb9e1750827fd8e0c17642300b602\": not found" Feb 9 12:17:24.574296 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d-rootfs.mount: Deactivated successfully. Feb 9 12:17:24.574392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d-shm.mount: Deactivated successfully. Feb 9 12:17:24.574468 systemd[1]: var-lib-kubelet-pods-132653b9\x2d67de\x2d4d54\x2d9702\x2df91f31f47975-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq7wh8.mount: Deactivated successfully. Feb 9 12:17:24.574548 systemd[1]: var-lib-kubelet-pods-2cb44fe4\x2de89d\x2d48df\x2d8bc4\x2deb3df7fca3b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d84k22.mount: Deactivated successfully. Feb 9 12:17:24.574647 systemd[1]: var-lib-kubelet-pods-2cb44fe4\x2de89d\x2d48df\x2d8bc4\x2deb3df7fca3b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 12:17:24.574766 systemd[1]: var-lib-kubelet-pods-2cb44fe4\x2de89d\x2d48df\x2d8bc4\x2deb3df7fca3b0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 12:17:25.361170 kubelet[2733]: E0209 12:17:25.361075 2733 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 12:17:25.504404 sshd[6881]: pam_unix(sshd:session): session closed for user core Feb 9 12:17:25.506098 systemd[1]: Started sshd@157-139.178.89.23:22-147.75.109.163:50736.service. Feb 9 12:17:25.506468 systemd[1]: sshd@156-139.178.89.23:22-147.75.109.163:41586.service: Deactivated successfully. Feb 9 12:17:25.507185 systemd-logind[1545]: Session 88 logged out. Waiting for processes to exit. Feb 9 12:17:25.507248 systemd[1]: session-88.scope: Deactivated successfully. Feb 9 12:17:25.507726 systemd-logind[1545]: Removed session 88. Feb 9 12:17:25.542919 sshd[7059]: Accepted publickey for core from 147.75.109.163 port 50736 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:17:25.543891 sshd[7059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:17:25.547132 systemd-logind[1545]: New session 89 of user core. Feb 9 12:17:25.547864 systemd[1]: Started session-89.scope. Feb 9 12:17:26.037608 sshd[7059]: pam_unix(sshd:session): session closed for user core Feb 9 12:17:26.039727 systemd[1]: Started sshd@158-139.178.89.23:22-147.75.109.163:50740.service. Feb 9 12:17:26.040109 systemd[1]: sshd@157-139.178.89.23:22-147.75.109.163:50736.service: Deactivated successfully. Feb 9 12:17:26.040782 systemd-logind[1545]: Session 89 logged out. Waiting for processes to exit. Feb 9 12:17:26.040835 systemd[1]: session-89.scope: Deactivated successfully. Feb 9 12:17:26.041434 systemd-logind[1545]: Removed session 89. 
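The RemoveContainer / "ContainerStatus ... not found" pairs logged above are the kubelet asking containerd, over the CRI gRPC API, about containers that have already been deleted; a NotFound status there is expected and is treated as the deletion having already completed. Below is a minimal sketch of that check against a CRI v1 runtime service; the containerd socket path and the standalone client are assumptions for illustration, not the kubelet's own code.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: containerd's CRI socket at its default path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Container ID taken from the log above.
	id := "538e1225cba384809b5988ce1b6af51ef617c5c299b9388463161cfc0ad34f1c"

	_, err = rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	switch {
	case err == nil:
		fmt.Println("container still known to the runtime")
	case status.Code(err) == codes.NotFound:
		// This is the condition logged as "DeleteContainer returned error ... not found":
		// the container is already gone, so the deletion is effectively complete.
		fmt.Println("container already removed; nothing to do")
	default:
		log.Fatalf("ContainerStatus failed: %v", err)
	}
}
```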
Feb 9 12:17:26.044421 kubelet[2733]: I0209 12:17:26.044399 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:17:26.044508 kubelet[2733]: E0209 12:17:26.044443 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" containerName="cilium-agent" Feb 9 12:17:26.044508 kubelet[2733]: E0209 12:17:26.044454 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" containerName="mount-cgroup" Feb 9 12:17:26.044508 kubelet[2733]: E0209 12:17:26.044462 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" containerName="apply-sysctl-overwrites" Feb 9 12:17:26.044508 kubelet[2733]: E0209 12:17:26.044468 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="132653b9-67de-4d54-9702-f91f31f47975" containerName="cilium-operator" Feb 9 12:17:26.044508 kubelet[2733]: E0209 12:17:26.044474 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" containerName="mount-bpf-fs" Feb 9 12:17:26.044508 kubelet[2733]: E0209 12:17:26.044481 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" containerName="clean-cilium-state" Feb 9 12:17:26.044508 kubelet[2733]: I0209 12:17:26.044504 2733 memory_manager.go:346] "RemoveStaleState removing state" podUID="132653b9-67de-4d54-9702-f91f31f47975" containerName="cilium-operator" Feb 9 12:17:26.044648 kubelet[2733]: I0209 12:17:26.044511 2733 memory_manager.go:346] "RemoveStaleState removing state" podUID="2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0" containerName="cilium-agent" Feb 9 12:17:26.077748 sshd[7083]: Accepted publickey for core from 147.75.109.163 port 50740 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:17:26.078583 sshd[7083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:17:26.080970 systemd-logind[1545]: New session 90 of user core. Feb 9 12:17:26.081478 systemd[1]: Started session-90.scope. 
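The "var-lib-kubelet-pods-...-volumes-kubernetes.io\x7eprojected-....mount" unit names in the volume cleanup above are systemd's escaped form of the mount paths: slashes become "-" and most other punctuation becomes a \xNN escape (\x2d for "-", \x7e for "~"). The sketch below approximates that path-escaping rule for illustration; the real tool is systemd-escape --path, and edge cases such as a leading dot are ignored here.

```go
package main

import (
	"fmt"
	"strings"
)

// escapeSystemdPath approximates `systemd-escape --path`: leading and trailing
// slashes are dropped, "/" becomes "-", and any byte that is not an ASCII
// letter, digit, ":", "_" or "." is rewritten as a lowercase \xNN escape.
func escapeSystemdPath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Path reconstructed from one of the unit names in the log above.
	p := "/var/lib/kubelet/pods/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0/volumes/kubernetes.io~projected/hubble-tls"
	fmt.Println(escapeSystemdPath(p) + ".mount")
	// Matches the logged unit:
	// var-lib-kubelet-pods-2cb44fe4\x2de89d\x2d48df\x2d8bc4\x2deb3df7fca3b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount
}
```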
Feb 9 12:17:26.083747 kubelet[2733]: I0209 12:17:26.083733 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-host-proc-sys-kernel\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083786 kubelet[2733]: I0209 12:17:26.083761 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-config-path\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083786 kubelet[2733]: I0209 12:17:26.083774 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-bpf-maps\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083832 kubelet[2733]: I0209 12:17:26.083797 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-cgroup\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083832 kubelet[2733]: I0209 12:17:26.083809 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cni-path\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083832 kubelet[2733]: I0209 12:17:26.083820 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-lib-modules\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083900 kubelet[2733]: I0209 12:17:26.083840 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-hubble-tls\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083900 kubelet[2733]: I0209 12:17:26.083875 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-clustermesh-secrets\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083900 kubelet[2733]: I0209 12:17:26.083895 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwd89\" (UniqueName: \"kubernetes.io/projected/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-kube-api-access-hwd89\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083995 kubelet[2733]: I0209 12:17:26.083933 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-run\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.083995 kubelet[2733]: I0209 12:17:26.083976 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-hostproc\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.084061 kubelet[2733]: I0209 12:17:26.084002 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-etc-cni-netd\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.084061 kubelet[2733]: I0209 12:17:26.084026 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-host-proc-sys-net\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.084061 kubelet[2733]: I0209 12:17:26.084058 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-xtables-lock\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.084130 kubelet[2733]: I0209 12:17:26.084085 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-ipsec-secrets\") pod \"cilium-gk4ck\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " pod="kube-system/cilium-gk4ck" Feb 9 12:17:26.133027 kubelet[2733]: I0209 12:17:26.132940 2733 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=132653b9-67de-4d54-9702-f91f31f47975 path="/var/lib/kubelet/pods/132653b9-67de-4d54-9702-f91f31f47975/volumes" Feb 9 12:17:26.134077 kubelet[2733]: I0209 12:17:26.134000 2733 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0 path="/var/lib/kubelet/pods/2cb44fe4-e89d-48df-8bc4-eb3df7fca3b0/volumes" Feb 9 12:17:26.221702 sshd[7083]: pam_unix(sshd:session): session closed for user core Feb 9 12:17:26.223659 systemd[1]: Started sshd@159-139.178.89.23:22-147.75.109.163:50746.service. Feb 9 12:17:26.224083 systemd[1]: sshd@158-139.178.89.23:22-147.75.109.163:50740.service: Deactivated successfully. Feb 9 12:17:26.224838 systemd-logind[1545]: Session 90 logged out. Waiting for processes to exit. Feb 9 12:17:26.224877 systemd[1]: session-90.scope: Deactivated successfully. Feb 9 12:17:26.225530 systemd-logind[1545]: Removed session 90. Feb 9 12:17:26.263659 sshd[7113]: Accepted publickey for core from 147.75.109.163 port 50746 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 12:17:26.264686 sshd[7113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 12:17:26.267961 systemd-logind[1545]: New session 91 of user core. Feb 9 12:17:26.268545 systemd[1]: Started session-91.scope. 
Feb 9 12:17:26.348615 env[1559]: time="2024-02-09T12:17:26.348410469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gk4ck,Uid:78fc9a09-08af-4ab0-9de4-fbb86552dbd6,Namespace:kube-system,Attempt:0,}" Feb 9 12:17:26.362992 env[1559]: time="2024-02-09T12:17:26.362888968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:17:26.362992 env[1559]: time="2024-02-09T12:17:26.362941677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:17:26.362992 env[1559]: time="2024-02-09T12:17:26.362961708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:17:26.363260 env[1559]: time="2024-02-09T12:17:26.363181146Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0 pid=7139 runtime=io.containerd.runc.v2 Feb 9 12:17:26.423128 env[1559]: time="2024-02-09T12:17:26.423078704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gk4ck,Uid:78fc9a09-08af-4ab0-9de4-fbb86552dbd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\"" Feb 9 12:17:26.425664 env[1559]: time="2024-02-09T12:17:26.425633590Z" level=info msg="CreateContainer within sandbox \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 12:17:26.432452 env[1559]: time="2024-02-09T12:17:26.432388332Z" level=info msg="CreateContainer within sandbox \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"931a94eb7255fbbbb2de1fff504e20159f9ea8691076f13ac8dfcfa90a753a81\"" Feb 9 12:17:26.432808 env[1559]: time="2024-02-09T12:17:26.432742565Z" level=info msg="StartContainer for \"931a94eb7255fbbbb2de1fff504e20159f9ea8691076f13ac8dfcfa90a753a81\"" Feb 9 12:17:26.522053 env[1559]: time="2024-02-09T12:17:26.521974441Z" level=info msg="StartContainer for \"931a94eb7255fbbbb2de1fff504e20159f9ea8691076f13ac8dfcfa90a753a81\" returns successfully" Feb 9 12:17:26.586418 env[1559]: time="2024-02-09T12:17:26.586261886Z" level=info msg="shim disconnected" id=931a94eb7255fbbbb2de1fff504e20159f9ea8691076f13ac8dfcfa90a753a81 Feb 9 12:17:26.586418 env[1559]: time="2024-02-09T12:17:26.586368084Z" level=warning msg="cleaning up after shim disconnected" id=931a94eb7255fbbbb2de1fff504e20159f9ea8691076f13ac8dfcfa90a753a81 namespace=k8s.io Feb 9 12:17:26.586418 env[1559]: time="2024-02-09T12:17:26.586396626Z" level=info msg="cleaning up dead shim" Feb 9 12:17:26.604887 env[1559]: time="2024-02-09T12:17:26.604672760Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7225 runtime=io.containerd.runc.v2\n" Feb 9 12:17:27.200579 env[1559]: time="2024-02-09T12:17:27.200552747Z" level=info msg="StopPodSandbox for \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\"" Feb 9 12:17:27.200699 env[1559]: time="2024-02-09T12:17:27.200594807Z" level=info msg="Container to stop \"931a94eb7255fbbbb2de1fff504e20159f9ea8691076f13ac8dfcfa90a753a81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 12:17:27.202364 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0-shm.mount: Deactivated successfully. Feb 9 12:17:27.244128 env[1559]: time="2024-02-09T12:17:27.243989683Z" level=info msg="shim disconnected" id=116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0 Feb 9 12:17:27.244564 env[1559]: time="2024-02-09T12:17:27.244150359Z" level=warning msg="cleaning up after shim disconnected" id=116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0 namespace=k8s.io Feb 9 12:17:27.244564 env[1559]: time="2024-02-09T12:17:27.244220754Z" level=info msg="cleaning up dead shim" Feb 9 12:17:27.249525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0-rootfs.mount: Deactivated successfully. Feb 9 12:17:27.262053 env[1559]: time="2024-02-09T12:17:27.261906754Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7257 runtime=io.containerd.runc.v2\n" Feb 9 12:17:27.262778 env[1559]: time="2024-02-09T12:17:27.262677571Z" level=info msg="TearDown network for sandbox \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\" successfully" Feb 9 12:17:27.262778 env[1559]: time="2024-02-09T12:17:27.262742007Z" level=info msg="StopPodSandbox for \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\" returns successfully" Feb 9 12:17:27.293703 kubelet[2733]: I0209 12:17:27.293635 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-cgroup\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.294805 kubelet[2733]: I0209 12:17:27.293728 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cni-path\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.294805 kubelet[2733]: I0209 12:17:27.293788 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-run\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.294805 kubelet[2733]: I0209 12:17:27.293779 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.294805 kubelet[2733]: I0209 12:17:27.293850 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-host-proc-sys-kernel\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.294805 kubelet[2733]: I0209 12:17:27.293908 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-xtables-lock\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.294805 kubelet[2733]: I0209 12:17:27.293895 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.295642 kubelet[2733]: I0209 12:17:27.293895 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cni-path" (OuterVolumeSpecName: "cni-path") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.295642 kubelet[2733]: I0209 12:17:27.293977 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-ipsec-secrets\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.295642 kubelet[2733]: I0209 12:17:27.293969 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.295642 kubelet[2733]: I0209 12:17:27.294030 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-lib-modules\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.295642 kubelet[2733]: I0209 12:17:27.294029 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.296318 kubelet[2733]: I0209 12:17:27.294090 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-hubble-tls\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.296318 kubelet[2733]: I0209 12:17:27.294080 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.296318 kubelet[2733]: I0209 12:17:27.294152 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-etc-cni-netd\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.296318 kubelet[2733]: I0209 12:17:27.294224 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-hostproc\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.296318 kubelet[2733]: I0209 12:17:27.294283 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-host-proc-sys-net\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.296318 kubelet[2733]: I0209 12:17:27.294273 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.297028 kubelet[2733]: I0209 12:17:27.294336 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-bpf-maps\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.297028 kubelet[2733]: I0209 12:17:27.294344 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.297028 kubelet[2733]: I0209 12:17:27.294401 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwd89\" (UniqueName: \"kubernetes.io/projected/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-kube-api-access-hwd89\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.297028 kubelet[2733]: I0209 12:17:27.294380 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-hostproc" (OuterVolumeSpecName: "hostproc") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.297028 kubelet[2733]: I0209 12:17:27.294402 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 12:17:27.297620 kubelet[2733]: I0209 12:17:27.294477 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-clustermesh-secrets\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.297620 kubelet[2733]: I0209 12:17:27.294573 2733 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-config-path\") pod \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\" (UID: \"78fc9a09-08af-4ab0-9de4-fbb86552dbd6\") " Feb 9 12:17:27.297620 kubelet[2733]: I0209 12:17:27.294678 2733 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.297620 kubelet[2733]: I0209 12:17:27.294717 2733 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-cgroup\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.297620 kubelet[2733]: I0209 12:17:27.294749 2733 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cni-path\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.297620 kubelet[2733]: I0209 12:17:27.294782 2733 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-run\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.297620 kubelet[2733]: I0209 12:17:27.294811 2733 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-etc-cni-netd\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.298377 kubelet[2733]: I0209 12:17:27.294841 2733 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-xtables-lock\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.298377 kubelet[2733]: I0209 12:17:27.294893 2733 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-lib-modules\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.298377 kubelet[2733]: I0209 12:17:27.294933 2733 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-hostproc\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.298377 kubelet[2733]: I0209 12:17:27.294977 2733 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-host-proc-sys-net\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.298377 kubelet[2733]: I0209 12:17:27.295033 2733 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-bpf-maps\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.298377 kubelet[2733]: W0209 12:17:27.295034 2733 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/78fc9a09-08af-4ab0-9de4-fbb86552dbd6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 12:17:27.300312 kubelet[2733]: I0209 12:17:27.300137 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 12:17:27.300355 kubelet[2733]: I0209 12:17:27.300332 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 12:17:27.300502 kubelet[2733]: I0209 12:17:27.300458 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-kube-api-access-hwd89" (OuterVolumeSpecName: "kube-api-access-hwd89") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "kube-api-access-hwd89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 12:17:27.300502 kubelet[2733]: I0209 12:17:27.300471 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 12:17:27.300578 kubelet[2733]: I0209 12:17:27.300519 2733 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "78fc9a09-08af-4ab0-9de4-fbb86552dbd6" (UID: "78fc9a09-08af-4ab0-9de4-fbb86552dbd6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 12:17:27.301771 systemd[1]: var-lib-kubelet-pods-78fc9a09\x2d08af\x2d4ab0\x2d9de4\x2dfbb86552dbd6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhwd89.mount: Deactivated successfully. Feb 9 12:17:27.301874 systemd[1]: var-lib-kubelet-pods-78fc9a09\x2d08af\x2d4ab0\x2d9de4\x2dfbb86552dbd6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 12:17:27.301952 systemd[1]: var-lib-kubelet-pods-78fc9a09\x2d08af\x2d4ab0\x2d9de4\x2dfbb86552dbd6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 12:17:27.302027 systemd[1]: var-lib-kubelet-pods-78fc9a09\x2d08af\x2d4ab0\x2d9de4\x2dfbb86552dbd6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 12:17:27.395421 kubelet[2733]: I0209 12:17:27.395315 2733 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-clustermesh-secrets\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.395421 kubelet[2733]: I0209 12:17:27.395393 2733 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-config-path\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.395421 kubelet[2733]: I0209 12:17:27.395430 2733 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-hubble-tls\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.396081 kubelet[2733]: I0209 12:17:27.395471 2733 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:27.396081 kubelet[2733]: I0209 12:17:27.395507 2733 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-hwd89\" (UniqueName: \"kubernetes.io/projected/78fc9a09-08af-4ab0-9de4-fbb86552dbd6-kube-api-access-hwd89\") on node \"ci-3510.3.2-a-b58f4ff548\" DevicePath \"\"" Feb 9 12:17:28.206688 kubelet[2733]: I0209 12:17:28.206630 2733 scope.go:115] "RemoveContainer" containerID="931a94eb7255fbbbb2de1fff504e20159f9ea8691076f13ac8dfcfa90a753a81" Feb 9 12:17:28.209002 env[1559]: time="2024-02-09T12:17:28.208984861Z" level=info msg="RemoveContainer for \"931a94eb7255fbbbb2de1fff504e20159f9ea8691076f13ac8dfcfa90a753a81\"" Feb 9 12:17:28.210489 env[1559]: time="2024-02-09T12:17:28.210475691Z" level=info msg="RemoveContainer for \"931a94eb7255fbbbb2de1fff504e20159f9ea8691076f13ac8dfcfa90a753a81\" returns successfully" Feb 9 12:17:28.225844 kubelet[2733]: I0209 12:17:28.225823 2733 topology_manager.go:210] "Topology Admit Handler" Feb 9 12:17:28.225941 kubelet[2733]: E0209 12:17:28.225866 2733 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="78fc9a09-08af-4ab0-9de4-fbb86552dbd6" containerName="mount-cgroup" Feb 9 12:17:28.225941 kubelet[2733]: I0209 12:17:28.225892 2733 memory_manager.go:346] "RemoveStaleState removing state" podUID="78fc9a09-08af-4ab0-9de4-fbb86552dbd6" containerName="mount-cgroup" Feb 9 12:17:28.228683 systemd[1]: Started sshd@160-139.178.89.23:22-45.64.3.61:50768.service. Feb 9 12:17:28.302008 kubelet[2733]: I0209 12:17:28.301888 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2df7567b-649e-4732-9482-5061f452fb02-cilium-ipsec-secrets\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.302008 kubelet[2733]: I0209 12:17:28.301998 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-bpf-maps\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303053 kubelet[2733]: I0209 12:17:28.302136 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-cilium-cgroup\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303053 kubelet[2733]: I0209 12:17:28.302334 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-host-proc-sys-kernel\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303053 kubelet[2733]: I0209 12:17:28.302557 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-lib-modules\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303053 kubelet[2733]: I0209 12:17:28.302642 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-cni-path\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303053 kubelet[2733]: I0209 12:17:28.302796 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-etc-cni-netd\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303053 kubelet[2733]: I0209 12:17:28.302929 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2df7567b-649e-4732-9482-5061f452fb02-clustermesh-secrets\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303720 kubelet[2733]: I0209 12:17:28.303070 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/2df7567b-649e-4732-9482-5061f452fb02-hubble-tls\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303720 kubelet[2733]: I0209 12:17:28.303170 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-hostproc\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303720 kubelet[2733]: I0209 12:17:28.303352 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-host-proc-sys-net\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303720 kubelet[2733]: I0209 12:17:28.303439 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-587rp\" (UniqueName: \"kubernetes.io/projected/2df7567b-649e-4732-9482-5061f452fb02-kube-api-access-587rp\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303720 kubelet[2733]: I0209 12:17:28.303506 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-cilium-run\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.303720 kubelet[2733]: I0209 12:17:28.303617 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2df7567b-649e-4732-9482-5061f452fb02-xtables-lock\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.304394 kubelet[2733]: I0209 12:17:28.303731 2733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2df7567b-649e-4732-9482-5061f452fb02-cilium-config-path\") pod \"cilium-n25pv\" (UID: \"2df7567b-649e-4732-9482-5061f452fb02\") " pod="kube-system/cilium-n25pv" Feb 9 12:17:28.529749 env[1559]: time="2024-02-09T12:17:28.529497805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n25pv,Uid:2df7567b-649e-4732-9482-5061f452fb02,Namespace:kube-system,Attempt:0,}" Feb 9 12:17:28.544112 env[1559]: time="2024-02-09T12:17:28.544085568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 12:17:28.544112 env[1559]: time="2024-02-09T12:17:28.544105151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 12:17:28.544181 env[1559]: time="2024-02-09T12:17:28.544112091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 12:17:28.544204 env[1559]: time="2024-02-09T12:17:28.544173049Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62 pid=7287 runtime=io.containerd.runc.v2 Feb 9 12:17:28.589785 env[1559]: time="2024-02-09T12:17:28.589729624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n25pv,Uid:2df7567b-649e-4732-9482-5061f452fb02,Namespace:kube-system,Attempt:0,} returns sandbox id \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\"" Feb 9 12:17:28.592274 env[1559]: time="2024-02-09T12:17:28.592212387Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 12:17:28.599854 env[1559]: time="2024-02-09T12:17:28.599775329Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a219e41c7a2c7c4bcec071dc32dec67546fbc3c0e4183a07d41b3d32116ad77\"" Feb 9 12:17:28.600454 env[1559]: time="2024-02-09T12:17:28.600369275Z" level=info msg="StartContainer for \"5a219e41c7a2c7c4bcec071dc32dec67546fbc3c0e4183a07d41b3d32116ad77\"" Feb 9 12:17:28.719271 env[1559]: time="2024-02-09T12:17:28.719151051Z" level=info msg="StartContainer for \"5a219e41c7a2c7c4bcec071dc32dec67546fbc3c0e4183a07d41b3d32116ad77\" returns successfully" Feb 9 12:17:28.805761 env[1559]: time="2024-02-09T12:17:28.805533139Z" level=info msg="shim disconnected" id=5a219e41c7a2c7c4bcec071dc32dec67546fbc3c0e4183a07d41b3d32116ad77 Feb 9 12:17:28.805761 env[1559]: time="2024-02-09T12:17:28.805652916Z" level=warning msg="cleaning up after shim disconnected" id=5a219e41c7a2c7c4bcec071dc32dec67546fbc3c0e4183a07d41b3d32116ad77 namespace=k8s.io Feb 9 12:17:28.805761 env[1559]: time="2024-02-09T12:17:28.805684237Z" level=info msg="cleaning up dead shim" Feb 9 12:17:28.836007 env[1559]: time="2024-02-09T12:17:28.835915137Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7368 runtime=io.containerd.runc.v2\n" Feb 9 12:17:29.212451 env[1559]: time="2024-02-09T12:17:29.212424270Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 12:17:29.217288 env[1559]: time="2024-02-09T12:17:29.217224022Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"39782e53d5e5526366fcbee01e155530431c7224e5e845c68a0005416420e504\"" Feb 9 12:17:29.217591 env[1559]: time="2024-02-09T12:17:29.217539190Z" level=info msg="StartContainer for \"39782e53d5e5526366fcbee01e155530431c7224e5e845c68a0005416420e504\"" Feb 9 12:17:29.250300 env[1559]: time="2024-02-09T12:17:29.250272061Z" level=info msg="StartContainer for \"39782e53d5e5526366fcbee01e155530431c7224e5e845c68a0005416420e504\" returns successfully" Feb 9 12:17:29.265454 sshd[7274]: Invalid user rguerin from 45.64.3.61 port 50768 Feb 9 12:17:29.267461 sshd[7274]: pam_faillock(sshd:auth): User unknown Feb 9 12:17:29.267788 sshd[7274]: pam_unix(sshd:auth): check pass; user unknown Feb 
9 12:17:29.267818 sshd[7274]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61 Feb 9 12:17:29.268127 sshd[7274]: pam_faillock(sshd:auth): User unknown Feb 9 12:17:29.296110 env[1559]: time="2024-02-09T12:17:29.295998535Z" level=info msg="shim disconnected" id=39782e53d5e5526366fcbee01e155530431c7224e5e845c68a0005416420e504 Feb 9 12:17:29.296495 env[1559]: time="2024-02-09T12:17:29.296117583Z" level=warning msg="cleaning up after shim disconnected" id=39782e53d5e5526366fcbee01e155530431c7224e5e845c68a0005416420e504 namespace=k8s.io Feb 9 12:17:29.296495 env[1559]: time="2024-02-09T12:17:29.296148191Z" level=info msg="cleaning up dead shim" Feb 9 12:17:29.325797 env[1559]: time="2024-02-09T12:17:29.325708243Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7429 runtime=io.containerd.runc.v2\n" Feb 9 12:17:30.130071 kubelet[2733]: I0209 12:17:30.130055 2733 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=78fc9a09-08af-4ab0-9de4-fbb86552dbd6 path="/var/lib/kubelet/pods/78fc9a09-08af-4ab0-9de4-fbb86552dbd6/volumes" Feb 9 12:17:30.217639 env[1559]: time="2024-02-09T12:17:30.217615153Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 12:17:30.223023 env[1559]: time="2024-02-09T12:17:30.222999051Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de5f1caabd7e4329e926375461ae55ea461ecfcc44839bb6215ceb99fc6c134d\"" Feb 9 12:17:30.223379 env[1559]: time="2024-02-09T12:17:30.223325664Z" level=info msg="StartContainer for \"de5f1caabd7e4329e926375461ae55ea461ecfcc44839bb6215ceb99fc6c134d\"" Feb 9 12:17:30.249094 env[1559]: time="2024-02-09T12:17:30.249070507Z" level=info msg="StartContainer for \"de5f1caabd7e4329e926375461ae55ea461ecfcc44839bb6215ceb99fc6c134d\" returns successfully" Feb 9 12:17:30.293273 env[1559]: time="2024-02-09T12:17:30.293140846Z" level=info msg="shim disconnected" id=de5f1caabd7e4329e926375461ae55ea461ecfcc44839bb6215ceb99fc6c134d Feb 9 12:17:30.293671 env[1559]: time="2024-02-09T12:17:30.293269826Z" level=warning msg="cleaning up after shim disconnected" id=de5f1caabd7e4329e926375461ae55ea461ecfcc44839bb6215ceb99fc6c134d namespace=k8s.io Feb 9 12:17:30.293671 env[1559]: time="2024-02-09T12:17:30.293300255Z" level=info msg="cleaning up dead shim" Feb 9 12:17:30.322705 env[1559]: time="2024-02-09T12:17:30.322571591Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7485 runtime=io.containerd.runc.v2\n" Feb 9 12:17:30.362644 kubelet[2733]: E0209 12:17:30.362556 2733 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 12:17:30.417256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de5f1caabd7e4329e926375461ae55ea461ecfcc44839bb6215ceb99fc6c134d-rootfs.mount: Deactivated successfully. 
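The RunPodSandbox → CreateContainer → StartContainer progression above (first for cilium-gk4ck, then for cilium-n25pv) is the standard CRI call sequence the kubelet drives through containerd for every pod. A condensed sketch of those three calls against the CRI v1 runtime service follows; the socket path, image and command are illustrative assumptions, and a real run would also need the image pulled first.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket path
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// 1. RunPodSandbox — mirrors 'RunPodSandbox for &PodSandboxMetadata{Name:cilium-n25pv,...}'.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-n25pv",
			Uid:       "2df7567b-649e-4732-9482-5061f452fb02",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatalf("RunPodSandbox: %v", err)
	}

	// 2. CreateContainer — mirrors 'CreateContainer within sandbox ... for container mount-cgroup'.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12"}, // assumed image
			Command:  []string{"/bin/sh", "-c", "echo mount-cgroup step"},         // placeholder command
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatalf("CreateContainer: %v", err)
	}

	// 3. StartContainer — mirrors 'StartContainer for "..." returns successfully'.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatalf("StartContainer: %v", err)
	}
	log.Printf("started container %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}
```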
Feb 9 12:17:31.229894 env[1559]: time="2024-02-09T12:17:31.229782854Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 12:17:31.242990 env[1559]: time="2024-02-09T12:17:31.242948921Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a388f05451f6d7dcabf7ad25072770685769769b2934561928a99d75e6ecc8f8\"" Feb 9 12:17:31.243268 env[1559]: time="2024-02-09T12:17:31.243252245Z" level=info msg="StartContainer for \"a388f05451f6d7dcabf7ad25072770685769769b2934561928a99d75e6ecc8f8\"" Feb 9 12:17:31.253783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593341039.mount: Deactivated successfully. Feb 9 12:17:31.285571 env[1559]: time="2024-02-09T12:17:31.285546721Z" level=info msg="StartContainer for \"a388f05451f6d7dcabf7ad25072770685769769b2934561928a99d75e6ecc8f8\" returns successfully" Feb 9 12:17:31.318454 env[1559]: time="2024-02-09T12:17:31.318425400Z" level=info msg="shim disconnected" id=a388f05451f6d7dcabf7ad25072770685769769b2934561928a99d75e6ecc8f8 Feb 9 12:17:31.318454 env[1559]: time="2024-02-09T12:17:31.318453488Z" level=warning msg="cleaning up after shim disconnected" id=a388f05451f6d7dcabf7ad25072770685769769b2934561928a99d75e6ecc8f8 namespace=k8s.io Feb 9 12:17:31.318612 env[1559]: time="2024-02-09T12:17:31.318462641Z" level=info msg="cleaning up dead shim" Feb 9 12:17:31.334392 env[1559]: time="2024-02-09T12:17:31.334338136Z" level=warning msg="cleanup warnings time=\"2024-02-09T12:17:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7541 runtime=io.containerd.runc.v2\n" Feb 9 12:17:31.417705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a388f05451f6d7dcabf7ad25072770685769769b2934561928a99d75e6ecc8f8-rootfs.mount: Deactivated successfully. Feb 9 12:17:31.992478 sshd[7274]: Failed password for invalid user rguerin from 45.64.3.61 port 50768 ssh2 Feb 9 12:17:32.235130 env[1559]: time="2024-02-09T12:17:32.235083651Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 12:17:32.240330 env[1559]: time="2024-02-09T12:17:32.240281262Z" level=info msg="CreateContainer within sandbox \"943d398f54749e462d0a0da710ef5bd985031aa77f44c9d7501d66467b191c62\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"01f15c3b8cdbcca19a89d7faf8c7cfc5f631327242fca971e9a2f1dac1807120\"" Feb 9 12:17:32.240605 env[1559]: time="2024-02-09T12:17:32.240587930Z" level=info msg="StartContainer for \"01f15c3b8cdbcca19a89d7faf8c7cfc5f631327242fca971e9a2f1dac1807120\"" Feb 9 12:17:32.264087 env[1559]: time="2024-02-09T12:17:32.264026270Z" level=info msg="StartContainer for \"01f15c3b8cdbcca19a89d7faf8c7cfc5f631327242fca971e9a2f1dac1807120\" returns successfully" Feb 9 12:17:32.413212 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 12:17:32.491462 sshd[7274]: Received disconnect from 45.64.3.61 port 50768:11: Bye Bye [preauth] Feb 9 12:17:32.491462 sshd[7274]: Disconnected from invalid user rguerin 45.64.3.61 port 50768 [preauth] Feb 9 12:17:32.492096 systemd[1]: sshd@160-139.178.89.23:22-45.64.3.61:50768.service: Deactivated successfully. 
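The container names created one after another above — mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and finally cilium-agent — follow the usual Cilium pod layout: the first four are init containers, which Kubernetes runs strictly one at a time, which is why each shim exits ("shim disconnected") before the next CreateContainer appears. A sketch of that ordering with the Kubernetes Go types; the image reference is an assumption.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Names and order are taken from the CreateContainer / StartContainer lines above.
	const image = "quay.io/cilium/cilium:v1.12" // assumed image

	spec := corev1.PodSpec{
		InitContainers: []corev1.Container{
			{Name: "mount-cgroup", Image: image},
			{Name: "apply-sysctl-overwrites", Image: image},
			{Name: "mount-bpf-fs", Image: image},
			{Name: "clean-cilium-state", Image: image},
		},
		Containers: []corev1.Container{
			{Name: "cilium-agent", Image: image}, // long-running container, started last
		},
	}

	for _, c := range spec.InitContainers {
		fmt.Println("init:", c.Name)
	}
	fmt.Println("main:", spec.Containers[0].Name)
}
```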
Feb 9 12:17:33.253505 kubelet[2733]: I0209 12:17:33.253449 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-n25pv" podStartSLOduration=5.253428767 pod.CreationTimestamp="2024-02-09 12:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 12:17:33.253159782 +0000 UTC m=+743.191675732" watchObservedRunningTime="2024-02-09 12:17:33.253428767 +0000 UTC m=+743.191944713" Feb 9 12:17:34.284317 kubelet[2733]: I0209 12:17:34.284196 2733 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-b58f4ff548" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 12:17:34.284086116 +0000 UTC m=+744.222602115 LastTransitionTime:2024-02-09 12:17:34.284086116 +0000 UTC m=+744.222602115 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 12:17:35.303472 systemd-networkd[1414]: lxc_health: Link UP Feb 9 12:17:35.328296 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 12:17:35.328338 systemd-networkd[1414]: lxc_health: Gained carrier Feb 9 12:17:37.158367 systemd-networkd[1414]: lxc_health: Gained IPv6LL Feb 9 12:17:49.879063 systemd[1]: Started sshd@161-139.178.89.23:22-198.12.118.109:19484.service. Feb 9 12:17:50.052389 sshd[8286]: Invalid user bsshon from 198.12.118.109 port 19484 Feb 9 12:17:50.058354 sshd[8286]: pam_faillock(sshd:auth): User unknown Feb 9 12:17:50.059527 sshd[8286]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:17:50.059617 sshd[8286]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.12.118.109 Feb 9 12:17:50.060545 sshd[8286]: pam_faillock(sshd:auth): User unknown Feb 9 12:17:51.334243 sshd[8286]: Failed password for invalid user bsshon from 198.12.118.109 port 19484 ssh2 Feb 9 12:17:52.186870 sshd[8286]: Received disconnect from 198.12.118.109 port 19484:11: Bye Bye [preauth] Feb 9 12:17:52.186870 sshd[8286]: Disconnected from invalid user bsshon 198.12.118.109 port 19484 [preauth] Feb 9 12:17:52.189385 systemd[1]: sshd@161-139.178.89.23:22-198.12.118.109:19484.service: Deactivated successfully. Feb 9 12:17:52.539218 systemd[1]: Started sshd@162-139.178.89.23:22-209.97.179.25:33562.service. Feb 9 12:17:53.354337 sshd[8290]: Invalid user elika from 209.97.179.25 port 33562 Feb 9 12:17:53.360601 sshd[8290]: pam_faillock(sshd:auth): User unknown Feb 9 12:17:53.361783 sshd[8290]: pam_unix(sshd:auth): check pass; user unknown Feb 9 12:17:53.361874 sshd[8290]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=209.97.179.25 Feb 9 12:17:53.362832 sshd[8290]: pam_faillock(sshd:auth): User unknown Feb 9 12:17:53.543154 systemd[1]: Started sshd@163-139.178.89.23:22-180.107.140.47:50820.service. Feb 9 12:17:55.716362 sshd[8290]: Failed password for invalid user elika from 209.97.179.25 port 33562 ssh2 Feb 9 12:17:56.870468 systemd[1]: Started sshd@164-139.178.89.23:22-39.109.116.167:50393.service. Feb 9 12:17:57.014956 sshd[8290]: Received disconnect from 209.97.179.25 port 33562:11: Bye Bye [preauth] Feb 9 12:17:57.014956 sshd[8290]: Disconnected from invalid user elika 209.97.179.25 port 33562 [preauth] Feb 9 12:17:57.017494 systemd[1]: sshd@162-139.178.89.23:22-209.97.179.25:33562.service: Deactivated successfully. 
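The repeated "Invalid user ... from ..." / pam_faillock / "Failed password" sequences above are unauthenticated SSH probes against the node's public address. If one wanted to pull the attempted usernames and source addresses out of such lines, a small pattern match is enough; the sketch below uses example lines copied from this log.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Matches the 'Invalid user NAME from ADDR port PORT' sshd lines above.
	re := regexp.MustCompile(`Invalid user (\S+) from (\S+) port (\d+)`)

	lines := []string{
		"sshd[7274]: Invalid user rguerin from 45.64.3.61 port 50768",
		"sshd[8286]: Invalid user bsshon from 198.12.118.109 port 19484",
		"sshd[8290]: Invalid user elika from 209.97.179.25 port 33562",
	}

	for _, l := range lines {
		if m := re.FindStringSubmatch(l); m != nil {
			fmt.Printf("probe: user=%s addr=%s port=%s\n", m[1], m[2], m[3])
		}
	}
}
```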
Feb 9 12:17:57.820802 sshd[8295]: Invalid user zhxiao from 39.109.116.167 port 50393
Feb 9 12:17:57.826921 sshd[8295]: pam_faillock(sshd:auth): User unknown
Feb 9 12:17:57.827921 sshd[8295]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 12:17:57.828011 sshd[8295]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.109.116.167
Feb 9 12:17:57.829066 sshd[8295]: pam_faillock(sshd:auth): User unknown
Feb 9 12:17:59.730839 sshd[8295]: Failed password for invalid user zhxiao from 39.109.116.167 port 50393 ssh2
Feb 9 12:18:00.808845 sshd[8295]: Received disconnect from 39.109.116.167 port 50393:11: Bye Bye [preauth]
Feb 9 12:18:00.808845 sshd[8295]: Disconnected from invalid user zhxiao 39.109.116.167 port 50393 [preauth]
Feb 9 12:18:00.811315 systemd[1]: sshd@164-139.178.89.23:22-39.109.116.167:50393.service: Deactivated successfully.
Feb 9 12:18:10.145868 env[1559]: time="2024-02-09T12:18:10.145638538Z" level=info msg="StopPodSandbox for \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\""
Feb 9 12:18:10.146971 env[1559]: time="2024-02-09T12:18:10.145851140Z" level=info msg="TearDown network for sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" successfully"
Feb 9 12:18:10.146971 env[1559]: time="2024-02-09T12:18:10.145947174Z" level=info msg="StopPodSandbox for \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" returns successfully"
Feb 9 12:18:10.146971 env[1559]: time="2024-02-09T12:18:10.146856076Z" level=info msg="RemovePodSandbox for \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\""
Feb 9 12:18:10.147412 env[1559]: time="2024-02-09T12:18:10.146932211Z" level=info msg="Forcibly stopping sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\""
Feb 9 12:18:10.147412 env[1559]: time="2024-02-09T12:18:10.147118257Z" level=info msg="TearDown network for sandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" successfully"
Feb 9 12:18:10.152010 env[1559]: time="2024-02-09T12:18:10.151901025Z" level=info msg="RemovePodSandbox \"fcc3fb17c17e2c2949150957b6f75e768a7cb5291930bb1490658a9d08547c2d\" returns successfully"
Feb 9 12:18:10.152779 env[1559]: time="2024-02-09T12:18:10.152679676Z" level=info msg="StopPodSandbox for \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\""
Feb 9 12:18:10.152998 env[1559]: time="2024-02-09T12:18:10.152859890Z" level=info msg="TearDown network for sandbox \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\" successfully"
Feb 9 12:18:10.152998 env[1559]: time="2024-02-09T12:18:10.152946766Z" level=info msg="StopPodSandbox for \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\" returns successfully"
Feb 9 12:18:10.153778 env[1559]: time="2024-02-09T12:18:10.153644399Z" level=info msg="RemovePodSandbox for \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\""
Feb 9 12:18:10.153991 env[1559]: time="2024-02-09T12:18:10.153738875Z" level=info msg="Forcibly stopping sandbox \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\""
Feb 9 12:18:10.153991 env[1559]: time="2024-02-09T12:18:10.153930526Z" level=info msg="TearDown network for sandbox \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\" successfully"
Feb 9 12:18:10.158451 env[1559]: time="2024-02-09T12:18:10.158379314Z" level=info msg="RemovePodSandbox \"b463850eb116dcd955724c2fa14d52ce1ab4fec531299d68d0fd44894bc27ed0\" returns successfully"
Feb 9 12:18:10.159121 env[1559]: time="2024-02-09T12:18:10.159008117Z" level=info msg="StopPodSandbox for \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\""
Feb 9 12:18:10.159376 env[1559]: time="2024-02-09T12:18:10.159177249Z" level=info msg="TearDown network for sandbox \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\" successfully"
Feb 9 12:18:10.159376 env[1559]: time="2024-02-09T12:18:10.159309890Z" level=info msg="StopPodSandbox for \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\" returns successfully"
Feb 9 12:18:10.160012 env[1559]: time="2024-02-09T12:18:10.159904613Z" level=info msg="RemovePodSandbox for \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\""
Feb 9 12:18:10.160261 env[1559]: time="2024-02-09T12:18:10.159980286Z" level=info msg="Forcibly stopping sandbox \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\""
Feb 9 12:18:10.160261 env[1559]: time="2024-02-09T12:18:10.160148239Z" level=info msg="TearDown network for sandbox \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\" successfully"
Feb 9 12:18:10.164178 env[1559]: time="2024-02-09T12:18:10.164058607Z" level=info msg="RemovePodSandbox \"116fa5900429bb951f39e80746109e428035a27c77fd7ee12d36735475fd89d0\" returns successfully"
Feb 9 12:18:20.669020 systemd[1]: Started sshd@165-139.178.89.23:22-119.91.207.218:51730.service.
Feb 9 12:18:37.177287 systemd[1]: Started sshd@166-139.178.89.23:22-45.64.3.61:41320.service.
Feb 9 12:18:38.241789 sshd[8336]: Invalid user bamdad from 45.64.3.61 port 41320
Feb 9 12:18:38.247930 sshd[8336]: pam_faillock(sshd:auth): User unknown
Feb 9 12:18:38.248739 sshd[8336]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 12:18:38.248756 sshd[8336]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.64.3.61
Feb 9 12:18:38.248935 sshd[8336]: pam_faillock(sshd:auth): User unknown
Feb 9 12:18:38.564477 sshd[7113]: pam_unix(sshd:session): session closed for user core
Feb 9 12:18:38.570523 systemd[1]: sshd@159-139.178.89.23:22-147.75.109.163:50746.service: Deactivated successfully.
Feb 9 12:18:38.573335 systemd-logind[1545]: Session 91 logged out. Waiting for processes to exit.
Feb 9 12:18:38.573499 systemd[1]: session-91.scope: Deactivated successfully.
Feb 9 12:18:38.575814 systemd-logind[1545]: Removed session 91.
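The 12:18:10 block looks like the kubelet's periodic sandbox garbage collection: for each of the three stale pod sandboxes, containerd logs StopPodSandbox, a network TearDown, and then a forced RemovePodSandbox that reports success. As a small illustrative check (the journal.txt path and the heuristic regex over containerd's messages are assumptions), the following Python verifies that every sandbox stopped in a log like this was also removed:

#!/usr/bin/env python3
# Illustrative sketch: confirm that each sandbox with a StopPodSandbox entry also has a
# "RemovePodSandbox ... returns successfully" entry in the same journal excerpt.
import re
import sys

SANDBOX_EVENT = re.compile(
    r'msg="(StopPodSandbox|TearDown network|RemovePodSandbox|Forcibly stopping sandbox)'
    r'[^"]*?\\"([0-9a-f]{64})\\"(.*?)"'
)

def audit(path):
    stopped, removed = set(), set()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for verb, sid, rest in SANDBOX_EVENT.findall(line):
                if verb == "StopPodSandbox":
                    stopped.add(sid)
                if verb == "RemovePodSandbox" and "returns successfully" in rest:
                    removed.add(sid)
    return stopped, removed

if __name__ == "__main__":
    stopped, removed = audit(sys.argv[1] if len(sys.argv) > 1 else "journal.txt")
    for sid in sorted(stopped):
        state = "removed" if sid in removed else "still present?"
        print(f"{sid[:12]}  {state}")

For the three sandbox IDs above (fcc3fb17..., b463850e..., 116fa590...) it would report all of them as removed.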