Feb 9 07:14:01.550419 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 07:14:01.550432 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 07:14:01.550439 kernel: BIOS-provided physical RAM map:
Feb 9 07:14:01.550442 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 9 07:14:01.550446 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 9 07:14:01.550450 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 9 07:14:01.550454 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 9 07:14:01.550458 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 9 07:14:01.550462 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000820dcfff] usable
Feb 9 07:14:01.550465 kernel: BIOS-e820: [mem 0x00000000820dd000-0x00000000820ddfff] ACPI NVS
Feb 9 07:14:01.550470 kernel: BIOS-e820: [mem 0x00000000820de000-0x00000000820defff] reserved
Feb 9 07:14:01.550474 kernel: BIOS-e820: [mem 0x00000000820df000-0x000000008afccfff] usable
Feb 9 07:14:01.550477 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 9 07:14:01.550484 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 9 07:14:01.550489 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 9 07:14:01.550494 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 9 07:14:01.550498 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 9 07:14:01.550502 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 9 07:14:01.550506 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 9 07:14:01.550510 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 9 07:14:01.550514 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 9 07:14:01.550518 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 07:14:01.550523 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 9 07:14:01.550527 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 9 07:14:01.550531 kernel: NX (Execute Disable) protection: active
Feb 9 07:14:01.550535 kernel: SMBIOS 3.2.1 present.
Feb 9 07:14:01.550540 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022
Feb 9 07:14:01.550544 kernel: tsc: Detected 3400.000 MHz processor
Feb 9 07:14:01.550548 kernel: tsc: Detected 3399.906 MHz TSC
Feb 9 07:14:01.550553 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 07:14:01.550557 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 07:14:01.550562 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 9 07:14:01.550566 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 07:14:01.550570 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 9 07:14:01.550574 kernel: Using GB pages for direct mapping
Feb 9 07:14:01.550579 kernel: ACPI: Early table checksum verification disabled
Feb 9 07:14:01.550584 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 9 07:14:01.550588 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 9 07:14:01.550592 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 9 07:14:01.550597 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 9 07:14:01.550603 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 9 07:14:01.550607 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 9 07:14:01.550613 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 9 07:14:01.550617 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 9 07:14:01.550622 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 9 07:14:01.550626 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 9 07:14:01.550631 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 9 07:14:01.550636 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 9 07:14:01.550640 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 9 07:14:01.550645 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:14:01.550650 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 9 07:14:01.550655 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 9 07:14:01.550659 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:14:01.550664 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:14:01.550669 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 9 07:14:01.550673 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 9 07:14:01.550678 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:14:01.550682 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:14:01.550688 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 9 07:14:01.550693 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 9 07:14:01.550697 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 9 07:14:01.550702 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 9 07:14:01.550706 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 9 07:14:01.550711 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 9 07:14:01.550716 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 9 07:14:01.550720 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 9 07:14:01.550725 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 9 07:14:01.550730 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 9 07:14:01.550735 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 9 07:14:01.550739 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 9 07:14:01.550744 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 9 07:14:01.550749 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 9 07:14:01.550753 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 9 07:14:01.550758 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 9 07:14:01.550762 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 9 07:14:01.550767 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 9 07:14:01.550772 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 9 07:14:01.550777 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 9 07:14:01.550781 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 9 07:14:01.550786 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 9 07:14:01.550790 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 9 07:14:01.550795 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 9 07:14:01.550799 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 9 07:14:01.550804 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 9 07:14:01.550809 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 9 07:14:01.550814 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 9 07:14:01.550818 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 9 07:14:01.550823 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 9 07:14:01.550827 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 9 07:14:01.550832 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 9 07:14:01.550837 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 9 07:14:01.550841 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 9 07:14:01.550846 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 9 07:14:01.550851 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 9 07:14:01.550855 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 9 07:14:01.550860 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 9 07:14:01.550865 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 9 07:14:01.550869 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 9 07:14:01.550874 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 9 07:14:01.550879 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 9 07:14:01.550883 kernel: No NUMA configuration found
Feb 9 07:14:01.550888 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 9 07:14:01.550893 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 9 07:14:01.550898 kernel: Zone ranges:
Feb 9 07:14:01.550903 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 07:14:01.550907 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 07:14:01.550912 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Feb 9 07:14:01.550916 kernel: Movable zone start for each node
Feb 9 07:14:01.550921 kernel: Early memory node ranges
Feb 9 07:14:01.550926 kernel:   node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 9 07:14:01.550930 kernel:   node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 9 07:14:01.550935 kernel:   node 0: [mem 0x0000000040400000-0x00000000820dcfff]
Feb 9 07:14:01.550940 kernel:   node 0: [mem 0x00000000820df000-0x000000008afccfff]
Feb 9 07:14:01.550945 kernel:   node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 9 07:14:01.550949 kernel:   node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 9 07:14:01.550954 kernel:   node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 9 07:14:01.550958 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 9 07:14:01.550963 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 07:14:01.550971 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 9 07:14:01.550977 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 9 07:14:01.550981 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 9 07:14:01.550986 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 9 07:14:01.550992 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 9 07:14:01.550997 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 9 07:14:01.551002 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 9 07:14:01.551007 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 9 07:14:01.551012 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 07:14:01.551017 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 07:14:01.551022 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 07:14:01.551028 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 07:14:01.551033 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 07:14:01.551038 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 07:14:01.551042 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 07:14:01.551047 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 07:14:01.551052 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 07:14:01.551057 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 07:14:01.551062 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 07:14:01.551067 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 07:14:01.551073 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 07:14:01.551077 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 07:14:01.551082 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 07:14:01.551087 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 07:14:01.551092 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 9 07:14:01.551097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 07:14:01.551102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 07:14:01.551107 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 07:14:01.551112 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 07:14:01.551117 kernel: TSC deadline timer available
Feb 9 07:14:01.551122 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 9 07:14:01.551127 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 9 07:14:01.551132 kernel: Booting paravirtualized kernel on bare hardware
Feb 9 07:14:01.551137 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 07:14:01.551142 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 9 07:14:01.551147 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 07:14:01.551152 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 07:14:01.551157 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 9 07:14:01.551163 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 9 07:14:01.551167 kernel: Policy zone: Normal
Feb 9 07:14:01.551173 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 07:14:01.551178 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 07:14:01.551183 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 9 07:14:01.551188 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 9 07:14:01.551193 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 07:14:01.551198 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 9 07:14:01.551204 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 9 07:14:01.551209 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 07:14:01.551214 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 07:14:01.551219 kernel: rcu: Hierarchical RCU implementation.
Feb 9 07:14:01.551224 kernel: rcu: RCU event tracing is enabled.
Feb 9 07:14:01.551229 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 9 07:14:01.551234 kernel: Rude variant of Tasks RCU enabled.
Feb 9 07:14:01.551239 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 07:14:01.551244 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 07:14:01.551250 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 9 07:14:01.551255 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 9 07:14:01.551260 kernel: random: crng init done
Feb 9 07:14:01.551265 kernel: Console: colour dummy device 80x25
Feb 9 07:14:01.551270 kernel: printk: console [tty0] enabled
Feb 9 07:14:01.551275 kernel: printk: console [ttyS1] enabled
Feb 9 07:14:01.551280 kernel: ACPI: Core revision 20210730
Feb 9 07:14:01.551285 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 9 07:14:01.551290 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 07:14:01.551295 kernel: DMAR: Host address width 39
Feb 9 07:14:01.551300 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 9 07:14:01.551305 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 9 07:14:01.551310 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 9 07:14:01.551315 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 9 07:14:01.551320 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 9 07:14:01.551325 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 9 07:14:01.551330 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 9 07:14:01.551335 kernel: x2apic enabled
Feb 9 07:14:01.551340 kernel: Switched APIC routing to cluster x2apic.
Feb 9 07:14:01.551345 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 9 07:14:01.551350 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 9 07:14:01.551355 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 9 07:14:01.551360 kernel: process: using mwait in idle threads
Feb 9 07:14:01.551365 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 07:14:01.551370 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 07:14:01.551375 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 07:14:01.551380 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 07:14:01.551386 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 07:14:01.551391 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 07:14:01.551395 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 07:14:01.551400 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 07:14:01.551405 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 07:14:01.551410 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 07:14:01.551415 kernel: TAA: Mitigation: TSX disabled
Feb 9 07:14:01.551420 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 9 07:14:01.551424 kernel: SRBDS: Mitigation: Microcode
Feb 9 07:14:01.551429 kernel: GDS: Vulnerable: No microcode
Feb 9 07:14:01.551434 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 07:14:01.551440 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 07:14:01.551445 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 07:14:01.551450 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 07:14:01.551455 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 07:14:01.551460 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 07:14:01.551464 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 07:14:01.551469 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 07:14:01.551474 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 9 07:14:01.551481 kernel: Freeing SMP alternatives memory: 32K
Feb 9 07:14:01.551486 kernel: pid_max: default: 32768 minimum: 301
Feb 9 07:14:01.551491 kernel: LSM: Security Framework initializing
Feb 9 07:14:01.551496 kernel: SELinux: Initializing.
Feb 9 07:14:01.551501 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 07:14:01.551506 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 07:14:01.551511 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 9 07:14:01.551516 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 07:14:01.551521 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 9 07:14:01.551526 kernel: ... version:                4
Feb 9 07:14:01.551531 kernel: ... bit width:              48
Feb 9 07:14:01.551536 kernel: ... generic registers:      4
Feb 9 07:14:01.551541 kernel: ... value mask:             0000ffffffffffff
Feb 9 07:14:01.551546 kernel: ... max period:             00007fffffffffff
Feb 9 07:14:01.551552 kernel: ... fixed-purpose events:   3
Feb 9 07:14:01.551556 kernel: ... event mask:             000000070000000f
Feb 9 07:14:01.551561 kernel: signal: max sigframe size: 2032
Feb 9 07:14:01.551566 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 07:14:01.551571 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 9 07:14:01.551576 kernel: smp: Bringing up secondary CPUs ...
Feb 9 07:14:01.551581 kernel: x86: Booting SMP configuration:
Feb 9 07:14:01.551586 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 9 07:14:01.551591 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 07:14:01.551597 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 9 07:14:01.551602 kernel: smp: Brought up 1 node, 16 CPUs
Feb 9 07:14:01.551607 kernel: smpboot: Max logical packages: 1
Feb 9 07:14:01.551612 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 9 07:14:01.551617 kernel: devtmpfs: initialized
Feb 9 07:14:01.551622 kernel: x86/mm: Memory block size: 128MB
Feb 9 07:14:01.551627 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x820dd000-0x820ddfff] (4096 bytes)
Feb 9 07:14:01.551632 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 9 07:14:01.551638 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 07:14:01.551643 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 9 07:14:01.551648 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 07:14:01.551653 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 07:14:01.551658 kernel: audit: initializing netlink subsys (disabled)
Feb 9 07:14:01.551662 kernel: audit: type=2000 audit(1707462835.040:1): state=initialized audit_enabled=0 res=1
Feb 9 07:14:01.551667 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 07:14:01.551672 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 07:14:01.551677 kernel: cpuidle: using governor menu
Feb 9 07:14:01.551683 kernel: ACPI: bus type PCI registered
Feb 9 07:14:01.551688 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 07:14:01.551693 kernel: dca service started, version 1.12.1
Feb 9 07:14:01.551698 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 9 07:14:01.551703 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 9 07:14:01.551708 kernel: PCI: Using configuration type 1 for base access
Feb 9 07:14:01.551713 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 9 07:14:01.551717 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 07:14:01.551722 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 07:14:01.551728 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 07:14:01.551733 kernel: ACPI: Added _OSI(Module Device)
Feb 9 07:14:01.551738 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 07:14:01.551743 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 07:14:01.551748 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 07:14:01.551753 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 07:14:01.551757 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 07:14:01.551762 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 07:14:01.551767 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 9 07:14:01.551773 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:14:01.551778 kernel: ACPI: SSDT 0xFFFF8DC180213C00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 9 07:14:01.551783 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 9 07:14:01.551788 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:14:01.551793 kernel: ACPI: SSDT 0xFFFF8DC181AE6000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 9 07:14:01.551798 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:14:01.551803 kernel: ACPI: SSDT 0xFFFF8DC181A5C000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 9 07:14:01.551808 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:14:01.551812 kernel: ACPI: SSDT 0xFFFF8DC181A5E000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 9 07:14:01.551817 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:14:01.551823 kernel: ACPI: SSDT 0xFFFF8DC18014F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 9 07:14:01.551828 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:14:01.551833 kernel: ACPI: SSDT 0xFFFF8DC181AE2C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 9 07:14:01.551838 kernel: ACPI: Interpreter enabled
Feb 9 07:14:01.551843 kernel: ACPI: PM: (supports S0 S5)
Feb 9 07:14:01.551847 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 07:14:01.551852 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 9 07:14:01.551857 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 9 07:14:01.551862 kernel: HEST: Table parsing has been initialized.
Feb 9 07:14:01.551868 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 9 07:14:01.551873 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 07:14:01.551878 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 9 07:14:01.551883 kernel: ACPI: PM: Power Resource [USBC]
Feb 9 07:14:01.551888 kernel: ACPI: PM: Power Resource [V0PR]
Feb 9 07:14:01.551892 kernel: ACPI: PM: Power Resource [V1PR]
Feb 9 07:14:01.551897 kernel: ACPI: PM: Power Resource [V2PR]
Feb 9 07:14:01.551902 kernel: ACPI: PM: Power Resource [WRST]
Feb 9 07:14:01.551907 kernel: ACPI: PM: Power Resource [FN00]
Feb 9 07:14:01.551913 kernel: ACPI: PM: Power Resource [FN01]
Feb 9 07:14:01.551918 kernel: ACPI: PM: Power Resource [FN02]
Feb 9 07:14:01.551922 kernel: ACPI: PM: Power Resource [FN03]
Feb 9 07:14:01.551927 kernel: ACPI: PM: Power Resource [FN04]
Feb 9 07:14:01.551932 kernel: ACPI: PM: Power Resource [PIN]
Feb 9 07:14:01.551937 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 9 07:14:01.552002 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 07:14:01.552047 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 9 07:14:01.552090 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 9 07:14:01.552097 kernel: PCI host bridge to bus 0000:00
Feb 9 07:14:01.552140 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 07:14:01.552178 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 07:14:01.552214 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 07:14:01.552251 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 9 07:14:01.552287 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 9 07:14:01.552324 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 9 07:14:01.552375 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 9 07:14:01.552425 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 9 07:14:01.552467 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 9 07:14:01.552515 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 9 07:14:01.552558 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 9 07:14:01.552605 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 9 07:14:01.552648 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 9 07:14:01.552694 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 9 07:14:01.552736 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 9 07:14:01.552779 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 9 07:14:01.552825 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 9 07:14:01.552868 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 9 07:14:01.552909 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 9 07:14:01.552954 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 9 07:14:01.552996 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 07:14:01.553042 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 9 07:14:01.553083 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 07:14:01.553129 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 9 07:14:01.553171 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 9 07:14:01.553212 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 9 07:14:01.553256 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 9 07:14:01.553297 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 9 07:14:01.553338 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 9 07:14:01.553382 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 9 07:14:01.553425 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 9 07:14:01.553465 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 9 07:14:01.553512 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 9 07:14:01.553553 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 9 07:14:01.553594 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 9 07:14:01.553634 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 9 07:14:01.553674 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 9 07:14:01.553722 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 9 07:14:01.553765 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 9 07:14:01.553806 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 9 07:14:01.553852 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 9 07:14:01.553893 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 9 07:14:01.553940 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 9 07:14:01.553981 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 9 07:14:01.554046 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 9 07:14:01.554087 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 9 07:14:01.554132 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 9 07:14:01.554172 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 9 07:14:01.554217 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 9 07:14:01.554259 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 9 07:14:01.554303 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 9 07:14:01.554344 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 07:14:01.554390 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 9 07:14:01.554438 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 9 07:14:01.554478 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 9 07:14:01.554554 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 9 07:14:01.554599 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 9 07:14:01.554639 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 9 07:14:01.554687 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 9 07:14:01.554730 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 9 07:14:01.554775 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 9 07:14:01.554817 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 9 07:14:01.554860 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 07:14:01.554901 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 07:14:01.554949 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 9 07:14:01.554990 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 9 07:14:01.555035 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 9 07:14:01.555076 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 9 07:14:01.555118 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 07:14:01.555159 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 07:14:01.555201 kernel: pci 0000:00:01.0: PCI
bridge to [bus 01] Feb 9 07:14:01.555241 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 9 07:14:01.555282 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 07:14:01.555323 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 9 07:14:01.555370 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 9 07:14:01.555413 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 9 07:14:01.555454 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 9 07:14:01.555499 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 9 07:14:01.555541 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 9 07:14:01.555583 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 9 07:14:01.555623 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 07:14:01.555663 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 9 07:14:01.555712 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 9 07:14:01.555753 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 9 07:14:01.555795 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 9 07:14:01.555837 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 9 07:14:01.555879 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 9 07:14:01.555919 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 9 07:14:01.555960 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 07:14:01.556003 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 9 07:14:01.556044 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 9 07:14:01.556092 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 9 07:14:01.556168 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 9 07:14:01.556231 kernel: pci 0000:06:00.0: supports D1 D2 Feb 9 07:14:01.556274 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 07:14:01.556316 
kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 9 07:14:01.556356 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 9 07:14:01.556399 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 9 07:14:01.556445 kernel: pci_bus 0000:07: extended config space not accessible Feb 9 07:14:01.556513 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 9 07:14:01.556571 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 9 07:14:01.556615 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 9 07:14:01.556659 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 9 07:14:01.556704 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 07:14:01.556749 kernel: pci 0000:07:00.0: supports D1 D2 Feb 9 07:14:01.556794 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 07:14:01.556836 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 9 07:14:01.556878 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 9 07:14:01.556920 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 9 07:14:01.556927 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 9 07:14:01.556933 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 9 07:14:01.556938 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 9 07:14:01.556945 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 9 07:14:01.556950 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 9 07:14:01.556955 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 9 07:14:01.556960 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 9 07:14:01.556965 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 9 07:14:01.556971 kernel: iommu: Default domain type: Translated Feb 9 07:14:01.556976 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 07:14:01.557019 kernel: pci 0000:07:00.0: 
vgaarb: setting as boot VGA device Feb 9 07:14:01.557064 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 07:14:01.557108 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 9 07:14:01.557115 kernel: vgaarb: loaded Feb 9 07:14:01.557121 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 07:14:01.557126 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 07:14:01.557131 kernel: PTP clock support registered Feb 9 07:14:01.557136 kernel: PCI: Using ACPI for IRQ routing Feb 9 07:14:01.557142 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 07:14:01.557147 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 9 07:14:01.557153 kernel: e820: reserve RAM buffer [mem 0x820dd000-0x83ffffff] Feb 9 07:14:01.557158 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 9 07:14:01.557163 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 9 07:14:01.557168 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 9 07:14:01.557173 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 9 07:14:01.557178 kernel: clocksource: Switched to clocksource tsc-early Feb 9 07:14:01.557183 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 07:14:01.557188 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 07:14:01.557193 kernel: pnp: PnP ACPI init Feb 9 07:14:01.557236 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 9 07:14:01.557277 kernel: pnp 00:02: [dma 0 disabled] Feb 9 07:14:01.557317 kernel: pnp 00:03: [dma 0 disabled] Feb 9 07:14:01.557357 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 9 07:14:01.557395 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 9 07:14:01.557434 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 9 07:14:01.557476 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 9 07:14:01.557555 kernel: system 00:06: [mem 
0xfed18000-0xfed18fff] has been reserved Feb 9 07:14:01.557591 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 9 07:14:01.557628 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 9 07:14:01.557663 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 9 07:14:01.557699 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 9 07:14:01.557735 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 9 07:14:01.557774 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 9 07:14:01.557813 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 9 07:14:01.557850 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 9 07:14:01.557886 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 9 07:14:01.557922 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 9 07:14:01.557959 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 9 07:14:01.557995 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 9 07:14:01.558034 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 9 07:14:01.558073 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 9 07:14:01.558081 kernel: pnp: PnP ACPI: found 10 devices Feb 9 07:14:01.558086 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 07:14:01.558091 kernel: NET: Registered PF_INET protocol family Feb 9 07:14:01.558096 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 07:14:01.558102 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 07:14:01.558107 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 07:14:01.558113 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 07:14:01.558119 
kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 07:14:01.558124 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 9 07:14:01.558129 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 07:14:01.558134 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 07:14:01.558139 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 07:14:01.558144 kernel: NET: Registered PF_XDP protocol family Feb 9 07:14:01.558186 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 9 07:14:01.558228 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 9 07:14:01.558269 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 9 07:14:01.558311 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 07:14:01.558353 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 07:14:01.558396 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 07:14:01.558437 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 07:14:01.558478 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 07:14:01.558558 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 9 07:14:01.558602 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 07:14:01.558642 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 9 07:14:01.558682 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 9 07:14:01.558724 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 07:14:01.558765 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 9 07:14:01.558808 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 9 07:14:01.558848 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 07:14:01.558890 kernel: pci 0000:00:1b.5: bridge 
window [mem 0x95300000-0x953fffff] Feb 9 07:14:01.558931 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 9 07:14:01.558975 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 9 07:14:01.559017 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 9 07:14:01.559058 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 9 07:14:01.559099 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 9 07:14:01.559139 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 9 07:14:01.559181 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 9 07:14:01.559218 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 9 07:14:01.559255 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 07:14:01.559290 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 07:14:01.559326 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 07:14:01.559361 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 9 07:14:01.559396 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 9 07:14:01.559438 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 9 07:14:01.559478 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 07:14:01.559563 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 9 07:14:01.559601 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 9 07:14:01.559643 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 9 07:14:01.559681 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 9 07:14:01.559722 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 9 07:14:01.559763 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 9 07:14:01.559802 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 9 07:14:01.559843 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 9 
07:14:01.559850 kernel: PCI: CLS 64 bytes, default 64 Feb 9 07:14:01.559856 kernel: DMAR: No ATSR found Feb 9 07:14:01.559862 kernel: DMAR: No SATC found Feb 9 07:14:01.559867 kernel: DMAR: dmar0: Using Queued invalidation Feb 9 07:14:01.559908 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 9 07:14:01.559952 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 9 07:14:01.559993 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 9 07:14:01.560033 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 9 07:14:01.560074 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 9 07:14:01.560114 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 9 07:14:01.560155 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 9 07:14:01.560194 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 9 07:14:01.560235 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 9 07:14:01.560276 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 9 07:14:01.560318 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 9 07:14:01.560358 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 9 07:14:01.560399 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 9 07:14:01.560440 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 9 07:14:01.560482 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 9 07:14:01.560524 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 9 07:14:01.560564 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 9 07:14:01.560606 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 9 07:14:01.560646 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 9 07:14:01.560687 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 9 07:14:01.560727 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 9 07:14:01.560770 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 9 07:14:01.560812 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 9 07:14:01.560853 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 9 07:14:01.560896 kernel: pci 
0000:04:00.0: Adding to iommu group 16 Feb 9 07:14:01.560941 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 9 07:14:01.560985 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 9 07:14:01.560993 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 9 07:14:01.560998 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 07:14:01.561004 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 9 07:14:01.561009 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 9 07:14:01.561014 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 9 07:14:01.561019 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 9 07:14:01.561026 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 9 07:14:01.561069 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 9 07:14:01.561077 kernel: Initialise system trusted keyrings Feb 9 07:14:01.561082 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 9 07:14:01.561087 kernel: Key type asymmetric registered Feb 9 07:14:01.561092 kernel: Asymmetric key parser 'x509' registered Feb 9 07:14:01.561097 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 07:14:01.561103 kernel: io scheduler mq-deadline registered Feb 9 07:14:01.561109 kernel: io scheduler kyber registered Feb 9 07:14:01.561114 kernel: io scheduler bfq registered Feb 9 07:14:01.561155 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 9 07:14:01.561196 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 9 07:14:01.561238 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 9 07:14:01.561279 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 9 07:14:01.561321 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 9 07:14:01.561362 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 9 07:14:01.561409 kernel: thermal 
LNXTHERM:00: registered as thermal_zone0 Feb 9 07:14:01.561417 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 9 07:14:01.561422 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 9 07:14:01.561428 kernel: pstore: Registered erst as persistent store backend Feb 9 07:14:01.561433 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 07:14:01.561438 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 07:14:01.561443 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 07:14:01.561448 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 07:14:01.561455 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 9 07:14:01.561501 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 9 07:14:01.561509 kernel: i8042: PNP: No PS/2 controller found. Feb 9 07:14:01.561545 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 9 07:14:01.561583 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 9 07:14:01.561620 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T07:14:00 UTC (1707462840) Feb 9 07:14:01.561657 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 9 07:14:01.561664 kernel: fail to initialize ptp_kvm Feb 9 07:14:01.561671 kernel: intel_pstate: Intel P-state driver initializing Feb 9 07:14:01.561676 kernel: intel_pstate: Disabling energy efficiency optimization Feb 9 07:14:01.561681 kernel: intel_pstate: HWP enabled Feb 9 07:14:01.561686 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 9 07:14:01.561691 kernel: vesafb: scrolling: redraw Feb 9 07:14:01.561697 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 9 07:14:01.561702 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000008a0f18c3, using 768k, total 768k Feb 9 07:14:01.561707 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 07:14:01.561712 kernel: fb0: VESA VGA frame buffer device Feb 9 
07:14:01.561718 kernel: NET: Registered PF_INET6 protocol family Feb 9 07:14:01.561724 kernel: Segment Routing with IPv6 Feb 9 07:14:01.561729 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 07:14:01.561734 kernel: NET: Registered PF_PACKET protocol family Feb 9 07:14:01.561739 kernel: Key type dns_resolver registered Feb 9 07:14:01.561744 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 9 07:14:01.561749 kernel: microcode: Microcode Update Driver: v2.2. Feb 9 07:14:01.561754 kernel: IPI shorthand broadcast: enabled Feb 9 07:14:01.561760 kernel: sched_clock: Marking stable (1678477253, 1338924098)->(4436199799, -1418798448) Feb 9 07:14:01.561766 kernel: registered taskstats version 1 Feb 9 07:14:01.561771 kernel: Loading compiled-in X.509 certificates Feb 9 07:14:01.561776 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 07:14:01.561781 kernel: Key type .fscrypt registered Feb 9 07:14:01.561786 kernel: Key type fscrypt-provisioning registered Feb 9 07:14:01.561791 kernel: pstore: Using crash dump compression: deflate Feb 9 07:14:01.561797 kernel: ima: Allocated hash algorithm: sha1 Feb 9 07:14:01.561802 kernel: ima: No architecture policies found Feb 9 07:14:01.561807 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 07:14:01.561813 kernel: Write protecting the kernel read-only data: 28672k Feb 9 07:14:01.561818 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 07:14:01.561823 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 07:14:01.561829 kernel: Run /init as init process Feb 9 07:14:01.561834 kernel: with arguments: Feb 9 07:14:01.561839 kernel: /init Feb 9 07:14:01.561844 kernel: with environment: Feb 9 07:14:01.561849 kernel: HOME=/ Feb 9 07:14:01.561854 kernel: TERM=linux Feb 9 07:14:01.561860 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 07:14:01.561866 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT 
+SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 07:14:01.561873 systemd[1]: Detected architecture x86-64. Feb 9 07:14:01.561878 systemd[1]: Running in initrd. Feb 9 07:14:01.561884 systemd[1]: No hostname configured, using default hostname. Feb 9 07:14:01.561889 systemd[1]: Hostname set to . Feb 9 07:14:01.561894 systemd[1]: Initializing machine ID from random generator. Feb 9 07:14:01.561900 systemd[1]: Queued start job for default target initrd.target. Feb 9 07:14:01.561906 systemd[1]: Started systemd-ask-password-console.path. Feb 9 07:14:01.561911 systemd[1]: Reached target cryptsetup.target. Feb 9 07:14:01.561916 systemd[1]: Reached target paths.target. Feb 9 07:14:01.561921 systemd[1]: Reached target slices.target. Feb 9 07:14:01.561927 systemd[1]: Reached target swap.target. Feb 9 07:14:01.561932 systemd[1]: Reached target timers.target. Feb 9 07:14:01.561937 systemd[1]: Listening on iscsid.socket. Feb 9 07:14:01.561944 systemd[1]: Listening on iscsiuio.socket. Feb 9 07:14:01.561949 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 07:14:01.561954 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 07:14:01.561960 systemd[1]: Listening on systemd-journald.socket. Feb 9 07:14:01.561965 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Feb 9 07:14:01.561970 systemd[1]: Listening on systemd-networkd.socket. Feb 9 07:14:01.561976 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Feb 9 07:14:01.561981 kernel: clocksource: Switched to clocksource tsc Feb 9 07:14:01.561988 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 07:14:01.561993 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Feb 9 07:14:01.561998 systemd[1]: Reached target sockets.target. Feb 9 07:14:01.562004 systemd[1]: Starting kmod-static-nodes.service... Feb 9 07:14:01.562009 systemd[1]: Finished network-cleanup.service. Feb 9 07:14:01.562014 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 07:14:01.562020 systemd[1]: Starting systemd-journald.service... Feb 9 07:14:01.562025 systemd[1]: Starting systemd-modules-load.service... Feb 9 07:14:01.562034 systemd-journald[269]: Journal started Feb 9 07:14:01.562059 systemd-journald[269]: Runtime Journal (/run/log/journal/d1ed8769285948d19092087e12e82693) is 8.0M, max 640.1M, 632.1M free. Feb 9 07:14:01.565048 systemd-modules-load[270]: Inserted module 'overlay' Feb 9 07:14:01.623578 kernel: audit: type=1334 audit(1707462841.570:2): prog-id=6 op=LOAD Feb 9 07:14:01.623588 systemd[1]: Starting systemd-resolved.service... Feb 9 07:14:01.623597 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 07:14:01.570000 audit: BPF prog-id=6 op=LOAD Feb 9 07:14:01.657531 kernel: Bridge firewalling registered Feb 9 07:14:01.657545 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 07:14:01.672200 systemd-modules-load[270]: Inserted module 'br_netfilter' Feb 9 07:14:01.708574 systemd[1]: Started systemd-journald.service. Feb 9 07:14:01.708586 kernel: SCSI subsystem initialized Feb 9 07:14:01.678350 systemd-resolved[272]: Positive Trust Anchors: Feb 9 07:14:01.752691 kernel: audit: type=1130 audit(1707462841.710:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.678356 systemd-resolved[272]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 07:14:01.866519 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 07:14:01.866531 kernel: audit: type=1130 audit(1707462841.774:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.866538 kernel: device-mapper: uevent: version 1.0.3 Feb 9 07:14:01.866545 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 07:14:01.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.678376 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 07:14:01.939649 kernel: audit: type=1130 audit(1707462841.873:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.679905 systemd-resolved[272]: Defaulting to hostname 'linux'. 
Feb 9 07:14:01.990495 kernel: audit: type=1130 audit(1707462841.947:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.711619 systemd[1]: Started systemd-resolved.service. Feb 9 07:14:01.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.774646 systemd[1]: Finished kmod-static-nodes.service. Feb 9 07:14:02.099940 kernel: audit: type=1130 audit(1707462841.999:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:02.099950 kernel: audit: type=1130 audit(1707462842.053:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:02.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:01.868664 systemd-modules-load[270]: Inserted module 'dm_multipath' Feb 9 07:14:01.895019 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 07:14:01.947737 systemd[1]: Finished systemd-modules-load.service. Feb 9 07:14:01.999846 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 07:14:02.053776 systemd[1]: Reached target nss-lookup.target. Feb 9 07:14:02.110094 systemd[1]: Starting dracut-cmdline-ask.service... 
Feb 9 07:14:02.132094 systemd[1]: Starting systemd-sysctl.service... Feb 9 07:14:02.132404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 07:14:02.135167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 07:14:02.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:02.135639 systemd[1]: Finished systemd-sysctl.service. Feb 9 07:14:02.184553 kernel: audit: type=1130 audit(1707462842.133:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:02.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:02.196804 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 07:14:02.261596 kernel: audit: type=1130 audit(1707462842.196:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:02.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:02.253136 systemd[1]: Starting dracut-cmdline.service... 
Feb 9 07:14:02.276599 dracut-cmdline[294]: dracut-dracut-053 Feb 9 07:14:02.276599 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 07:14:02.276599 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 07:14:02.344582 kernel: Loading iSCSI transport class v2.0-870. Feb 9 07:14:02.344594 kernel: iscsi: registered transport (tcp) Feb 9 07:14:02.390865 kernel: iscsi: registered transport (qla4xxx) Feb 9 07:14:02.390918 kernel: QLogic iSCSI HBA Driver Feb 9 07:14:02.407426 systemd[1]: Finished dracut-cmdline.service. Feb 9 07:14:02.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:02.408015 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 07:14:02.464554 kernel: raid6: avx2x4 gen() 48045 MB/s Feb 9 07:14:02.499544 kernel: raid6: avx2x4 xor() 21724 MB/s Feb 9 07:14:02.534514 kernel: raid6: avx2x2 gen() 53854 MB/s Feb 9 07:14:02.569544 kernel: raid6: avx2x2 xor() 32124 MB/s Feb 9 07:14:02.604563 kernel: raid6: avx2x1 gen() 45269 MB/s Feb 9 07:14:02.638513 kernel: raid6: avx2x1 xor() 27303 MB/s Feb 9 07:14:02.672513 kernel: raid6: sse2x4 gen() 20874 MB/s Feb 9 07:14:02.706513 kernel: raid6: sse2x4 xor() 8872 MB/s Feb 9 07:14:02.740515 kernel: raid6: sse2x2 gen() 21140 MB/s Feb 9 07:14:02.774522 kernel: raid6: sse2x2 xor() 13117 MB/s Feb 9 07:14:02.808545 kernel: raid6: sse2x1 gen() 17890 MB/s Feb 9 07:14:02.859921 kernel: raid6: sse2x1 xor() 8745 MB/s Feb 9 07:14:02.859936 kernel: raid6: using algorithm avx2x2 gen() 53854 MB/s Feb 9 07:14:02.859943 kernel: raid6: .... xor() 32124 MB/s, rmw enabled Feb 9 07:14:02.877885 kernel: raid6: using avx2x2 recovery algorithm Feb 9 07:14:02.923492 kernel: xor: automatically using best checksumming function avx Feb 9 07:14:03.001492 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 07:14:03.006848 systemd[1]: Finished dracut-pre-udev.service. Feb 9 07:14:03.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:03.015000 audit: BPF prog-id=7 op=LOAD Feb 9 07:14:03.015000 audit: BPF prog-id=8 op=LOAD Feb 9 07:14:03.016473 systemd[1]: Starting systemd-udevd.service... Feb 9 07:14:03.024591 systemd-udevd[475]: Using default interface naming scheme 'v252'. Feb 9 07:14:03.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:03.030894 systemd[1]: Started systemd-udevd.service. 
Feb 9 07:14:03.070596 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Feb 9 07:14:03.047352 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 07:14:03.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:03.074194 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 07:14:03.087558 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 07:14:03.136120 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 07:14:03.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:03.163489 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 07:14:03.199810 kernel: ACPI: bus type USB registered Feb 9 07:14:03.199849 kernel: usbcore: registered new interface driver usbfs Feb 9 07:14:03.199858 kernel: usbcore: registered new interface driver hub Feb 9 07:14:03.234777 kernel: usbcore: registered new device driver usb Feb 9 07:14:03.235488 kernel: libata version 3.00 loaded. Feb 9 07:14:03.270033 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 07:14:03.270080 kernel: AES CTR mode by8 optimization enabled Feb 9 07:14:03.309209 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 07:14:03.309337 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 07:14:03.309528 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 07:14:03.312648 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 07:14:03.313485 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 9 07:14:03.313569 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 9 07:14:03.313639 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 07:14:03.350242 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 07:14:03.350318 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 07:14:03.402904 kernel: scsi host0: ahci Feb 9 07:14:03.403011 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 07:14:03.403093 kernel: scsi host1: ahci Feb 9 07:14:03.422779 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 9 07:14:03.453857 kernel: scsi host2: ahci Feb 9 07:14:03.466606 kernel: hub 1-0:1.0: USB hub found Feb 9 07:14:03.483515 kernel: scsi host3: ahci Feb 9 07:14:03.484538 kernel: hub 1-0:1.0: 16 ports detected Feb 9 07:14:03.484730 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 07:14:03.484746 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Feb 9 07:14:03.509549 kernel: scsi host4: ahci Feb 9 07:14:03.520492 kernel: hub 2-0:1.0: USB hub found Feb 9 07:14:03.523500 kernel: scsi host5: ahci Feb 9 07:14:03.523601 kernel: pps pps0: new PPS source ptp0 Feb 9 07:14:03.523785 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 9 07:14:03.523937 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 07:14:03.524039 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0a:d0 Feb 9 07:14:03.524229 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 9 07:14:03.524385 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 07:14:03.546663 kernel: hub 2-0:1.0: 10 ports detected Feb 9 07:14:03.547126 kernel: scsi host6: ahci Feb 9 07:14:03.556536 kernel: pps pps1: new PPS source ptp1 Feb 9 07:14:03.556604 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 9 07:14:03.556669 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 07:14:03.556721 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0a:d1 Feb 9 07:14:03.556774 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 9 07:14:03.556825 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 9 07:14:03.570545 kernel: usb: port power management may be unreliable Feb 9 07:14:03.570562 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Feb 9 07:14:03.571486 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 9 07:14:03.571561 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 07:14:03.764051 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 07:14:03.764078 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Feb 9 07:14:03.892400 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Feb 9 07:14:03.892417 kernel: hub 1-14:1.0: USB hub found Feb 9 07:14:03.892495 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Feb 9 07:14:03.935975 kernel: hub 1-14:1.0: 4 ports detected Feb 9 07:14:03.936054 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Feb 9 07:14:03.970347 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Feb 9 07:14:03.970363 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Feb 9 07:14:04.009676 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 9 07:14:04.038545 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 07:14:04.249554 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 9 07:14:04.249724 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 07:14:04.315310 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 9 07:14:04.315552 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 07:14:04.315696 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 07:14:04.331485 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 07:14:04.346509 kernel: ata7: SATA link 
down (SStatus 0 SControl 300) Feb 9 07:14:04.361520 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 07:14:04.375485 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 07:14:04.390484 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 07:14:04.390500 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 07:14:04.405483 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 07:14:04.433491 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 07:14:04.449483 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 07:14:04.497635 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 07:14:04.497654 kernel: ata2.00: Features: NCQ-prio Feb 9 07:14:04.497662 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 07:14:04.526608 kernel: ata1.00: Features: NCQ-prio Feb 9 07:14:04.544485 kernel: ata2.00: configured for UDMA/133 Feb 9 07:14:04.544500 kernel: ata1.00: configured for UDMA/133 Feb 9 07:14:04.558543 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 07:14:04.575485 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 07:14:04.593527 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 07:14:04.644272 kernel: usbcore: registered new interface driver usbhid Feb 9 07:14:04.644338 kernel: port_module: 9 callbacks suppressed Feb 9 07:14:04.644346 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 9 07:14:04.644439 kernel: usbhid: USB HID core driver Feb 9 07:14:04.661503 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 07:14:04.726485 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 07:14:04.726503 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:14:04.741338 
kernel: ata2.00: Enabling discard_zeroes_data Feb 9 07:14:04.778543 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 07:14:04.778742 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 07:14:04.778905 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 07:14:04.779059 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 07:14:04.779206 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 07:14:04.779382 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 07:14:04.779399 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 07:14:04.806099 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 9 07:14:04.806277 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 07:14:04.806436 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 07:14:04.851648 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 9 07:14:04.866541 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:14:04.866573 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 07:14:04.916928 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 07:14:04.933485 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 07:14:04.933500 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 07:14:04.969980 kernel: GPT:9289727 != 937703087 Feb 9 07:14:04.969991 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 07:14:04.969998 kernel: GPT:9289727 != 937703087 Feb 9 07:14:04.970005 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 9 07:14:04.970524 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 07:14:05.006434 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 07:14:05.158048 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 07:14:05.158065 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:14:05.174180 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 9 07:14:05.174254 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 07:14:05.225518 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Feb 9 07:14:05.248692 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 07:14:05.288744 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (539) Feb 9 07:14:05.288757 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Feb 9 07:14:05.270548 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 07:14:05.304795 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 07:14:05.330623 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 07:14:05.345856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 07:14:05.353659 systemd[1]: Starting disk-uuid.service... Feb 9 07:14:05.375151 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:14:05.375208 disk-uuid[688]: Primary Header is updated. Feb 9 07:14:05.375208 disk-uuid[688]: Secondary Entries is updated. Feb 9 07:14:05.375208 disk-uuid[688]: Secondary Header is updated. Feb 9 07:14:05.473551 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 07:14:05.473563 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:14:05.473570 kernel: GPT:disk_guids don't match. Feb 9 07:14:05.473577 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 9 07:14:05.473583 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 07:14:05.473589 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:14:05.497485 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 07:14:06.462909 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:14:06.481485 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 07:14:06.481936 disk-uuid[690]: The operation has completed successfully. Feb 9 07:14:06.516788 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 07:14:06.611737 kernel: audit: type=1130 audit(1707462846.524:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:06.611752 kernel: audit: type=1131 audit(1707462846.524:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:06.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:06.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:06.516831 systemd[1]: Finished disk-uuid.service. Feb 9 07:14:06.640518 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 07:14:06.550582 systemd[1]: Starting verity-setup.service... Feb 9 07:14:06.714703 systemd[1]: Found device dev-mapper-usr.device. Feb 9 07:14:06.725996 systemd[1]: Mounting sysusr-usr.mount... Feb 9 07:14:06.737168 systemd[1]: Finished verity-setup.service. Feb 9 07:14:06.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 07:14:06.805489 kernel: audit: type=1130 audit(1707462846.751:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:06.860488 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 07:14:06.860576 systemd[1]: Mounted sysusr-usr.mount. Feb 9 07:14:06.867768 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 07:14:06.868153 systemd[1]: Starting ignition-setup.service... Feb 9 07:14:06.953028 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 07:14:06.953044 kernel: BTRFS info (device sda6): using free space tree Feb 9 07:14:06.953052 kernel: BTRFS info (device sda6): has skinny extents Feb 9 07:14:06.953059 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 07:14:06.903960 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 07:14:06.961024 systemd[1]: Finished ignition-setup.service. Feb 9 07:14:06.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:06.976836 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 07:14:07.077430 kernel: audit: type=1130 audit(1707462846.976:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.077445 kernel: audit: type=1130 audit(1707462847.031:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:14:07.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.032181 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 07:14:07.084000 audit: BPF prog-id=9 op=LOAD Feb 9 07:14:07.086435 systemd[1]: Starting systemd-networkd.service... Feb 9 07:14:07.120524 kernel: audit: type=1334 audit(1707462847.084:24): prog-id=9 op=LOAD Feb 9 07:14:07.119778 systemd-networkd[877]: lo: Link UP Feb 9 07:14:07.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.151131 ignition[865]: Ignition 2.14.0 Feb 9 07:14:07.189759 kernel: audit: type=1130 audit(1707462847.128:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.119781 systemd-networkd[877]: lo: Gained carrier Feb 9 07:14:07.151135 ignition[865]: Stage: fetch-offline Feb 9 07:14:07.120105 systemd-networkd[877]: Enumeration completed Feb 9 07:14:07.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.151387 ignition[865]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 07:14:07.336593 kernel: audit: type=1130 audit(1707462847.214:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:14:07.336608 kernel: audit: type=1130 audit(1707462847.269:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.336619 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 07:14:07.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.120180 systemd[1]: Started systemd-networkd.service. Feb 9 07:14:07.365684 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Feb 9 07:14:07.151400 ignition[865]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 07:14:07.120981 systemd-networkd[877]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 07:14:07.154281 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 07:14:07.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.128634 systemd[1]: Reached target network.target. Feb 9 07:14:07.417609 iscsid[906]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 07:14:07.417609 iscsid[906]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 07:14:07.417609 iscsid[906]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 07:14:07.417609 iscsid[906]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 07:14:07.417609 iscsid[906]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 07:14:07.417609 iscsid[906]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 07:14:07.417609 iscsid[906]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 07:14:07.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:07.154345 ignition[865]: parsed url from cmdline: "" Feb 9 07:14:07.178938 unknown[865]: fetched base config from "system" Feb 9 07:14:07.584644 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 9 07:14:07.154347 ignition[865]: no config URL provided Feb 9 07:14:07.178942 unknown[865]: fetched user config from "system" Feb 9 07:14:07.154350 ignition[865]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 07:14:07.184043 systemd[1]: Starting iscsiuio.service... Feb 9 07:14:07.158483 ignition[865]: parsing config with SHA512: 40fa04dfb65b3a3695f2131fab6e69b04273ba77990b048ff2967692a49bb4f3416954ea63c34ea4194d0e26800f744ae81d8604fe754a2d54dbc1778c48efc3 Feb 9 07:14:07.196781 systemd[1]: Started iscsiuio.service. Feb 9 07:14:07.179308 ignition[865]: fetch-offline: fetch-offline passed Feb 9 07:14:07.214784 systemd[1]: Finished ignition-fetch-offline.service. 
Feb 9 07:14:07.179311 ignition[865]: POST message to Packet Timeline Feb 9 07:14:07.269826 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 07:14:07.179315 ignition[865]: POST Status error: resource requires networking Feb 9 07:14:07.270339 systemd[1]: Starting ignition-kargs.service... Feb 9 07:14:07.179346 ignition[865]: Ignition finished successfully Feb 9 07:14:07.337648 systemd-networkd[877]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 07:14:07.341039 ignition[895]: Ignition 2.14.0 Feb 9 07:14:07.351075 systemd[1]: Starting iscsid.service... Feb 9 07:14:07.341043 ignition[895]: Stage: kargs Feb 9 07:14:07.372627 systemd[1]: Started iscsid.service. Feb 9 07:14:07.341098 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 07:14:07.392018 systemd[1]: Starting dracut-initqueue.service... Feb 9 07:14:07.341107 ignition[895]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 07:14:07.409681 systemd[1]: Finished dracut-initqueue.service. Feb 9 07:14:07.343410 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 07:14:07.425613 systemd[1]: Reached target remote-fs-pre.target. Feb 9 07:14:07.344144 ignition[895]: kargs: kargs passed Feb 9 07:14:07.437674 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 07:14:07.344147 ignition[895]: POST message to Packet Timeline Feb 9 07:14:07.479703 systemd[1]: Reached target remote-fs.target. Feb 9 07:14:07.344157 ignition[895]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 07:14:07.510115 systemd[1]: Starting dracut-pre-mount.service... 
Feb 9 07:14:07.346640 ignition[895]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38124->[::1]:53: read: connection refused Feb 9 07:14:07.516795 systemd[1]: Finished dracut-pre-mount.service. Feb 9 07:14:07.547042 ignition[895]: GET https://metadata.packet.net/metadata: attempt #2 Feb 9 07:14:07.579680 systemd-networkd[877]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 07:14:07.547514 ignition[895]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44950->[::1]:53: read: connection refused Feb 9 07:14:07.609139 systemd-networkd[877]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 07:14:07.640208 systemd-networkd[877]: enp1s0f1np1: Link UP Feb 9 07:14:07.640642 systemd-networkd[877]: enp1s0f1np1: Gained carrier Feb 9 07:14:07.652057 systemd-networkd[877]: enp1s0f0np0: Link UP Feb 9 07:14:07.652419 systemd-networkd[877]: eno2: Link UP Feb 9 07:14:07.652780 systemd-networkd[877]: eno1: Link UP Feb 9 07:14:07.948353 ignition[895]: GET https://metadata.packet.net/metadata: attempt #3 Feb 9 07:14:07.949717 ignition[895]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44226->[::1]:53: read: connection refused Feb 9 07:14:08.376710 systemd-networkd[877]: enp1s0f0np0: Gained carrier Feb 9 07:14:08.385721 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Feb 9 07:14:08.419792 systemd-networkd[877]: enp1s0f0np0: DHCPv4 address 147.75.49.59/31, gateway 147.75.49.58 acquired from 145.40.83.140 Feb 9 07:14:08.750025 ignition[895]: GET https://metadata.packet.net/metadata: attempt #4 Feb 9 07:14:08.751169 ignition[895]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56872->[::1]:53: read: connection refused Feb 9 
07:14:08.995981 systemd-networkd[877]: enp1s0f1np1: Gained IPv6LL Feb 9 07:14:10.340075 systemd-networkd[877]: enp1s0f0np0: Gained IPv6LL Feb 9 07:14:10.352656 ignition[895]: GET https://metadata.packet.net/metadata: attempt #5 Feb 9 07:14:10.353816 ignition[895]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56253->[::1]:53: read: connection refused Feb 9 07:14:13.556325 ignition[895]: GET https://metadata.packet.net/metadata: attempt #6 Feb 9 07:14:13.596357 ignition[895]: GET result: OK Feb 9 07:14:13.801635 ignition[895]: Ignition finished successfully Feb 9 07:14:13.806284 systemd[1]: Finished ignition-kargs.service. Feb 9 07:14:13.887091 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 9 07:14:13.887121 kernel: audit: type=1130 audit(1707462853.816:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:13.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:13.826280 ignition[925]: Ignition 2.14.0 Feb 9 07:14:13.818780 systemd[1]: Starting ignition-disks.service... 
Feb 9 07:14:13.826283 ignition[925]: Stage: disks Feb 9 07:14:13.826354 ignition[925]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 07:14:13.826363 ignition[925]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 07:14:13.827836 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 07:14:13.829367 ignition[925]: disks: disks passed Feb 9 07:14:13.829370 ignition[925]: POST message to Packet Timeline Feb 9 07:14:13.829380 ignition[925]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 07:14:13.852518 ignition[925]: GET result: OK Feb 9 07:14:14.064449 ignition[925]: Ignition finished successfully Feb 9 07:14:14.067547 systemd[1]: Finished ignition-disks.service. Feb 9 07:14:14.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:14.081027 systemd[1]: Reached target initrd-root-device.target. Feb 9 07:14:14.155743 kernel: audit: type=1130 audit(1707462854.080:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:14.141717 systemd[1]: Reached target local-fs-pre.target. Feb 9 07:14:14.141751 systemd[1]: Reached target local-fs.target. Feb 9 07:14:14.164723 systemd[1]: Reached target sysinit.target. Feb 9 07:14:14.178691 systemd[1]: Reached target basic.target. Feb 9 07:14:14.194313 systemd[1]: Starting systemd-fsck-root.service... Feb 9 07:14:14.222350 systemd-fsck[938]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 07:14:14.233858 systemd[1]: Finished systemd-fsck-root.service. 
Feb 9 07:14:14.323112 kernel: audit: type=1130 audit(1707462854.243:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:14.323127 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 07:14:14.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:14.248881 systemd[1]: Mounting sysroot.mount... Feb 9 07:14:14.330897 systemd[1]: Mounted sysroot.mount. Feb 9 07:14:14.344889 systemd[1]: Reached target initrd-root-fs.target. Feb 9 07:14:14.361570 systemd[1]: Mounting sysroot-usr.mount... Feb 9 07:14:14.377868 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 07:14:14.391382 systemd[1]: Starting flatcar-static-network.service... Feb 9 07:14:14.405713 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 07:14:14.405805 systemd[1]: Reached target ignition-diskful.target. Feb 9 07:14:14.424706 systemd[1]: Mounted sysroot-usr.mount. Feb 9 07:14:14.447149 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 07:14:14.518601 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (947) Feb 9 07:14:14.518625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 07:14:14.461192 systemd[1]: Starting initrd-setup-root.service... Feb 9 07:14:14.565834 kernel: BTRFS info (device sda6): using free space tree Feb 9 07:14:14.565847 kernel: BTRFS info (device sda6): has skinny extents Feb 9 07:14:14.545322 systemd[1]: Finished initrd-setup-root.service. 
Feb 9 07:14:14.650741 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 9 07:14:14.650758 kernel: audit: type=1130 audit(1707462854.585:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.650800 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 07:14:14.667736 coreos-metadata[945]: Feb 09 07:14:14.544 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 07:14:14.667736 coreos-metadata[945]: Feb 09 07:14:14.587 INFO Fetch successful
Feb 9 07:14:14.667736 coreos-metadata[945]: Feb 09 07:14:14.604 INFO wrote hostname ci-3510.3.2-a-29c32a4854 to /sysroot/etc/hostname
Feb 9 07:14:14.695705 coreos-metadata[946]: Feb 09 07:14:14.544 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 07:14:14.695705 coreos-metadata[946]: Feb 09 07:14:14.608 INFO Fetch successful
Feb 9 07:14:14.891241 kernel: audit: type=1130 audit(1707462854.715:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.891256 kernel: audit: type=1130 audit(1707462854.778:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.891264 kernel: audit: type=1131 audit(1707462854.778:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.586069 systemd[1]: Starting ignition-mount.service...
Feb 9 07:14:14.906608 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory
Feb 9 07:14:14.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.658094 systemd[1]: Starting sysroot-boot.service...
Feb 9 07:14:14.978737 kernel: audit: type=1130 audit(1707462854.914:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:14.978751 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 07:14:14.988743 bash[1016]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 07:14:14.675634 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 07:14:15.008667 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 07:14:15.019690 ignition[1022]: INFO : Ignition 2.14.0
Feb 9 07:14:15.019690 ignition[1022]: INFO : Stage: mount
Feb 9 07:14:15.019690 ignition[1022]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 07:14:15.019690 ignition[1022]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 07:14:15.019690 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 07:14:15.019690 ignition[1022]: INFO : mount: mount passed
Feb 9 07:14:15.019690 ignition[1022]: INFO : POST message to Packet Timeline
Feb 9 07:14:15.019690 ignition[1022]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 07:14:15.019690 ignition[1022]: INFO : GET result: OK
Feb 9 07:14:14.717419 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 9 07:14:14.717496 systemd[1]: Finished flatcar-static-network.service.
Feb 9 07:14:14.778739 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 07:14:14.900115 systemd[1]: Finished sysroot-boot.service.
Feb 9 07:14:15.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:15.213105 ignition[1022]: INFO : Ignition finished successfully
Feb 9 07:14:15.227577 kernel: audit: type=1130 audit(1707462855.146:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:15.133580 systemd[1]: Finished ignition-mount.service.
Feb 9 07:14:15.148663 systemd[1]: Starting ignition-files.service...
Feb 9 07:14:15.233027 unknown[1035]: wrote ssh authorized keys file for user: core
Feb 9 07:14:15.249590 ignition[1035]: INFO : Ignition 2.14.0
Feb 9 07:14:15.249590 ignition[1035]: INFO : Stage: files
Feb 9 07:14:15.249590 ignition[1035]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 07:14:15.249590 ignition[1035]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 07:14:15.249590 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 07:14:15.249590 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 07:14:15.249590 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 07:14:15.249590 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 07:14:15.249590 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 07:14:15.249590 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 07:14:15.249590 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 07:14:15.249590 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 07:14:15.249590 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 07:14:15.249590 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 07:14:15.425886 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 07:14:15.425886 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 07:14:15.425886 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 07:14:15.768401 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 07:14:15.866214 ignition[1035]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 07:14:15.866214 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 07:14:15.910723 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 07:14:15.910723 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 07:14:16.279803 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 07:14:16.366091 ignition[1035]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 07:14:16.389804 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 07:14:16.389804 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 07:14:16.389804 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 9 07:14:16.446128 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 07:14:16.739163 ignition[1035]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 9 07:14:16.739163 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 07:14:16.780698 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 07:14:16.780698 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 9 07:14:16.812706 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 07:14:17.217411 ignition[1035]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 9 07:14:17.217411 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 07:14:17.257697 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 07:14:17.257697 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb 9 07:14:17.289571 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 07:14:17.429693 ignition[1035]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb 9 07:14:17.429693 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 07:14:17.472704 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 07:14:17.472704 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 07:14:17.472704 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 07:14:17.472704 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 9 07:14:17.838217 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 07:14:17.985787 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 07:14:17.985787 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2246380122"
Feb 9 07:14:18.038576 ignition[1035]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2246380122": device or resource busy
Feb 9 07:14:18.038576 ignition[1035]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2246380122", trying btrfs: device or resource busy
Feb 9 07:14:18.038576 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2246380122"
Feb 9 07:14:18.344414 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1043)
Feb 9 07:14:18.344446 kernel: audit: type=1130 audit(1707462858.285:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2246380122"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2246380122"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2246380122"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(15): [started] processing unit "packet-phone-home.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(15): [finished] processing unit "packet-phone-home.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(18): [started] processing unit "prepare-critools.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(18): [finished] processing unit "prepare-critools.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(1a): [started] processing unit "prepare-helm.service"
Feb 9 07:14:18.344485 ignition[1035]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 07:14:18.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.272083 systemd[1]: Finished ignition-files.service.
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(1f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(20): [started] setting preset to enabled for "packet-phone-home.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: op(20): [finished] setting preset to enabled for "packet-phone-home.service"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 07:14:18.714845 ignition[1035]: INFO : files: files passed
Feb 9 07:14:18.714845 ignition[1035]: INFO : POST message to Packet Timeline
Feb 9 07:14:18.714845 ignition[1035]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 07:14:18.714845 ignition[1035]: INFO : GET result: OK
Feb 9 07:14:18.714845 ignition[1035]: INFO : Ignition finished successfully
Feb 9 07:14:19.259696 kernel: kauditd_printk_skb: 7 callbacks suppressed
Feb 9 07:14:19.259713 kernel: audit: type=1131 audit(1707462859.009:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.259726 kernel: audit: type=1131 audit(1707462859.107:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.259733 kernel: audit: type=1131 audit(1707462859.176:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.291323 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 07:14:18.352757 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 07:14:19.303706 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 07:14:18.353066 systemd[1]: Starting ignition-quench.service...
Feb 9 07:14:18.389759 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 07:14:19.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.404944 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 07:14:19.489682 kernel: audit: type=1131 audit(1707462859.349:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.489696 kernel: audit: type=1131 audit(1707462859.429:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.405014 systemd[1]: Finished ignition-quench.service.
Feb 9 07:14:19.560107 kernel: audit: type=1131 audit(1707462859.498:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.439013 systemd[1]: Reached target ignition-complete.target.
Feb 9 07:14:18.461595 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 07:14:18.493369 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 07:14:19.605931 ignition[1085]: INFO : Ignition 2.14.0
Feb 9 07:14:19.605931 ignition[1085]: INFO : Stage: umount
Feb 9 07:14:19.605931 ignition[1085]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 07:14:19.605931 ignition[1085]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 07:14:19.605931 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 07:14:19.605931 ignition[1085]: INFO : umount: umount passed
Feb 9 07:14:19.605931 ignition[1085]: INFO : POST message to Packet Timeline
Feb 9 07:14:19.605931 ignition[1085]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 07:14:19.605931 ignition[1085]: INFO : GET result: OK
Feb 9 07:14:19.939708 kernel: audit: type=1131 audit(1707462859.613:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.939814 kernel: audit: type=1131 audit(1707462859.688:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.939827 kernel: audit: type=1131 audit(1707462859.757:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.939838 kernel: audit: type=1131 audit(1707462859.818:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.493411 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 07:14:19.953886 ignition[1085]: INFO : Ignition finished successfully
Feb 9 07:14:19.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.521708 systemd[1]: Reached target initrd-fs.target.
Feb 9 07:14:19.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.548689 systemd[1]: Reached target initrd.target.
Feb 9 07:14:19.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.573911 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 07:14:20.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.576115 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 07:14:18.610191 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 07:14:18.630589 systemd[1]: Starting initrd-cleanup.service...
Feb 9 07:14:20.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.668610 systemd[1]: Stopped target nss-lookup.target.
Feb 9 07:14:20.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.683039 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 07:14:20.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:20.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:20.089000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 07:14:18.708249 systemd[1]: Stopped target timers.target.
Feb 9 07:14:18.722064 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 07:14:18.722430 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 07:14:20.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.748361 systemd[1]: Stopped target initrd.target.
Feb 9 07:14:20.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.767080 systemd[1]: Stopped target basic.target.
Feb 9 07:14:20.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.788199 systemd[1]: Stopped target ignition-complete.target.
Feb 9 07:14:18.812213 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 07:14:20.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.833074 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 07:14:18.855204 systemd[1]: Stopped target remote-fs.target.
Feb 9 07:14:18.876073 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 07:14:18.898088 systemd[1]: Stopped target sysinit.target.
Feb 9 07:14:20.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.922100 systemd[1]: Stopped target local-fs.target.
Feb 9 07:14:20.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.945083 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 07:14:20.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.966075 systemd[1]: Stopped target swap.target.
Feb 9 07:14:18.985965 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 07:14:20.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:18.986333 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 07:14:20.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.010303 systemd[1]: Stopped target cryptsetup.target.
Feb 9 07:14:20.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.100794 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 07:14:20.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:20.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.100897 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 07:14:19.107973 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 07:14:19.108080 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 07:14:19.176816 systemd[1]: Stopped target paths.target.
Feb 9 07:14:19.243714 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 07:14:19.249702 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 07:14:19.268844 systemd[1]: Stopped target slices.target.
Feb 9 07:14:19.286773 systemd[1]: Stopped target sockets.target.
Feb 9 07:14:19.311758 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 07:14:19.311858 systemd[1]: Closed iscsid.socket.
Feb 9 07:14:19.333092 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 07:14:19.333399 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 07:14:19.350189 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 07:14:20.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:19.350558 systemd[1]: Stopped ignition-files.service.
Feb 9 07:14:19.429743 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 07:14:19.429802 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 07:14:19.499257 systemd[1]: Stopping ignition-mount.service...
Feb 9 07:14:19.566793 systemd[1]: Stopping iscsiuio.service...
Feb 9 07:14:19.584764 systemd[1]: Stopping sysroot-boot.service...
Feb 9 07:14:19.597742 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 07:14:20.571382 iscsid[906]: iscsid shutting down.
Feb 9 07:14:19.598143 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 07:14:19.614166 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 07:14:20.571496 systemd-journald[269]: Received SIGTERM from PID 1 (n/a).
Feb 9 07:14:19.614535 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 07:14:19.692952 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 07:14:19.694055 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 07:14:19.694180 systemd[1]: Stopped iscsiuio.service.
Feb 9 07:14:19.757905 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 07:14:19.757944 systemd[1]: Stopped sysroot-boot.service.
Feb 9 07:14:19.819033 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 07:14:19.819072 systemd[1]: Stopped ignition-mount.service.
Feb 9 07:14:19.916922 systemd[1]: Stopped target network.target.
Feb 9 07:14:19.926831 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 07:14:19.926865 systemd[1]: Closed iscsiuio.socket.
Feb 9 07:14:19.946659 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 07:14:19.946711 systemd[1]: Stopped ignition-disks.service.
Feb 9 07:14:19.961724 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 07:14:19.961793 systemd[1]: Stopped ignition-kargs.service.
Feb 9 07:14:19.978877 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 07:14:19.979002 systemd[1]: Stopped ignition-setup.service.
Feb 9 07:14:19.993899 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 07:14:19.994048 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 07:14:20.010183 systemd[1]: Stopping systemd-networkd.service...
Feb 9 07:14:20.020639 systemd-networkd[877]: enp1s0f0np0: DHCPv6 lease lost
Feb 9 07:14:20.027702 systemd-networkd[877]: enp1s0f1np1: DHCPv6 lease lost
Feb 9 07:14:20.027963 systemd[1]: Stopping systemd-resolved.service...
Feb 9 07:14:20.570000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 07:14:20.043607 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 07:14:20.043848 systemd[1]: Stopped systemd-resolved.service.
Feb 9 07:14:20.059095 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 07:14:20.059444 systemd[1]: Stopped systemd-networkd.service.
Feb 9 07:14:20.073799 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 07:14:20.073844 systemd[1]: Finished initrd-cleanup.service.
Feb 9 07:14:20.090165 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 07:14:20.090183 systemd[1]: Closed systemd-networkd.socket.
Feb 9 07:14:20.107073 systemd[1]: Stopping network-cleanup.service...
Feb 9 07:14:20.113687 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 07:14:20.113729 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 07:14:20.135962 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 07:14:20.136067 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 07:14:20.151160 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 07:14:20.151289 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 07:14:20.166121 systemd[1]: Stopping systemd-udevd.service...
Feb 9 07:14:20.183324 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 07:14:20.184594 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 07:14:20.184653 systemd[1]: Stopped systemd-udevd.service.
Feb 9 07:14:20.199995 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 07:14:20.200024 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 07:14:20.214729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 07:14:20.214763 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 07:14:20.231716 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 07:14:20.231752 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 07:14:20.253622 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 07:14:20.253652 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 07:14:20.268727 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 07:14:20.268783 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 07:14:20.284786 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 07:14:20.298552 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 07:14:20.298584 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 07:14:20.313045 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 07:14:20.313155 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 07:14:20.328754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 07:14:20.328866 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 07:14:20.345937 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 07:14:20.347144 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 07:14:20.347340 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 07:14:20.462993 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 07:14:20.463201 systemd[1]: Stopped network-cleanup.service.
Feb 9 07:14:20.471954 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 07:14:20.489517 systemd[1]: Starting initrd-switch-root.service...
Feb 9 07:14:20.526699 systemd[1]: Switching root.
Feb 9 07:14:20.573199 systemd-journald[269]: Journal stopped
Feb 9 07:14:24.527626 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 07:14:24.527641 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 07:14:24.527650 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 07:14:24.527655 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 07:14:24.527660 kernel: SELinux: policy capability open_perms=1
Feb 9 07:14:24.527665 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 07:14:24.527671 kernel: SELinux: policy capability always_check_network=0
Feb 9 07:14:24.527677 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 07:14:24.527682 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 07:14:24.527688 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 07:14:24.527694 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 07:14:24.527700 systemd[1]: Successfully loaded SELinux policy in 328.253ms.
Feb 9 07:14:24.527707 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.755ms.
Feb 9 07:14:24.527714 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 07:14:24.527722 systemd[1]: Detected architecture x86-64.
Feb 9 07:14:24.527728 systemd[1]: Detected first boot.
Feb 9 07:14:24.527733 systemd[1]: Hostname set to .
Feb 9 07:14:24.527740 systemd[1]: Initializing machine ID from random generator.
Feb 9 07:14:24.527746 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 07:14:24.527751 systemd[1]: Populated /etc with preset unit settings.
Feb 9 07:14:24.527758 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 07:14:24.527765 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 07:14:24.527772 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 07:14:24.527778 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 07:14:24.527784 systemd[1]: Stopped iscsid.service.
Feb 9 07:14:24.527790 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 07:14:24.527797 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 07:14:24.527804 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 07:14:24.527810 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 07:14:24.527817 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 07:14:24.527823 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 9 07:14:24.527829 systemd[1]: Created slice system-getty.slice.
Feb 9 07:14:24.527835 systemd[1]: Created slice system-modprobe.slice.
Feb 9 07:14:24.527841 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 07:14:24.527847 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 07:14:24.527853 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 07:14:24.527860 systemd[1]: Created slice user.slice.
Feb 9 07:14:24.527866 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 07:14:24.527872 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 07:14:24.527878 systemd[1]: Set up automount boot.automount.
Feb 9 07:14:24.527886 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 07:14:24.527892 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 07:14:24.527898 systemd[1]: Stopped target initrd-fs.target.
Feb 9 07:14:24.527905 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 07:14:24.527912 systemd[1]: Reached target integritysetup.target.
Feb 9 07:14:24.527919 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 07:14:24.527925 systemd[1]: Reached target remote-fs.target.
Feb 9 07:14:24.527931 systemd[1]: Reached target slices.target.
Feb 9 07:14:24.527937 systemd[1]: Reached target swap.target.
Feb 9 07:14:24.527944 systemd[1]: Reached target torcx.target.
Feb 9 07:14:24.527950 systemd[1]: Reached target veritysetup.target.
Feb 9 07:14:24.527957 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 07:14:24.527964 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 07:14:24.527971 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 07:14:24.527977 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 07:14:24.527984 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 07:14:24.527990 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 07:14:24.527996 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 07:14:24.528004 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 07:14:24.528010 systemd[1]: Mounting media.mount...
Feb 9 07:14:24.528017 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 07:14:24.528023 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 07:14:24.528030 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 07:14:24.528036 systemd[1]: Mounting tmp.mount...
Feb 9 07:14:24.528042 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 07:14:24.528049 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 07:14:24.528055 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 07:14:24.528063 systemd[1]: Starting modprobe@configfs.service...
Feb 9 07:14:24.528069 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 07:14:24.528076 systemd[1]: Starting modprobe@drm.service...
Feb 9 07:14:24.528082 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 07:14:24.528088 systemd[1]: Starting modprobe@fuse.service...
Feb 9 07:14:24.528095 kernel: fuse: init (API version 7.34)
Feb 9 07:14:24.528101 systemd[1]: Starting modprobe@loop.service...
Feb 9 07:14:24.528107 kernel: loop: module loaded
Feb 9 07:14:24.528114 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 07:14:24.528121 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 07:14:24.528128 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 07:14:24.528134 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 07:14:24.528140 kernel: kauditd_printk_skb: 64 callbacks suppressed
Feb 9 07:14:24.528146 kernel: audit: type=1131 audit(1707462864.169:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.528153 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 07:14:24.528159 kernel: audit: type=1131 audit(1707462864.257:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.528166 systemd[1]: Stopped systemd-journald.service.
Feb 9 07:14:24.528173 kernel: audit: type=1130 audit(1707462864.321:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.528179 kernel: audit: type=1131 audit(1707462864.321:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.528185 kernel: audit: type=1334 audit(1707462864.406:119): prog-id=21 op=LOAD
Feb 9 07:14:24.528191 kernel: audit: type=1334 audit(1707462864.424:120): prog-id=22 op=LOAD
Feb 9 07:14:24.528197 kernel: audit: type=1334 audit(1707462864.443:121): prog-id=23 op=LOAD
Feb 9 07:14:24.528202 kernel: audit: type=1334 audit(1707462864.461:122): prog-id=19 op=UNLOAD
Feb 9 07:14:24.528208 systemd[1]: Starting systemd-journald.service...
Feb 9 07:14:24.528215 kernel: audit: type=1334 audit(1707462864.461:123): prog-id=20 op=UNLOAD
Feb 9 07:14:24.528221 systemd[1]: Starting systemd-modules-load.service...
Feb 9 07:14:24.528228 kernel: audit: type=1305 audit(1707462864.524:124): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 07:14:24.528236 systemd-journald[1235]: Journal started
Feb 9 07:14:24.528261 systemd-journald[1235]: Runtime Journal (/run/log/journal/7ae56cb81d7c492c887b6e151a9488d7) is 8.0M, max 640.1M, 632.1M free.
Feb 9 07:14:20.984000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 07:14:21.260000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 07:14:21.262000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 07:14:21.262000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 07:14:21.263000 audit: BPF prog-id=10 op=LOAD
Feb 9 07:14:21.263000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 07:14:21.263000 audit: BPF prog-id=11 op=LOAD
Feb 9 07:14:21.263000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 07:14:21.330000 audit[1125]: AVC avc: denied { associate } for pid=1125 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 07:14:21.330000 audit[1125]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1108 pid=1125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 07:14:21.330000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 07:14:21.358000 audit[1125]: AVC avc: denied { associate } for pid=1125 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 07:14:21.358000 audit[1125]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79b9 a2=1ed a3=0 items=2 ppid=1108 pid=1125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 07:14:21.358000 audit: CWD cwd="/"
Feb 9 07:14:21.358000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 07:14:21.358000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 07:14:21.358000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 07:14:22.870000 audit: BPF prog-id=12 op=LOAD
Feb 9 07:14:22.870000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 07:14:22.870000 audit: BPF prog-id=13 op=LOAD
Feb 9 07:14:22.870000 audit: BPF prog-id=14 op=LOAD
Feb 9 07:14:22.870000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 07:14:22.870000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 07:14:22.870000 audit: BPF prog-id=15 op=LOAD
Feb 9 07:14:22.870000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=16 op=LOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=17 op=LOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=18 op=LOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=19 op=LOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=20 op=LOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 07:14:22.871000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 07:14:22.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:22.917000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 07:14:22.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:22.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:22.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.406000 audit: BPF prog-id=21 op=LOAD
Feb 9 07:14:24.424000 audit: BPF prog-id=22 op=LOAD
Feb 9 07:14:24.443000 audit: BPF prog-id=23 op=LOAD
Feb 9 07:14:24.461000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 07:14:24.461000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 07:14:24.524000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 07:14:22.869566 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 07:14:21.330190 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 07:14:22.873487 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 07:14:21.330635 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 07:14:21.330649 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 07:14:21.330671 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 07:14:21.330679 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 07:14:21.330701 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 07:14:21.330710 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 07:14:21.330842 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 07:14:21.330868 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 07:14:21.330877 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 07:14:21.331260 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 07:14:21.331284 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 07:14:21.331298 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 07:14:21.331308 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 07:14:21.331319 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 07:14:21.331328 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 07:14:22.524045 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:22Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 07:14:22.524186 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:22Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 07:14:22.524242 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:22Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 07:14:22.524330 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:22Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 07:14:22.524359 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:22Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 07:14:22.524393 /usr/lib/systemd/system-generators/torcx-generator[1125]: time="2024-02-09T07:14:22Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 07:14:24.524000 audit[1235]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff1b065ef0 a2=4000 a3=7fff1b065f8c items=0 ppid=1 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 07:14:24.524000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 07:14:24.606677 systemd[1]: Starting systemd-network-generator.service...
Feb 9 07:14:24.633485 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 07:14:24.660531 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 07:14:24.703278 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 07:14:24.703301 systemd[1]: Stopped verity-setup.service.
Feb 9 07:14:24.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.748485 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 07:14:24.768671 systemd[1]: Started systemd-journald.service.
Feb 9 07:14:24.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.777008 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 07:14:24.783747 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 07:14:24.790747 systemd[1]: Mounted media.mount.
Feb 9 07:14:24.797732 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 07:14:24.806726 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 07:14:24.815706 systemd[1]: Mounted tmp.mount.
Feb 9 07:14:24.822791 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 07:14:24.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.830811 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 07:14:24.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:14:24.839863 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 07:14:24.839984 systemd[1]: Finished modprobe@configfs.service.
Feb 9 07:14:24.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.849154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 07:14:24.849364 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 07:14:24.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.858094 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 07:14:24.858338 systemd[1]: Finished modprobe@drm.service. Feb 9 07:14:24.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.867447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 07:14:24.867829 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 9 07:14:24.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.877325 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 07:14:24.877650 systemd[1]: Finished modprobe@fuse.service. Feb 9 07:14:24.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.886310 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 07:14:24.886756 systemd[1]: Finished modprobe@loop.service. Feb 9 07:14:24.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.895471 systemd[1]: Finished systemd-modules-load.service. 
Feb 9 07:14:24.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.904329 systemd[1]: Finished systemd-network-generator.service. Feb 9 07:14:24.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.913326 systemd[1]: Finished systemd-remount-fs.service. Feb 9 07:14:24.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.922330 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 07:14:24.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:24.931824 systemd[1]: Reached target network-pre.target. Feb 9 07:14:24.943245 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 07:14:24.954069 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 07:14:24.960758 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 07:14:24.961749 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 07:14:24.969100 systemd[1]: Starting systemd-journal-flush.service... Feb 9 07:14:24.972741 systemd-journald[1235]: Time spent on flushing to /var/log/journal/7ae56cb81d7c492c887b6e151a9488d7 is 14.792ms for 1625 entries. Feb 9 07:14:24.972741 systemd-journald[1235]: System Journal (/var/log/journal/7ae56cb81d7c492c887b6e151a9488d7) is 8.0M, max 195.6M, 187.6M free. 
Feb 9 07:14:25.017843 systemd-journald[1235]: Received client request to flush runtime journal. Feb 9 07:14:24.985625 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 07:14:24.986140 systemd[1]: Starting systemd-random-seed.service... Feb 9 07:14:25.001606 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 07:14:25.002096 systemd[1]: Starting systemd-sysctl.service... Feb 9 07:14:25.009177 systemd[1]: Starting systemd-sysusers.service... Feb 9 07:14:25.016152 systemd[1]: Starting systemd-udev-settle.service... Feb 9 07:14:25.024699 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 07:14:25.033660 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 07:14:25.041721 systemd[1]: Finished systemd-journal-flush.service. Feb 9 07:14:25.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:25.049722 systemd[1]: Finished systemd-random-seed.service. Feb 9 07:14:25.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:25.057683 systemd[1]: Finished systemd-sysctl.service. Feb 9 07:14:25.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:25.065676 systemd[1]: Finished systemd-sysusers.service. 
Feb 9 07:14:25.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:25.074657 systemd[1]: Reached target first-boot-complete.target. Feb 9 07:14:25.083235 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 07:14:25.092663 udevadm[1251]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 07:14:25.103989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 07:14:25.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:25.263536 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 07:14:25.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:25.272000 audit: BPF prog-id=24 op=LOAD Feb 9 07:14:25.272000 audit: BPF prog-id=25 op=LOAD Feb 9 07:14:25.272000 audit: BPF prog-id=7 op=UNLOAD Feb 9 07:14:25.272000 audit: BPF prog-id=8 op=UNLOAD Feb 9 07:14:25.273822 systemd[1]: Starting systemd-udevd.service... Feb 9 07:14:25.285809 systemd-udevd[1254]: Using default interface naming scheme 'v252'. Feb 9 07:14:25.305380 systemd[1]: Started systemd-udevd.service. Feb 9 07:14:25.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:25.315427 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. 
Feb 9 07:14:25.314000 audit: BPF prog-id=26 op=LOAD Feb 9 07:14:25.316573 systemd[1]: Starting systemd-networkd.service... Feb 9 07:14:25.339000 audit: BPF prog-id=27 op=LOAD Feb 9 07:14:25.359099 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 07:14:25.359229 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 07:14:25.359243 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1265) Feb 9 07:14:25.359255 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 07:14:25.383000 audit: BPF prog-id=28 op=LOAD Feb 9 07:14:25.403000 audit: BPF prog-id=29 op=LOAD Feb 9 07:14:25.405484 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 07:14:25.405965 systemd[1]: Starting systemd-userdbd.service... Feb 9 07:14:25.425487 kernel: ACPI: button: Power Button [PWRF] Feb 9 07:14:25.456955 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 07:14:25.469653 systemd[1]: Started systemd-userdbd.service. Feb 9 07:14:25.479491 kernel: IPMI message handler: version 39.2 Feb 9 07:14:25.345000 audit[1267]: AVC avc: denied { confidentiality } for pid=1267 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 07:14:25.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:14:25.501494 kernel: ipmi device interface Feb 9 07:14:25.345000 audit[1267]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556ddadedd00 a1=4d8bc a2=7f71e4a25bc5 a3=5 items=42 ppid=1254 pid=1267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 07:14:25.345000 audit: CWD cwd="/" Feb 9 07:14:25.345000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=1 name=(null) inode=23516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=2 name=(null) inode=23516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=3 name=(null) inode=23517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=4 name=(null) inode=23516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=5 name=(null) inode=23518 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=6 name=(null) inode=23516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH 
item=7 name=(null) inode=23519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=8 name=(null) inode=23519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=9 name=(null) inode=23520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=10 name=(null) inode=23519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=11 name=(null) inode=23521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=12 name=(null) inode=23519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=13 name=(null) inode=23522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=14 name=(null) inode=23519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=15 name=(null) inode=23523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=16 name=(null) inode=23519 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=17 name=(null) inode=23524 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=18 name=(null) inode=23516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=19 name=(null) inode=23525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=20 name=(null) inode=23525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=21 name=(null) inode=23526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=22 name=(null) inode=23525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=23 name=(null) inode=23527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=24 name=(null) inode=23525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=25 name=(null) inode=23528 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=26 name=(null) inode=23525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=27 name=(null) inode=23529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=28 name=(null) inode=23525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=29 name=(null) inode=23530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=30 name=(null) inode=23516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=31 name=(null) inode=23531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=32 name=(null) inode=23531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=33 name=(null) inode=23532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=34 name=(null) inode=23531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=35 name=(null) inode=23533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=36 name=(null) inode=23531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=37 name=(null) inode=23534 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=38 name=(null) inode=23531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=39 name=(null) inode=23535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=40 name=(null) inode=23531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PATH item=41 name=(null) inode=23536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:14:25.345000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 07:14:25.513487 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 07:14:25.513787 kernel: ipmi_si: IPMI System Interface driver Feb 9 07:14:25.513811 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 07:14:25.513990 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 07:14:25.567920 kernel: 
i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 07:14:25.568071 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 07:14:25.608183 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 07:14:25.608283 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 07:14:25.687566 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 07:14:25.687600 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 07:14:25.687711 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 07:14:25.773010 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 07:14:25.773143 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 07:14:25.773156 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 07:14:25.796498 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 07:14:25.853489 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 9 07:14:25.876571 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 9 07:14:25.877085 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 9 07:14:25.877339 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 9 07:14:25.950153 systemd-networkd[1301]: bond0: netdev ready Feb 9 07:14:25.952437 systemd-networkd[1301]: lo: Link UP Feb 9 07:14:25.952440 systemd-networkd[1301]: lo: Gained carrier Feb 9 07:14:25.952914 systemd-networkd[1301]: Enumeration completed Feb 9 07:14:25.952982 systemd[1]: Started systemd-networkd.service. Feb 9 07:14:25.953205 systemd-networkd[1301]: bond0: Configuring with /etc/systemd/network/05-bond0.network. 
Feb 9 07:14:25.960596 systemd-networkd[1301]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:74:e9.network. Feb 9 07:14:25.961859 kernel: intel_rapl_common: Found RAPL domain package Feb 9 07:14:25.961884 kernel: intel_rapl_common: Found RAPL domain core Feb 9 07:14:25.961895 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 07:14:25.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:26.009486 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 07:14:26.029516 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 07:14:26.029742 systemd[1]: Finished systemd-udev-settle.service. Feb 9 07:14:26.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:26.038215 systemd[1]: Starting lvm2-activation-early.service... Feb 9 07:14:26.053913 lvm[1360]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 07:14:26.094891 systemd[1]: Finished lvm2-activation-early.service. Feb 9 07:14:26.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:26.103606 systemd[1]: Reached target cryptsetup.target. Feb 9 07:14:26.112155 systemd[1]: Starting lvm2-activation.service... Feb 9 07:14:26.114304 lvm[1361]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 07:14:26.147899 systemd[1]: Finished lvm2-activation.service. 
Feb 9 07:14:26.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:26.155605 systemd[1]: Reached target local-fs-pre.target. Feb 9 07:14:26.163578 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 07:14:26.163593 systemd[1]: Reached target local-fs.target. Feb 9 07:14:26.172537 systemd[1]: Reached target machines.target. Feb 9 07:14:26.182216 systemd[1]: Starting ldconfig.service... Feb 9 07:14:26.189313 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 07:14:26.189359 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 07:14:26.189983 systemd[1]: Starting systemd-boot-update.service... Feb 9 07:14:26.197014 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 07:14:26.207212 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 07:14:26.207326 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 07:14:26.207356 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 07:14:26.207977 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 07:14:26.208229 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1363 (bootctl) Feb 9 07:14:26.209071 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 07:14:26.228963 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 9 07:14:26.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:26.233718 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 07:14:26.240206 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 07:14:26.248182 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 07:14:26.513598 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 07:14:26.538514 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 9 07:14:26.539724 systemd-networkd[1301]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:74:e8.network. Feb 9 07:14:26.600513 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 07:14:26.716540 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 9 07:14:26.716958 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 07:14:26.741549 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 9 07:14:26.761531 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 07:14:26.780826 systemd-networkd[1301]: bond0: Link UP Feb 9 07:14:26.781014 systemd-networkd[1301]: enp1s0f1np1: Link UP Feb 9 07:14:26.781198 systemd-networkd[1301]: enp1s0f1np1: Gained carrier Feb 9 07:14:26.782201 systemd-networkd[1301]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:74:e8.network. 
Feb 9 07:14:26.830487 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:26.850515 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:26.851410 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 07:14:26.851744 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 07:14:26.856143 systemd-fsck[1371]: fsck.fat 4.2 (2021-01-31) Feb 9 07:14:26.856143 systemd-fsck[1371]: /dev/sda1: 789 files, 115332/258078 clusters Feb 9 07:14:26.870484 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:26.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:26.885732 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 07:14:26.891491 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:26.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:26.909314 systemd[1]: Mounting boot.mount... Feb 9 07:14:26.912486 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:26.925422 systemd[1]: Mounted boot.mount. Feb 9 07:14:26.932485 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:26.951485 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:26.961004 systemd[1]: Finished systemd-boot-update.service. 
Feb 9 07:14:26.970525 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:26.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:26.991377 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 07:14:26.991485 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:27.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:14:27.007314 systemd[1]: Starting audit-rules.service... Feb 9 07:14:27.011484 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:27.025120 systemd[1]: Starting clean-ca-certificates.service... Feb 9 07:14:27.030488 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:27.046087 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 07:14:27.050486 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 9 07:14:27.059000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 07:14:27.059000 audit[1391]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd007e7370 a2=420 a3=0 items=0 ppid=1374 pid=1391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 07:14:27.059000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 07:14:27.060529 augenrules[1391]: No rules Feb 9 07:14:27.070032 systemd[1]: Starting systemd-resolved.service... 
Feb 9 07:14:27.070486 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms
Feb 9 07:14:27.088486 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms
Feb 9 07:14:27.088767 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 07:14:27.103071 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 07:14:27.107540 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms
Feb 9 07:14:27.111614 ldconfig[1362]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 07:14:27.120845 systemd[1]: Finished ldconfig.service.
Feb 9 07:14:27.125542 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms
Feb 9 07:14:27.138701 systemd[1]: Finished audit-rules.service.
Feb 9 07:14:27.143539 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms
Feb 9 07:14:27.156688 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 07:14:27.161538 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms
Feb 9 07:14:27.163236 systemd-networkd[1301]: enp1s0f0np0: Link UP
Feb 9 07:14:27.163391 systemd-networkd[1301]: bond0: Gained carrier
Feb 9 07:14:27.163473 systemd-networkd[1301]: enp1s0f0np0: Gained carrier
Feb 9 07:14:27.175697 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 07:14:27.180519 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms
Feb 9 07:14:27.180547 kernel: bond0: (slave enp1s0f1np1): link status definitely down, disabling slave
Feb 9 07:14:27.180562 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 9 07:14:27.222386 systemd[1]: Starting systemd-update-done.service...
Feb 9 07:14:27.228527 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Feb 9 07:14:27.228560 kernel: bond0: active interface up!
Feb 9 07:14:27.246565 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 07:14:27.246745 systemd-networkd[1301]: enp1s0f1np1: Link DOWN
Feb 9 07:14:27.246748 systemd-networkd[1301]: enp1s0f1np1: Lost carrier
Feb 9 07:14:27.246798 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 07:14:27.254695 systemd[1]: Finished systemd-update-done.service.
Feb 9 07:14:27.264833 systemd[1]: Started systemd-timesyncd.service.
Feb 9 07:14:27.267850 systemd-resolved[1396]: Positive Trust Anchors:
Feb 9 07:14:27.267855 systemd-resolved[1396]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 07:14:27.267874 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 07:14:27.271508 systemd-resolved[1396]: Using system hostname 'ci-3510.3.2-a-29c32a4854'.
Feb 9 07:14:27.272773 systemd[1]: Reached target time-set.target.
Feb 9 07:14:27.390510 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 9 07:14:27.394836 systemd-networkd[1301]: enp1s0f1np1: Link UP
Feb 9 07:14:27.394999 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Feb 9 07:14:27.395080 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Feb 9 07:14:27.395080 systemd-networkd[1301]: enp1s0f1np1: Gained carrier
Feb 9 07:14:27.396037 systemd[1]: Started systemd-resolved.service.
Feb 9 07:14:27.405597 systemd[1]: Reached target network.target.
Feb 9 07:14:27.409822 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Feb 9 07:14:27.409930 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Feb 9 07:14:27.413577 systemd[1]: Reached target nss-lookup.target.
Feb 9 07:14:27.422592 systemd[1]: Reached target sysinit.target.
Feb 9 07:14:27.431635 systemd[1]: Started motdgen.path.
Feb 9 07:14:27.446613 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 07:14:27.451533 kernel: bond0: (slave enp1s0f1np1): link status up, enabling it in 200 ms
Feb 9 07:14:27.451577 kernel: bond0: (slave enp1s0f1np1): invalid new link 3 on slave
Feb 9 07:14:27.475658 systemd[1]: Started logrotate.timer.
Feb 9 07:14:27.482623 systemd[1]: Started mdadm.timer.
Feb 9 07:14:27.489555 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 07:14:27.497577 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 07:14:27.497593 systemd[1]: Reached target paths.target.
Feb 9 07:14:27.504548 systemd[1]: Reached target timers.target.
Feb 9 07:14:27.511691 systemd[1]: Listening on dbus.socket.
Feb 9 07:14:27.519144 systemd[1]: Starting docker.socket...
Feb 9 07:14:27.526950 systemd[1]: Listening on sshd.socket.
Feb 9 07:14:27.533621 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 07:14:27.533856 systemd[1]: Listening on docker.socket.
Feb 9 07:14:27.540612 systemd[1]: Reached target sockets.target.
Feb 9 07:14:27.548553 systemd[1]: Reached target basic.target.
Feb 9 07:14:27.555590 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 07:14:27.555604 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 07:14:27.556057 systemd[1]: Starting containerd.service...
Feb 9 07:14:27.563082 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 9 07:14:27.572031 systemd[1]: Starting coreos-metadata.service...
Feb 9 07:14:27.579032 systemd[1]: Starting dbus.service...
Feb 9 07:14:27.585142 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 07:14:27.589461 jq[1411]: false
Feb 9 07:14:27.591668 coreos-metadata[1404]: Feb 09 07:14:27.591 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 07:14:27.593160 systemd[1]: Starting extend-filesystems.service...
Feb 9 07:14:27.597726 dbus-daemon[1410]: [system] SELinux support is enabled
Feb 9 07:14:27.600461 coreos-metadata[1407]: Feb 09 07:14:27.600 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 07:14:27.600566 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 07:14:27.600633 extend-filesystems[1413]: Found sda
Feb 9 07:14:27.607634 extend-filesystems[1413]: Found sda1
Feb 9 07:14:27.607634 extend-filesystems[1413]: Found sda2
Feb 9 07:14:27.607634 extend-filesystems[1413]: Found sda3
Feb 9 07:14:27.607634 extend-filesystems[1413]: Found usr
Feb 9 07:14:27.607634 extend-filesystems[1413]: Found sda4
Feb 9 07:14:27.607634 extend-filesystems[1413]: Found sda6
Feb 9 07:14:27.607634 extend-filesystems[1413]: Found sda7
Feb 9 07:14:27.607634 extend-filesystems[1413]: Found sda9
Feb 9 07:14:27.607634 extend-filesystems[1413]: Checking size of /dev/sda9
Feb 9 07:14:27.607634 extend-filesystems[1413]: Resized partition /dev/sda9
Feb 9 07:14:27.744529 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Feb 9 07:14:27.744558 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
Feb 9 07:14:27.601152 systemd[1]: Starting motdgen.service...
Feb 9 07:14:27.744640 extend-filesystems[1428]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 07:14:27.623354 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 07:14:27.639190 systemd[1]: Starting prepare-critools.service...
Feb 9 07:14:27.659044 systemd[1]: Starting prepare-helm.service...
Feb 9 07:14:27.678310 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 07:14:27.689029 systemd[1]: Starting sshd-keygen.service...
Feb 9 07:14:27.701757 systemd[1]: Starting systemd-logind.service...
Feb 9 07:14:27.722546 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 07:14:27.723030 systemd[1]: Starting tcsd.service...
Feb 9 07:14:27.725020 systemd-logind[1441]: Watching system buttons on /dev/input/event3 (Power Button)
Feb 9 07:14:27.762255 jq[1444]: true
Feb 9 07:14:27.725031 systemd-logind[1441]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 9 07:14:27.725040 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Feb 9 07:14:27.725182 systemd-logind[1441]: New seat seat0.
Feb 9 07:14:27.731911 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 07:14:27.732264 systemd[1]: Starting update-engine.service...
Feb 9 07:14:27.753082 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 07:14:27.770807 systemd[1]: Started dbus.service.
Feb 9 07:14:27.775534 update_engine[1443]: I0209 07:14:27.775074 1443 main.cc:92] Flatcar Update Engine starting
Feb 9 07:14:27.778373 update_engine[1443]: I0209 07:14:27.778365 1443 update_check_scheduler.cc:74] Next update check in 5m41s
Feb 9 07:14:27.779213 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 07:14:27.779303 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 07:14:27.779469 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 07:14:27.779550 systemd[1]: Finished motdgen.service.
Feb 9 07:14:27.787614 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 07:14:27.787694 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 07:14:27.792589 tar[1446]: ./
Feb 9 07:14:27.792589 tar[1446]: ./loopback
Feb 9 07:14:27.798062 jq[1452]: true
Feb 9 07:14:27.798463 dbus-daemon[1410]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 07:14:27.799744 tar[1447]: crictl
Feb 9 07:14:27.800995 tar[1448]: linux-amd64/helm
Feb 9 07:14:27.803896 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Feb 9 07:14:27.803990 systemd[1]: Condition check resulted in tcsd.service being skipped.
Feb 9 07:14:27.804069 systemd[1]: Started update-engine.service.
Feb 9 07:14:27.809118 env[1453]: time="2024-02-09T07:14:27.809086946Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 07:14:27.812661 tar[1446]: ./bandwidth
Feb 9 07:14:27.816392 systemd[1]: Started systemd-logind.service.
Feb 9 07:14:27.820919 env[1453]: time="2024-02-09T07:14:27.820871613Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 07:14:27.822202 env[1453]: time="2024-02-09T07:14:27.822161611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 07:14:27.822821 env[1453]: time="2024-02-09T07:14:27.822804492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 07:14:27.822821 env[1453]: time="2024-02-09T07:14:27.822819467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 07:14:27.825052 env[1453]: time="2024-02-09T07:14:27.825037796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 07:14:27.825088 env[1453]: time="2024-02-09T07:14:27.825051863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 07:14:27.825088 env[1453]: time="2024-02-09T07:14:27.825065065Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 07:14:27.825088 env[1453]: time="2024-02-09T07:14:27.825072016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 07:14:27.825142 env[1453]: time="2024-02-09T07:14:27.825124270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 07:14:27.825166 bash[1480]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 07:14:27.825272 env[1453]: time="2024-02-09T07:14:27.825261228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 07:14:27.825363 env[1453]: time="2024-02-09T07:14:27.825342014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 07:14:27.825363 env[1453]: time="2024-02-09T07:14:27.825351746Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 07:14:27.825414 env[1453]: time="2024-02-09T07:14:27.825379506Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 07:14:27.825414 env[1453]: time="2024-02-09T07:14:27.825389918Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 07:14:27.826284 systemd[1]: Started locksmithd.service.
Feb 9 07:14:27.832627 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 07:14:27.832723 systemd[1]: Reached target system-config.target.
Feb 9 07:14:27.833092 env[1453]: time="2024-02-09T07:14:27.833077939Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 07:14:27.833117 env[1453]: time="2024-02-09T07:14:27.833100874Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 07:14:27.833136 env[1453]: time="2024-02-09T07:14:27.833112929Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 07:14:27.833155 env[1453]: time="2024-02-09T07:14:27.833136337Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 07:14:27.833155 env[1453]: time="2024-02-09T07:14:27.833149400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 07:14:27.833186 env[1453]: time="2024-02-09T07:14:27.833162769Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 07:14:27.833186 env[1453]: time="2024-02-09T07:14:27.833174064Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 07:14:27.833215 env[1453]: time="2024-02-09T07:14:27.833186540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 07:14:27.833215 env[1453]: time="2024-02-09T07:14:27.833198819Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 07:14:27.833215 env[1453]: time="2024-02-09T07:14:27.833210932Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 07:14:27.833260 env[1453]: time="2024-02-09T07:14:27.833222548Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 07:14:27.833260 env[1453]: time="2024-02-09T07:14:27.833234831Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 07:14:27.833304 env[1453]: time="2024-02-09T07:14:27.833296588Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 07:14:27.833354 env[1453]: time="2024-02-09T07:14:27.833346010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 07:14:27.833504 env[1453]: time="2024-02-09T07:14:27.833494222Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 07:14:27.833525 env[1453]: time="2024-02-09T07:14:27.833510863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833525 env[1453]: time="2024-02-09T07:14:27.833518197Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 07:14:27.833560 env[1453]: time="2024-02-09T07:14:27.833550320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833576 env[1453]: time="2024-02-09T07:14:27.833559646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833576 env[1453]: time="2024-02-09T07:14:27.833566831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833576 env[1453]: time="2024-02-09T07:14:27.833573566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833620 env[1453]: time="2024-02-09T07:14:27.833580121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833620 env[1453]: time="2024-02-09T07:14:27.833590424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833620 env[1453]: time="2024-02-09T07:14:27.833602026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833620 env[1453]: time="2024-02-09T07:14:27.833609116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833681 env[1453]: time="2024-02-09T07:14:27.833620539Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 07:14:27.833700 env[1453]: time="2024-02-09T07:14:27.833685879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833700 env[1453]: time="2024-02-09T07:14:27.833695515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833730 env[1453]: time="2024-02-09T07:14:27.833702520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.833730 env[1453]: time="2024-02-09T07:14:27.833708708Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 07:14:27.833730 env[1453]: time="2024-02-09T07:14:27.833716609Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 07:14:27.833730 env[1453]: time="2024-02-09T07:14:27.833722723Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 07:14:27.833791 env[1453]: time="2024-02-09T07:14:27.833732410Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 07:14:27.834037 env[1453]: time="2024-02-09T07:14:27.833847798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 07:14:27.834179 env[1453]: time="2024-02-09T07:14:27.834089640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834191699Z" level=info msg="Connect containerd service"
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834221890Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834526821Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834589362Z" level=info msg="Start subscribing containerd event"
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834623240Z" level=info msg="Start recovering state"
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834667989Z" level=info msg="Start event monitor"
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834678633Z" level=info msg="Start snapshots syncer"
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834686743Z" level=info msg="Start cni network conf syncer for default"
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834693050Z" level=info msg="Start streaming server"
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834712143Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834743956Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 07:14:27.837436 env[1453]: time="2024-02-09T07:14:27.834769740Z" level=info msg="containerd successfully booted in 0.026117s"
Feb 9 07:14:27.840652 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 07:14:27.840776 systemd[1]: Reached target user-config.target.
Feb 9 07:14:27.845478 tar[1446]: ./ptp
Feb 9 07:14:27.851017 systemd[1]: Started containerd.service.
Feb 9 07:14:27.857858 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 07:14:27.869431 tar[1446]: ./vlan
Feb 9 07:14:27.886364 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 07:14:27.892512 tar[1446]: ./host-device
Feb 9 07:14:27.914877 tar[1446]: ./tuning
Feb 9 07:14:27.934704 tar[1446]: ./vrf
Feb 9 07:14:27.955455 tar[1446]: ./sbr
Feb 9 07:14:27.975755 tar[1446]: ./tap
Feb 9 07:14:27.999205 tar[1446]: ./dhcp
Feb 9 07:14:28.058143 tar[1446]: ./static
Feb 9 07:14:28.063338 tar[1448]: linux-amd64/LICENSE
Feb 9 07:14:28.063380 tar[1448]: linux-amd64/README.md
Feb 9 07:14:28.064051 systemd[1]: Finished prepare-critools.service.
Feb 9 07:14:28.072801 systemd[1]: Finished prepare-helm.service.
Feb 9 07:14:28.074874 tar[1446]: ./firewall
Feb 9 07:14:28.100517 tar[1446]: ./macvlan
Feb 9 07:14:28.108484 kernel: EXT4-fs (sda9): resized filesystem to 116605649
Feb 9 07:14:28.134694 extend-filesystems[1428]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Feb 9 07:14:28.134694 extend-filesystems[1428]: old_desc_blocks = 1, new_desc_blocks = 56
Feb 9 07:14:28.134694 extend-filesystems[1428]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long.
Feb 9 07:14:28.171579 extend-filesystems[1413]: Resized filesystem in /dev/sda9
Feb 9 07:14:28.171579 extend-filesystems[1413]: Found sdb
Feb 9 07:14:28.186574 tar[1446]: ./dummy
Feb 9 07:14:28.186574 tar[1446]: ./bridge
Feb 9 07:14:28.135167 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 07:14:28.135252 systemd[1]: Finished extend-filesystems.service.
Feb 9 07:14:28.198174 tar[1446]: ./ipvlan
Feb 9 07:14:28.221184 tar[1446]: ./portmap
Feb 9 07:14:28.242542 tar[1446]: ./host-local
Feb 9 07:14:28.267129 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 07:14:28.465847 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 07:14:28.477341 systemd[1]: Finished sshd-keygen.service.
Feb 9 07:14:28.485295 systemd[1]: Starting issuegen.service...
Feb 9 07:14:28.492768 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 07:14:28.492840 systemd[1]: Finished issuegen.service.
Feb 9 07:14:28.500235 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 07:14:28.508776 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 07:14:28.517182 systemd[1]: Started getty@tty1.service.
Feb 9 07:14:28.524134 systemd[1]: Started serial-getty@ttyS1.service.
Feb 9 07:14:28.532658 systemd[1]: Reached target getty.target.
Feb 9 07:14:28.835668 systemd-networkd[1301]: bond0: Gained IPv6LL
Feb 9 07:14:28.836004 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Feb 9 07:14:29.220676 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Feb 9 07:14:29.221145 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Feb 9 07:14:29.388696 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0
Feb 9 07:14:32.845785 coreos-metadata[1404]: Feb 09 07:14:32.845 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
Feb 9 07:14:32.846584 coreos-metadata[1407]: Feb 09 07:14:32.845 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
Feb 9 07:14:33.564299 login[1516]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 07:14:33.571291 login[1515]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 07:14:33.576250 systemd-logind[1441]: New session 1 of user core.
Feb 9 07:14:33.577087 systemd[1]: Created slice user-500.slice.
Feb 9 07:14:33.577924 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 07:14:33.579911 systemd-logind[1441]: New session 2 of user core.
Feb 9 07:14:33.585560 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 07:14:33.586561 systemd[1]: Starting user@500.service...
Feb 9 07:14:33.589084 (systemd)[1520]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:14:33.694888 systemd[1520]: Queued start job for default target default.target.
Feb 9 07:14:33.695113 systemd[1520]: Reached target paths.target.
Feb 9 07:14:33.695124 systemd[1520]: Reached target sockets.target.
Feb 9 07:14:33.695131 systemd[1520]: Reached target timers.target.
Feb 9 07:14:33.695138 systemd[1520]: Reached target basic.target.
Feb 9 07:14:33.695157 systemd[1520]: Reached target default.target.
Feb 9 07:14:33.695170 systemd[1520]: Startup finished in 101ms.
Feb 9 07:14:33.695220 systemd[1]: Started user@500.service.
Feb 9 07:14:33.695804 systemd[1]: Started session-1.scope.
Feb 9 07:14:33.696140 systemd[1]: Started session-2.scope.
Feb 9 07:14:33.846178 coreos-metadata[1404]: Feb 09 07:14:33.845 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Feb 9 07:14:33.846929 coreos-metadata[1407]: Feb 09 07:14:33.845 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Feb 9 07:14:33.849919 coreos-metadata[1407]: Feb 09 07:14:33.849 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
Feb 9 07:14:33.853395 coreos-metadata[1404]: Feb 09 07:14:33.853 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
Feb 9 07:14:34.962532 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2
Feb 9 07:14:34.969485 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2
Feb 9 07:14:35.215469 systemd[1]: Created slice system-sshd.slice.
Feb 9 07:14:35.216116 systemd[1]: Started sshd@0-147.75.49.59:22-147.75.109.163:52618.service.
Feb 9 07:14:35.259883 sshd[1541]: Accepted publickey for core from 147.75.109.163 port 52618 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:14:35.260773 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:14:35.264085 systemd-logind[1441]: New session 3 of user core.
Feb 9 07:14:35.264774 systemd[1]: Started session-3.scope.
Feb 9 07:14:35.318347 systemd[1]: Started sshd@1-147.75.49.59:22-147.75.109.163:47566.service.
Feb 9 07:14:35.345982 sshd[1546]: Accepted publickey for core from 147.75.109.163 port 47566 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:14:35.346626 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:14:35.348821 systemd-logind[1441]: New session 4 of user core.
Feb 9 07:14:35.349257 systemd[1]: Started session-4.scope.
Feb 9 07:14:35.400597 sshd[1546]: pam_unix(sshd:session): session closed for user core
Feb 9 07:14:35.402206 systemd[1]: sshd@1-147.75.49.59:22-147.75.109.163:47566.service: Deactivated successfully.
Feb 9 07:14:35.402518 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 07:14:35.402872 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit.
Feb 9 07:14:35.403367 systemd[1]: Started sshd@2-147.75.49.59:22-147.75.109.163:47578.service.
Feb 9 07:14:35.403838 systemd-logind[1441]: Removed session 4.
Feb 9 07:14:35.431775 sshd[1552]: Accepted publickey for core from 147.75.109.163 port 47578 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:14:35.432744 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:14:35.436132 systemd-logind[1441]: New session 5 of user core.
Feb 9 07:14:35.436962 systemd[1]: Started session-5.scope.
Feb 9 07:14:35.492324 sshd[1552]: pam_unix(sshd:session): session closed for user core
Feb 9 07:14:35.493694 systemd[1]: sshd@2-147.75.49.59:22-147.75.109.163:47578.service: Deactivated successfully.
Feb 9 07:14:35.494044 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 07:14:35.494367 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit.
Feb 9 07:14:35.495047 systemd-logind[1441]: Removed session 5.
Feb 9 07:14:35.850432 coreos-metadata[1407]: Feb 09 07:14:35.850 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
Feb 9 07:14:35.853619 coreos-metadata[1404]: Feb 09 07:14:35.853 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
Feb 9 07:14:35.876621 coreos-metadata[1407]: Feb 09 07:14:35.876 INFO Fetch successful
Feb 9 07:14:35.877714 coreos-metadata[1404]: Feb 09 07:14:35.877 INFO Fetch successful
Feb 9 07:14:35.901727 systemd[1]: Finished coreos-metadata.service.
Feb 9 07:14:35.902390 unknown[1404]: wrote ssh authorized keys file for user: core
Feb 9 07:14:35.902441 systemd[1]: Started packet-phone-home.service.
Feb 9 07:14:35.907366 curl[1559]: % Total % Received % Xferd Average Speed Time Time Time Current
Feb 9 07:14:35.907544 curl[1559]: Dload Upload Total Spent Left Speed
Feb 9 07:14:35.912997 update-ssh-keys[1560]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 07:14:35.913194 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 9 07:14:35.913359 systemd[1]: Reached target multi-user.target.
Feb 9 07:14:35.913997 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 07:14:35.917861 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 07:14:35.917934 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 07:14:35.918018 systemd[1]: Startup finished in 1.847s (kernel) + 19.802s (initrd) + 15.284s (userspace) = 36.934s.
Feb 9 07:14:36.107823 curl[1559]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Feb 9 07:14:36.110160 systemd[1]: packet-phone-home.service: Deactivated successfully.
Feb 9 07:14:41.869984 systemd[1]: Started sshd@3-147.75.49.59:22-61.154.122.122:58182.service.
Feb 9 07:14:44.476142 sshd[1563]: Invalid user admin from 61.154.122.122 port 58182
Feb 9 07:14:44.482398 sshd[1563]: pam_faillock(sshd:auth): User unknown
Feb 9 07:14:44.483441 sshd[1563]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:14:44.483551 sshd[1563]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.154.122.122
Feb 9 07:14:44.484422 sshd[1563]: pam_faillock(sshd:auth): User unknown
Feb 9 07:14:45.501895 systemd[1]: Started sshd@4-147.75.49.59:22-147.75.109.163:50138.service.
Feb 9 07:14:45.529876 sshd[1566]: Accepted publickey for core from 147.75.109.163 port 50138 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:14:45.530683 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:14:45.533387 systemd-logind[1441]: New session 6 of user core.
Feb 9 07:14:45.534004 systemd[1]: Started session-6.scope.
Feb 9 07:14:45.587603 sshd[1566]: pam_unix(sshd:session): session closed for user core
Feb 9 07:14:45.589026 systemd[1]: sshd@4-147.75.49.59:22-147.75.109.163:50138.service: Deactivated successfully.
Feb 9 07:14:45.589329 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 07:14:45.589717 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
Feb 9 07:14:45.590199 systemd[1]: Started sshd@5-147.75.49.59:22-147.75.109.163:50148.service.
Feb 9 07:14:45.590657 systemd-logind[1441]: Removed session 6.
Feb 9 07:14:45.619547 sshd[1572]: Accepted publickey for core from 147.75.109.163 port 50148 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:14:45.620501 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:14:45.623607 systemd-logind[1441]: New session 7 of user core.
Feb 9 07:14:45.624285 systemd[1]: Started session-7.scope.
Feb 9 07:14:45.678825 sshd[1572]: pam_unix(sshd:session): session closed for user core
Feb 9 07:14:45.685633 systemd[1]: sshd@5-147.75.49.59:22-147.75.109.163:50148.service: Deactivated successfully.
Feb 9 07:14:45.687159 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 07:14:45.688755 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
Feb 9 07:14:45.691377 systemd[1]: Started sshd@6-147.75.49.59:22-147.75.109.163:50150.service.
Feb 9 07:14:45.693696 systemd-logind[1441]: Removed session 7.
Feb 9 07:14:45.722939 sshd[1578]: Accepted publickey for core from 147.75.109.163 port 50150 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:14:45.723588 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:14:45.725704 systemd-logind[1441]: New session 8 of user core.
Feb 9 07:14:45.726159 systemd[1]: Started session-8.scope.
Feb 9 07:14:45.778802 sshd[1578]: pam_unix(sshd:session): session closed for user core
Feb 9 07:14:45.782426 systemd[1]: sshd@6-147.75.49.59:22-147.75.109.163:50150.service: Deactivated successfully.
Feb 9 07:14:45.783242 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 07:14:45.784322 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
Feb 9 07:14:45.785986 systemd[1]: Started sshd@7-147.75.49.59:22-147.75.109.163:50156.service.
Feb 9 07:14:45.787524 systemd-logind[1441]: Removed session 8.
Feb 9 07:14:45.877300 sshd[1584]: Accepted publickey for core from 147.75.109.163 port 50156 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:14:45.879036 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:14:45.884110 systemd-logind[1441]: New session 9 of user core.
Feb 9 07:14:45.885221 systemd[1]: Started session-9.scope.
Feb 9 07:14:45.968348 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 07:14:45.968957 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 07:14:46.136869 sshd[1563]: Failed password for invalid user admin from 61.154.122.122 port 58182 ssh2
Feb 9 07:14:47.664658 sshd[1592]: pam_faillock(sshd:auth): User unknown
Feb 9 07:14:47.667784 sshd[1563]: Postponed keyboard-interactive for invalid user admin from 61.154.122.122 port 58182 ssh2 [preauth]
Feb 9 07:14:48.200297 sshd[1592]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:14:48.201334 sshd[1592]: pam_faillock(sshd:auth): User unknown
Feb 9 07:14:49.934479 sshd[1563]: PAM: Permission denied for illegal user admin from 61.154.122.122
Feb 9 07:14:49.935362 sshd[1563]: Failed keyboard-interactive/pam for invalid user admin from 61.154.122.122 port 58182 ssh2
Feb 9 07:14:49.993496 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 07:14:49.997580 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 07:14:49.997769 systemd[1]: Reached target network-online.target.
Feb 9 07:14:49.998470 systemd[1]: Starting docker.service...
Feb 9 07:14:50.018900 env[1608]: time="2024-02-09T07:14:50.018868653Z" level=info msg="Starting up"
Feb 9 07:14:50.019616 env[1608]: time="2024-02-09T07:14:50.019570391Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 07:14:50.019616 env[1608]: time="2024-02-09T07:14:50.019585570Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 07:14:50.019616 env[1608]: time="2024-02-09T07:14:50.019605374Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 07:14:50.019616 env[1608]: time="2024-02-09T07:14:50.019613761Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 07:14:50.020626 env[1608]: time="2024-02-09T07:14:50.020585230Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 07:14:50.020626 env[1608]: time="2024-02-09T07:14:50.020596009Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 07:14:50.020626 env[1608]: time="2024-02-09T07:14:50.020607667Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 07:14:50.020626 env[1608]: time="2024-02-09T07:14:50.020617711Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 07:14:50.033180 env[1608]: time="2024-02-09T07:14:50.033134566Z" level=info msg="Loading containers: start."
Feb 9 07:14:50.155568 kernel: Initializing XFRM netlink socket
Feb 9 07:14:50.228944 env[1608]: time="2024-02-09T07:14:50.228892043Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 07:14:50.229832 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Feb 9 07:14:50.275533 systemd-networkd[1301]: docker0: Link UP
Feb 9 07:14:50.280948 env[1608]: time="2024-02-09T07:14:50.280927983Z" level=info msg="Loading containers: done."
Feb 9 07:14:50.287818 env[1608]: time="2024-02-09T07:14:50.287759213Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 07:14:50.287931 env[1608]: time="2024-02-09T07:14:50.287890191Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 07:14:50.287983 env[1608]: time="2024-02-09T07:14:50.287967740Z" level=info msg="Daemon has completed initialization"
Feb 9 07:14:50.298149 systemd[1]: Started docker.service.
Feb 9 07:14:50.304687 env[1608]: time="2024-02-09T07:14:50.304615801Z" level=info msg="API listen on /run/docker.sock"
Feb 9 07:14:50.329761 systemd[1]: Reloading.
Feb 9 07:14:50.387367 /usr/lib/systemd/system-generators/torcx-generator[1762]: time="2024-02-09T07:14:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 07:14:50.387384 /usr/lib/systemd/system-generators/torcx-generator[1762]: time="2024-02-09T07:14:50Z" level=info msg="torcx already run"
Feb 9 07:14:50.407725 sshd[1563]: Connection closed by invalid user admin 61.154.122.122 port 58182 [preauth]
Feb 9 07:14:50.444954 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 07:14:50.444964 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 07:14:50.459803 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 07:14:50.510970 systemd[1]: sshd@3-147.75.49.59:22-61.154.122.122:58182.service: Deactivated successfully.
Feb 9 07:14:50.513079 systemd[1]: Started kubelet.service.
Feb 9 07:14:50.536828 kubelet[1822]: E0209 07:14:50.536796 1822 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 9 07:14:50.538151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 07:14:50.538253 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 07:14:50.781176 systemd-timesyncd[1397]: Contacted time server [2604:2dc0:202:300::13ac]:123 (2.flatcar.pool.ntp.org).
Feb 9 07:14:50.781323 systemd-timesyncd[1397]: Initial clock synchronization to Fri 2024-02-09 07:14:50.559043 UTC.
Feb 9 07:14:51.124167 env[1453]: time="2024-02-09T07:14:51.124094238Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\""
Feb 9 07:14:51.798515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1412417581.mount: Deactivated successfully.
Feb 9 07:14:53.004631 env[1453]: time="2024-02-09T07:14:53.004567454Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:53.005284 env[1453]: time="2024-02-09T07:14:53.005242545Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:53.006282 env[1453]: time="2024-02-09T07:14:53.006240213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:53.007600 env[1453]: time="2024-02-09T07:14:53.007548160Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:53.007894 env[1453]: time="2024-02-09T07:14:53.007882023Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\""
Feb 9 07:14:53.013261 env[1453]: time="2024-02-09T07:14:53.013234476Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\""
Feb 9 07:14:54.850067 env[1453]: time="2024-02-09T07:14:54.850030368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:54.850951 env[1453]: time="2024-02-09T07:14:54.850904235Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:54.851903 env[1453]: time="2024-02-09T07:14:54.851859353Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:54.853232 env[1453]: time="2024-02-09T07:14:54.853180559Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:54.853649 env[1453]: time="2024-02-09T07:14:54.853595925Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\""
Feb 9 07:14:54.859452 env[1453]: time="2024-02-09T07:14:54.859415407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb 9 07:14:56.136250 env[1453]: time="2024-02-09T07:14:56.136199830Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:56.136868 env[1453]: time="2024-02-09T07:14:56.136854642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:56.137849 env[1453]: time="2024-02-09T07:14:56.137803017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:56.139152 env[1453]: time="2024-02-09T07:14:56.139090497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:56.139451 env[1453]: time="2024-02-09T07:14:56.139437897Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\""
Feb 9 07:14:56.145057 env[1453]: time="2024-02-09T07:14:56.145024829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 9 07:14:57.088815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount557225011.mount: Deactivated successfully.
Feb 9 07:14:57.582909 env[1453]: time="2024-02-09T07:14:57.582861388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:57.583481 env[1453]: time="2024-02-09T07:14:57.583434646Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:57.583999 env[1453]: time="2024-02-09T07:14:57.583947356Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:57.584990 env[1453]: time="2024-02-09T07:14:57.584956603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:57.585169 env[1453]: time="2024-02-09T07:14:57.585136111Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\""
Feb 9 07:14:57.591003 env[1453]: time="2024-02-09T07:14:57.590964438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 07:14:58.114205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182620015.mount: Deactivated successfully.
Feb 9 07:14:58.116062 env[1453]: time="2024-02-09T07:14:58.116013356Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:58.116730 env[1453]: time="2024-02-09T07:14:58.116719424Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:58.117370 env[1453]: time="2024-02-09T07:14:58.117359602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:58.118163 env[1453]: time="2024-02-09T07:14:58.118123702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:14:58.118486 env[1453]: time="2024-02-09T07:14:58.118445867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 9 07:14:58.123623 env[1453]: time="2024-02-09T07:14:58.123595278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\""
Feb 9 07:14:58.696940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677244911.mount: Deactivated successfully.
Feb 9 07:15:00.593992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 07:15:00.594503 systemd[1]: Stopped kubelet.service.
Feb 9 07:15:00.597989 systemd[1]: Started kubelet.service.
Feb 9 07:15:00.623189 kubelet[1915]: E0209 07:15:00.623122 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 9 07:15:00.625235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 07:15:00.625302 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 07:15:02.826853 env[1453]: time="2024-02-09T07:15:02.826798356Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:15:02.827469 env[1453]: time="2024-02-09T07:15:02.827432820Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:15:02.828673 env[1453]: time="2024-02-09T07:15:02.828625881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:15:02.829703 env[1453]: time="2024-02-09T07:15:02.829625380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:15:02.830172 env[1453]: time="2024-02-09T07:15:02.830127165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\""
Feb 9 07:15:02.835355 env[1453]: time="2024-02-09T07:15:02.835342076Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 9 07:15:03.403222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147790857.mount: Deactivated successfully.
Feb 9 07:15:03.853354 env[1453]: time="2024-02-09T07:15:03.853296520Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:15:03.853959 env[1453]: time="2024-02-09T07:15:03.853920609Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:15:03.854741 env[1453]: time="2024-02-09T07:15:03.854696146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:15:03.855416 env[1453]: time="2024-02-09T07:15:03.855377293Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:15:03.855791 env[1453]: time="2024-02-09T07:15:03.855737787Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Feb 9 07:15:05.396157 systemd[1]: Stopped kubelet.service.
Feb 9 07:15:05.407599 systemd[1]: Reloading.
Feb 9 07:15:05.443653 /usr/lib/systemd/system-generators/torcx-generator[2076]: time="2024-02-09T07:15:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 07:15:05.443672 /usr/lib/systemd/system-generators/torcx-generator[2076]: time="2024-02-09T07:15:05Z" level=info msg="torcx already run"
Feb 9 07:15:05.501632 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 07:15:05.501641 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 07:15:05.516623 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 07:15:05.571457 systemd[1]: Started kubelet.service.
Feb 9 07:15:05.592895 kubelet[2136]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 07:15:05.592895 kubelet[2136]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 07:15:05.592895 kubelet[2136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 07:15:05.592895 kubelet[2136]: I0209 07:15:05.592886 2136 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 07:15:05.757508 kubelet[2136]: I0209 07:15:05.757450 2136 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 9 07:15:05.757508 kubelet[2136]: I0209 07:15:05.757477 2136 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 07:15:05.757672 kubelet[2136]: I0209 07:15:05.757625 2136 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 9 07:15:05.759605 kubelet[2136]: I0209 07:15:05.759540 2136 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 07:15:05.760324 kubelet[2136]: E0209 07:15:05.760295 2136 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.49.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.49.59:6443: connect: connection refused
Feb 9 07:15:05.780976 kubelet[2136]: I0209 07:15:05.780939 2136 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 07:15:05.781130 kubelet[2136]: I0209 07:15:05.781101 2136 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 07:15:05.781256 kubelet[2136]: I0209 07:15:05.781220 2136 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 9 07:15:05.781256 kubelet[2136]: I0209 07:15:05.781232 2136 topology_manager.go:138] "Creating topology manager with none policy"
Feb 9 07:15:05.781256 kubelet[2136]: I0209 07:15:05.781237 2136 container_manager_linux.go:301] "Creating device plugin manager"
Feb 9 07:15:05.781369 kubelet[2136]: I0209 07:15:05.781287 2136 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 07:15:05.781369 kubelet[2136]: I0209 07:15:05.781343 2136 kubelet.go:393] "Attempting to sync node with API server"
Feb 9 07:15:05.781369 kubelet[2136]: I0209 07:15:05.781351 2136 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 07:15:05.781369 kubelet[2136]: I0209 07:15:05.781363 2136 kubelet.go:309] "Adding apiserver pod source"
Feb 9 07:15:05.781457 kubelet[2136]: I0209 07:15:05.781371 2136 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 07:15:05.781751 kubelet[2136]: I0209 07:15:05.781715 2136 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 07:15:05.781751 kubelet[2136]: W0209 07:15:05.781739 2136 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://147.75.49.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused
Feb 9 07:15:05.781805 kubelet[2136]: W0209 07:15:05.781737 2136 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://147.75.49.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-29c32a4854&limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused
Feb 9 07:15:05.781805 kubelet[2136]: E0209 07:15:05.781765 2136 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.49.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused
Feb 9 07:15:05.781805 kubelet[2136]: E0209 07:15:05.781769 2136 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.49.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-29c32a4854&limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused
Feb 9 07:15:05.781859 kubelet[2136]: W0209 07:15:05.781843 2136 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 07:15:05.782173 kubelet[2136]: I0209 07:15:05.782131 2136 server.go:1232] "Started kubelet"
Feb 9 07:15:05.782208 kubelet[2136]: I0209 07:15:05.782191 2136 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 07:15:05.782208 kubelet[2136]: I0209 07:15:05.782200 2136 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 07:15:05.782385 kubelet[2136]: E0209 07:15:05.782306 2136 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-29c32a4854.17b220775e4ca62c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-29c32a4854", UID:"ci-3510.3.2-a-29c32a4854", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-29c32a4854"}, FirstTimestamp:time.Date(2024, time.February, 9, 7, 15, 5, 782117932, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 7, 15, 5, 782117932, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-29c32a4854"}': 'Post "https://147.75.49.59:6443/api/v1/namespaces/default/events": dial tcp 147.75.49.59:6443: connect: connection refused'(may retry after sleeping)
Feb 9 07:15:05.782385 kubelet[2136]: I0209 07:15:05.782372 2136 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 9 07:15:05.782459 kubelet[2136]: E0209 07:15:05.782406 2136 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 07:15:05.782459 kubelet[2136]: E0209 07:15:05.782417 2136 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 07:15:05.782798 kubelet[2136]: I0209 07:15:05.782764 2136 server.go:462] "Adding debug handlers to kubelet server"
Feb 9 07:15:05.791989 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 07:15:05.792039 kubelet[2136]: I0209 07:15:05.792007 2136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 07:15:05.792081 kubelet[2136]: I0209 07:15:05.792074 2136 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 07:15:05.792126 kubelet[2136]: E0209 07:15:05.792116 2136 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-29c32a4854\" not found" Feb 9 07:15:05.792157 kubelet[2136]: I0209 07:15:05.792125 2136 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 07:15:05.792204 kubelet[2136]: I0209 07:15:05.792193 2136 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 07:15:05.792286 kubelet[2136]: E0209 07:15:05.792276 2136 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.49.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-29c32a4854?timeout=10s\": dial tcp 147.75.49.59:6443: connect: connection refused" interval="200ms" Feb 9 07:15:05.792346 kubelet[2136]: W0209 07:15:05.792324 2136 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://147.75.49.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 9 07:15:05.792379 kubelet[2136]: E0209 07:15:05.792352 2136 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.49.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 9 07:15:05.799408 kubelet[2136]: I0209 07:15:05.799392 2136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 07:15:05.799879 kubelet[2136]: I0209 07:15:05.799868 2136 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 07:15:05.799935 kubelet[2136]: I0209 07:15:05.799884 2136 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 07:15:05.799935 kubelet[2136]: I0209 07:15:05.799909 2136 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 07:15:05.799987 kubelet[2136]: E0209 07:15:05.799940 2136 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 07:15:05.800183 kubelet[2136]: W0209 07:15:05.800169 2136 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.49.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 9 07:15:05.800222 kubelet[2136]: E0209 07:15:05.800197 2136 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.49.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 9 07:15:05.803792 kubelet[2136]: I0209 07:15:05.803782 2136 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 07:15:05.803792 kubelet[2136]: I0209 07:15:05.803791 2136 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 07:15:05.803848 kubelet[2136]: I0209 07:15:05.803799 2136 state_mem.go:36] "Initialized new in-memory state store" Feb 9 07:15:05.804557 kubelet[2136]: I0209 07:15:05.804549 2136 policy_none.go:49] "None policy: Start" Feb 9 07:15:05.804778 kubelet[2136]: I0209 07:15:05.804770 2136 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 07:15:05.804815 kubelet[2136]: I0209 07:15:05.804780 2136 state_mem.go:35] "Initializing new in-memory state store" Feb 9 07:15:05.807013 systemd[1]: Created slice kubepods.slice. 
Feb 9 07:15:05.809113 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 07:15:05.810437 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 07:15:05.832077 kubelet[2136]: I0209 07:15:05.832038 2136 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 07:15:05.832197 kubelet[2136]: I0209 07:15:05.832155 2136 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 07:15:05.832437 kubelet[2136]: E0209 07:15:05.832427 2136 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-29c32a4854\" not found" Feb 9 07:15:05.896832 kubelet[2136]: I0209 07:15:05.896734 2136 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.897462 kubelet[2136]: E0209 07:15:05.897390 2136 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.49.59:6443/api/v1/nodes\": dial tcp 147.75.49.59:6443: connect: connection refused" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.900684 kubelet[2136]: I0209 07:15:05.900593 2136 topology_manager.go:215] "Topology Admit Handler" podUID="54dc99d30362578c1d17c8c6de22d26c" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.903849 kubelet[2136]: I0209 07:15:05.903760 2136 topology_manager.go:215] "Topology Admit Handler" podUID="448b9b6ca623b703cfd546abd471cb79" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.907181 kubelet[2136]: I0209 07:15:05.907122 2136 topology_manager.go:215] "Topology Admit Handler" podUID="4963488af92f13b967524418b3f2ce0a" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.919389 systemd[1]: Created slice kubepods-burstable-pod54dc99d30362578c1d17c8c6de22d26c.slice. 
Feb 9 07:15:05.954964 systemd[1]: Created slice kubepods-burstable-pod448b9b6ca623b703cfd546abd471cb79.slice. Feb 9 07:15:05.979632 systemd[1]: Created slice kubepods-burstable-pod4963488af92f13b967524418b3f2ce0a.slice. Feb 9 07:15:05.993465 kubelet[2136]: E0209 07:15:05.993409 2136 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.49.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-29c32a4854?timeout=10s\": dial tcp 147.75.49.59:6443: connect: connection refused" interval="400ms" Feb 9 07:15:05.993742 kubelet[2136]: I0209 07:15:05.993553 2136 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54dc99d30362578c1d17c8c6de22d26c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-29c32a4854\" (UID: \"54dc99d30362578c1d17c8c6de22d26c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.993742 kubelet[2136]: I0209 07:15:05.993642 2136 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54dc99d30362578c1d17c8c6de22d26c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-29c32a4854\" (UID: \"54dc99d30362578c1d17c8c6de22d26c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.993742 kubelet[2136]: I0209 07:15:05.993704 2136 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.994189 kubelet[2136]: I0209 07:15:05.993761 2136 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.994189 kubelet[2136]: I0209 07:15:05.993822 2136 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.994189 kubelet[2136]: I0209 07:15:05.993878 2136 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4963488af92f13b967524418b3f2ce0a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-29c32a4854\" (UID: \"4963488af92f13b967524418b3f2ce0a\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.994189 kubelet[2136]: I0209 07:15:05.993931 2136 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54dc99d30362578c1d17c8c6de22d26c-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-29c32a4854\" (UID: \"54dc99d30362578c1d17c8c6de22d26c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:05.994189 kubelet[2136]: I0209 07:15:05.993985 2136 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 
07:15:05.994673 kubelet[2136]: I0209 07:15:05.994043 2136 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:06.102131 kubelet[2136]: I0209 07:15:06.102080 2136 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:06.102850 kubelet[2136]: E0209 07:15:06.102799 2136 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.49.59:6443/api/v1/nodes\": dial tcp 147.75.49.59:6443: connect: connection refused" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:06.250619 env[1453]: time="2024-02-09T07:15:06.250451933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-29c32a4854,Uid:54dc99d30362578c1d17c8c6de22d26c,Namespace:kube-system,Attempt:0,}" Feb 9 07:15:06.275233 env[1453]: time="2024-02-09T07:15:06.275096051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-29c32a4854,Uid:448b9b6ca623b703cfd546abd471cb79,Namespace:kube-system,Attempt:0,}" Feb 9 07:15:06.285368 env[1453]: time="2024-02-09T07:15:06.285255661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-29c32a4854,Uid:4963488af92f13b967524418b3f2ce0a,Namespace:kube-system,Attempt:0,}" Feb 9 07:15:06.394732 kubelet[2136]: E0209 07:15:06.394555 2136 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.49.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-29c32a4854?timeout=10s\": dial tcp 147.75.49.59:6443: connect: connection refused" interval="800ms" Feb 9 07:15:06.507061 kubelet[2136]: I0209 07:15:06.507000 2136 
kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:06.507807 kubelet[2136]: E0209 07:15:06.507745 2136 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.49.59:6443/api/v1/nodes\": dial tcp 147.75.49.59:6443: connect: connection refused" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:06.739984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3879560752.mount: Deactivated successfully. Feb 9 07:15:06.741053 env[1453]: time="2024-02-09T07:15:06.741008167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.742083 env[1453]: time="2024-02-09T07:15:06.742043778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.742606 env[1453]: time="2024-02-09T07:15:06.742572295Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.743270 env[1453]: time="2024-02-09T07:15:06.743227837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.744055 env[1453]: time="2024-02-09T07:15:06.744003931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.745316 env[1453]: time="2024-02-09T07:15:06.745271052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.745742 env[1453]: time="2024-02-09T07:15:06.745700977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.747064 env[1453]: time="2024-02-09T07:15:06.747017905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.747438 env[1453]: time="2024-02-09T07:15:06.747404093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.748901 env[1453]: time="2024-02-09T07:15:06.748884770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.749707 env[1453]: time="2024-02-09T07:15:06.749694646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.750085 env[1453]: time="2024-02-09T07:15:06.750073954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:06.758468 env[1453]: time="2024-02-09T07:15:06.758432352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:15:06.758468 env[1453]: time="2024-02-09T07:15:06.758454712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:15:06.758468 env[1453]: time="2024-02-09T07:15:06.758465601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:15:06.758601 env[1453]: time="2024-02-09T07:15:06.758547451Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/641d2ae1fe30ba05387fa18bd63ad01baba97dde4d60510ca35ab222329c9add pid=2196 runtime=io.containerd.runc.v2 Feb 9 07:15:06.758632 env[1453]: time="2024-02-09T07:15:06.758604842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:15:06.758632 env[1453]: time="2024-02-09T07:15:06.758622496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:15:06.758674 env[1453]: time="2024-02-09T07:15:06.758629346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:15:06.758703 env[1453]: time="2024-02-09T07:15:06.758689735Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b80b56748537b00a0ca514e05b119e2b14709f51a57b4fb81fd711dfe8d1cc9 pid=2197 runtime=io.containerd.runc.v2 Feb 9 07:15:06.759101 env[1453]: time="2024-02-09T07:15:06.759077396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:15:06.759101 env[1453]: time="2024-02-09T07:15:06.759094346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:15:06.759172 env[1453]: time="2024-02-09T07:15:06.759101152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:15:06.759227 env[1453]: time="2024-02-09T07:15:06.759194629Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4e6a94275bd46f7e700d51b8c01d1f7361ce38430f0adce382950edbcba0760 pid=2209 runtime=io.containerd.runc.v2 Feb 9 07:15:06.764585 systemd[1]: Started cri-containerd-b4e6a94275bd46f7e700d51b8c01d1f7361ce38430f0adce382950edbcba0760.scope. Feb 9 07:15:06.775713 systemd[1]: Started cri-containerd-3b80b56748537b00a0ca514e05b119e2b14709f51a57b4fb81fd711dfe8d1cc9.scope. Feb 9 07:15:06.776297 systemd[1]: Started cri-containerd-641d2ae1fe30ba05387fa18bd63ad01baba97dde4d60510ca35ab222329c9add.scope. Feb 9 07:15:06.798805 env[1453]: time="2024-02-09T07:15:06.798775987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-29c32a4854,Uid:4963488af92f13b967524418b3f2ce0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"641d2ae1fe30ba05387fa18bd63ad01baba97dde4d60510ca35ab222329c9add\"" Feb 9 07:15:06.798902 env[1453]: time="2024-02-09T07:15:06.798823430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-29c32a4854,Uid:448b9b6ca623b703cfd546abd471cb79,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4e6a94275bd46f7e700d51b8c01d1f7361ce38430f0adce382950edbcba0760\"" Feb 9 07:15:06.801031 env[1453]: time="2024-02-09T07:15:06.801016380Z" level=info msg="CreateContainer within sandbox \"641d2ae1fe30ba05387fa18bd63ad01baba97dde4d60510ca35ab222329c9add\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 07:15:06.801081 env[1453]: time="2024-02-09T07:15:06.801041717Z" level=info msg="CreateContainer within sandbox \"b4e6a94275bd46f7e700d51b8c01d1f7361ce38430f0adce382950edbcba0760\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 07:15:06.809762 env[1453]: time="2024-02-09T07:15:06.809742922Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-29c32a4854,Uid:54dc99d30362578c1d17c8c6de22d26c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b80b56748537b00a0ca514e05b119e2b14709f51a57b4fb81fd711dfe8d1cc9\"" Feb 9 07:15:06.810841 env[1453]: time="2024-02-09T07:15:06.810825270Z" level=info msg="CreateContainer within sandbox \"b4e6a94275bd46f7e700d51b8c01d1f7361ce38430f0adce382950edbcba0760\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d93ebc895c5e40cf353eb00d6055d89d06929b980cf2aab050a9ff4f35fc7dd2\"" Feb 9 07:15:06.810959 env[1453]: time="2024-02-09T07:15:06.810946800Z" level=info msg="CreateContainer within sandbox \"3b80b56748537b00a0ca514e05b119e2b14709f51a57b4fb81fd711dfe8d1cc9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 07:15:06.810990 env[1453]: time="2024-02-09T07:15:06.810975750Z" level=info msg="CreateContainer within sandbox \"641d2ae1fe30ba05387fa18bd63ad01baba97dde4d60510ca35ab222329c9add\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c32e040b3c4013cc1eceabdb02ecc56627d35a2adeb46ab5dc4946d2038e7e9\"" Feb 9 07:15:06.811052 env[1453]: time="2024-02-09T07:15:06.811042245Z" level=info msg="StartContainer for \"d93ebc895c5e40cf353eb00d6055d89d06929b980cf2aab050a9ff4f35fc7dd2\"" Feb 9 07:15:06.811132 env[1453]: time="2024-02-09T07:15:06.811114898Z" level=info msg="StartContainer for \"3c32e040b3c4013cc1eceabdb02ecc56627d35a2adeb46ab5dc4946d2038e7e9\"" Feb 9 07:15:06.816546 env[1453]: time="2024-02-09T07:15:06.816493912Z" level=info msg="CreateContainer within sandbox \"3b80b56748537b00a0ca514e05b119e2b14709f51a57b4fb81fd711dfe8d1cc9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"031f1a8978222a75ef5af6569011508721f834cce382989655b8718cbd26fdc8\"" Feb 9 07:15:06.816768 env[1453]: time="2024-02-09T07:15:06.816756937Z" level=info msg="StartContainer for 
\"031f1a8978222a75ef5af6569011508721f834cce382989655b8718cbd26fdc8\"" Feb 9 07:15:06.831178 systemd[1]: Started cri-containerd-3c32e040b3c4013cc1eceabdb02ecc56627d35a2adeb46ab5dc4946d2038e7e9.scope. Feb 9 07:15:06.831815 systemd[1]: Started cri-containerd-d93ebc895c5e40cf353eb00d6055d89d06929b980cf2aab050a9ff4f35fc7dd2.scope. Feb 9 07:15:06.835464 systemd[1]: Started cri-containerd-031f1a8978222a75ef5af6569011508721f834cce382989655b8718cbd26fdc8.scope. Feb 9 07:15:06.856820 env[1453]: time="2024-02-09T07:15:06.856793738Z" level=info msg="StartContainer for \"d93ebc895c5e40cf353eb00d6055d89d06929b980cf2aab050a9ff4f35fc7dd2\" returns successfully" Feb 9 07:15:06.866379 env[1453]: time="2024-02-09T07:15:06.866353404Z" level=info msg="StartContainer for \"3c32e040b3c4013cc1eceabdb02ecc56627d35a2adeb46ab5dc4946d2038e7e9\" returns successfully" Feb 9 07:15:06.872779 env[1453]: time="2024-02-09T07:15:06.872752622Z" level=info msg="StartContainer for \"031f1a8978222a75ef5af6569011508721f834cce382989655b8718cbd26fdc8\" returns successfully" Feb 9 07:15:07.309565 kubelet[2136]: I0209 07:15:07.309501 2136 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:07.628580 kubelet[2136]: E0209 07:15:07.628561 2136 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-29c32a4854\" not found" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:07.727624 kubelet[2136]: I0209 07:15:07.727537 2136 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:07.782947 kubelet[2136]: I0209 07:15:07.782861 2136 apiserver.go:52] "Watching apiserver" Feb 9 07:15:07.792658 kubelet[2136]: I0209 07:15:07.792607 2136 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 07:15:07.810402 kubelet[2136]: E0209 07:15:07.810371 2136 kubelet.go:1890] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-3510.3.2-a-29c32a4854\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:07.810402 kubelet[2136]: E0209 07:15:07.810373 2136 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-29c32a4854\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:07.810402 kubelet[2136]: E0209 07:15:07.810380 2136 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:08.819780 kubelet[2136]: W0209 07:15:08.819714 2136 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 07:15:08.820980 kubelet[2136]: W0209 07:15:08.820867 2136 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 07:15:08.820980 kubelet[2136]: W0209 07:15:08.820957 2136 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 07:15:10.683068 systemd[1]: Reloading. 
Feb 9 07:15:10.718372 /usr/lib/systemd/system-generators/torcx-generator[2470]: time="2024-02-09T07:15:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 07:15:10.718389 /usr/lib/systemd/system-generators/torcx-generator[2470]: time="2024-02-09T07:15:10Z" level=info msg="torcx already run" Feb 9 07:15:10.770449 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 07:15:10.770456 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 07:15:10.782830 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 07:15:10.847848 systemd[1]: Stopping kubelet.service... Feb 9 07:15:10.847962 kubelet[2136]: I0209 07:15:10.847868 2136 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 07:15:10.866969 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 07:15:10.867069 systemd[1]: Stopped kubelet.service. Feb 9 07:15:10.867962 systemd[1]: Started kubelet.service. Feb 9 07:15:10.891186 kubelet[2531]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 07:15:10.891186 kubelet[2531]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Feb 9 07:15:10.891186 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 07:15:10.891186 kubelet[2531]: I0209 07:15:10.891148 2531 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 07:15:10.893733 kubelet[2531]: I0209 07:15:10.893691 2531 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 07:15:10.893733 kubelet[2531]: I0209 07:15:10.893704 2531 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 07:15:10.893820 kubelet[2531]: I0209 07:15:10.893814 2531 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 07:15:10.894737 kubelet[2531]: I0209 07:15:10.894711 2531 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 07:15:10.895259 kubelet[2531]: I0209 07:15:10.895250 2531 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 07:15:10.915281 kubelet[2531]: I0209 07:15:10.915242 2531 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 07:15:10.915357 kubelet[2531]: I0209 07:15:10.915351 2531 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 07:15:10.915436 kubelet[2531]: I0209 07:15:10.915431 2531 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 07:15:10.915504 kubelet[2531]: I0209 07:15:10.915440 2531 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 07:15:10.915504 kubelet[2531]: I0209 07:15:10.915446 2531 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 07:15:10.915504 kubelet[2531]: I0209 
07:15:10.915466 2531 state_mem.go:36] "Initialized new in-memory state store" Feb 9 07:15:10.915564 kubelet[2531]: I0209 07:15:10.915517 2531 kubelet.go:393] "Attempting to sync node with API server" Feb 9 07:15:10.915564 kubelet[2531]: I0209 07:15:10.915526 2531 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 07:15:10.915564 kubelet[2531]: I0209 07:15:10.915538 2531 kubelet.go:309] "Adding apiserver pod source" Feb 9 07:15:10.915564 kubelet[2531]: I0209 07:15:10.915547 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 07:15:10.915867 kubelet[2531]: I0209 07:15:10.915855 2531 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 07:15:10.916573 kubelet[2531]: I0209 07:15:10.916560 2531 server.go:1232] "Started kubelet" Feb 9 07:15:10.916744 kubelet[2531]: I0209 07:15:10.916718 2531 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 07:15:10.916744 kubelet[2531]: I0209 07:15:10.916722 2531 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 07:15:10.917007 kubelet[2531]: I0209 07:15:10.916995 2531 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 07:15:10.917050 kubelet[2531]: E0209 07:15:10.917037 2531 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 07:15:10.917074 kubelet[2531]: E0209 07:15:10.917055 2531 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 07:15:10.917788 kubelet[2531]: I0209 07:15:10.917780 2531 server.go:462] "Adding debug handlers to kubelet server" Feb 9 07:15:10.917888 kubelet[2531]: I0209 07:15:10.917880 2531 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 07:15:10.917946 kubelet[2531]: I0209 07:15:10.917939 2531 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 07:15:10.917984 kubelet[2531]: E0209 07:15:10.917953 2531 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-29c32a4854\" not found" Feb 9 07:15:10.917984 kubelet[2531]: I0209 07:15:10.917962 2531 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 07:15:10.918085 kubelet[2531]: I0209 07:15:10.918075 2531 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 07:15:10.922250 kubelet[2531]: I0209 07:15:10.922230 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 07:15:10.922800 kubelet[2531]: I0209 07:15:10.922789 2531 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 07:15:10.922867 kubelet[2531]: I0209 07:15:10.922810 2531 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 07:15:10.922867 kubelet[2531]: I0209 07:15:10.922825 2531 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 07:15:10.922867 kubelet[2531]: E0209 07:15:10.922859 2531 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 07:15:10.937130 kubelet[2531]: I0209 07:15:10.937081 2531 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 07:15:10.937130 kubelet[2531]: I0209 07:15:10.937094 2531 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 07:15:10.937130 kubelet[2531]: I0209 07:15:10.937103 2531 state_mem.go:36] "Initialized new in-memory state store" Feb 9 07:15:10.937247 kubelet[2531]: I0209 07:15:10.937187 2531 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 07:15:10.937247 kubelet[2531]: I0209 07:15:10.937200 2531 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 07:15:10.937247 kubelet[2531]: I0209 07:15:10.937204 2531 policy_none.go:49] "None policy: Start" Feb 9 07:15:10.937458 kubelet[2531]: I0209 07:15:10.937452 2531 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 07:15:10.937489 kubelet[2531]: I0209 07:15:10.937463 2531 state_mem.go:35] "Initializing new in-memory state store" Feb 9 07:15:10.937543 kubelet[2531]: I0209 07:15:10.937538 2531 state_mem.go:75] "Updated machine memory state" Feb 9 07:15:10.939220 kubelet[2531]: I0209 07:15:10.939175 2531 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 07:15:10.939294 kubelet[2531]: I0209 07:15:10.939285 2531 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 07:15:11.023617 kubelet[2531]: I0209 07:15:11.023538 2531 topology_manager.go:215] "Topology Admit Handler" 
podUID="448b9b6ca623b703cfd546abd471cb79" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.023922 kubelet[2531]: I0209 07:15:11.023846 2531 topology_manager.go:215] "Topology Admit Handler" podUID="4963488af92f13b967524418b3f2ce0a" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.024082 kubelet[2531]: I0209 07:15:11.023999 2531 topology_manager.go:215] "Topology Admit Handler" podUID="54dc99d30362578c1d17c8c6de22d26c" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.024338 kubelet[2531]: I0209 07:15:11.024296 2531 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.034155 kubelet[2531]: W0209 07:15:11.034071 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 07:15:11.034360 kubelet[2531]: W0209 07:15:11.034184 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 07:15:11.034360 kubelet[2531]: W0209 07:15:11.034250 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 07:15:11.034360 kubelet[2531]: E0209 07:15:11.034254 2531 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-29c32a4854\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.034775 kubelet[2531]: E0209 07:15:11.034367 2531 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-29c32a4854\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.034775 kubelet[2531]: E0209 07:15:11.034398 2531 kubelet.go:1890] "Failed 
creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.037096 kubelet[2531]: I0209 07:15:11.037015 2531 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.037319 kubelet[2531]: I0209 07:15:11.037165 2531 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.096084 sudo[2573]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 07:15:11.096681 sudo[2573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 07:15:11.219416 kubelet[2531]: I0209 07:15:11.219349 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.219416 kubelet[2531]: I0209 07:15:11.219374 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.219416 kubelet[2531]: I0209 07:15:11.219388 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4963488af92f13b967524418b3f2ce0a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-29c32a4854\" (UID: \"4963488af92f13b967524418b3f2ce0a\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-29c32a4854" Feb 9 
07:15:11.219416 kubelet[2531]: I0209 07:15:11.219399 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54dc99d30362578c1d17c8c6de22d26c-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-29c32a4854\" (UID: \"54dc99d30362578c1d17c8c6de22d26c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.219416 kubelet[2531]: I0209 07:15:11.219411 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54dc99d30362578c1d17c8c6de22d26c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-29c32a4854\" (UID: \"54dc99d30362578c1d17c8c6de22d26c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.219653 kubelet[2531]: I0209 07:15:11.219442 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.219653 kubelet[2531]: I0209 07:15:11.219497 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.219653 kubelet[2531]: I0209 07:15:11.219526 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/448b9b6ca623b703cfd546abd471cb79-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510.3.2-a-29c32a4854\" (UID: \"448b9b6ca623b703cfd546abd471cb79\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.219653 kubelet[2531]: I0209 07:15:11.219566 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54dc99d30362578c1d17c8c6de22d26c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-29c32a4854\" (UID: \"54dc99d30362578c1d17c8c6de22d26c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-29c32a4854" Feb 9 07:15:11.468814 sudo[2573]: pam_unix(sudo:session): session closed for user root Feb 9 07:15:11.916373 kubelet[2531]: I0209 07:15:11.916353 2531 apiserver.go:52] "Watching apiserver" Feb 9 07:15:11.919018 kubelet[2531]: I0209 07:15:11.919006 2531 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 07:15:11.938495 kubelet[2531]: I0209 07:15:11.938471 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-29c32a4854" podStartSLOduration=3.938446988 podCreationTimestamp="2024-02-09 07:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:15:11.938423971 +0000 UTC m=+1.068722531" watchObservedRunningTime="2024-02-09 07:15:11.938446988 +0000 UTC m=+1.068745545" Feb 9 07:15:11.942660 kubelet[2531]: I0209 07:15:11.942621 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-29c32a4854" podStartSLOduration=3.942603712 podCreationTimestamp="2024-02-09 07:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:15:11.942602809 +0000 UTC m=+1.072901370" watchObservedRunningTime="2024-02-09 
07:15:11.942603712 +0000 UTC m=+1.072902268" Feb 9 07:15:11.947168 kubelet[2531]: I0209 07:15:11.947154 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-29c32a4854" podStartSLOduration=3.947113673 podCreationTimestamp="2024-02-09 07:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:15:11.94704855 +0000 UTC m=+1.077347109" watchObservedRunningTime="2024-02-09 07:15:11.947113673 +0000 UTC m=+1.077412233" Feb 9 07:15:12.519695 sudo[1587]: pam_unix(sudo:session): session closed for user root Feb 9 07:15:12.520490 sshd[1584]: pam_unix(sshd:session): session closed for user core Feb 9 07:15:12.521910 systemd[1]: sshd@7-147.75.49.59:22-147.75.109.163:50156.service: Deactivated successfully. Feb 9 07:15:12.522346 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 07:15:12.522443 systemd[1]: session-9.scope: Consumed 2.864s CPU time. Feb 9 07:15:12.522832 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Feb 9 07:15:12.523367 systemd-logind[1441]: Removed session 9. Feb 9 07:15:12.984735 update_engine[1443]: I0209 07:15:12.984618 1443 update_attempter.cc:509] Updating boot flags... Feb 9 07:15:25.974678 kubelet[2531]: I0209 07:15:25.974604 2531 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 07:15:25.975749 env[1453]: time="2024-02-09T07:15:25.975657093Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 07:15:25.976343 kubelet[2531]: I0209 07:15:25.976126 2531 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 07:15:26.091749 kubelet[2531]: I0209 07:15:26.091612 2531 topology_manager.go:215] "Topology Admit Handler" podUID="33210bc5-a004-4860-b871-9d6a1a8e52e6" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-w52vz" Feb 9 07:15:26.103586 systemd[1]: Created slice kubepods-besteffort-pod33210bc5_a004_4860_b871_9d6a1a8e52e6.slice. Feb 9 07:15:26.122447 kubelet[2531]: I0209 07:15:26.122422 2531 topology_manager.go:215] "Topology Admit Handler" podUID="c0f72a64-a72d-459f-82cf-ae0dec9b054d" podNamespace="kube-system" podName="cilium-hzsnz" Feb 9 07:15:26.122688 kubelet[2531]: I0209 07:15:26.122674 2531 topology_manager.go:215] "Topology Admit Handler" podUID="c2c778f9-4901-4b53-8558-98b037d4862a" podNamespace="kube-system" podName="kube-proxy-8bpsg" Feb 9 07:15:26.126630 systemd[1]: Created slice kubepods-burstable-podc0f72a64_a72d_459f_82cf_ae0dec9b054d.slice. Feb 9 07:15:26.129035 systemd[1]: Created slice kubepods-besteffort-podc2c778f9_4901_4b53_8558_98b037d4862a.slice. 
Feb 9 07:15:26.202529 kubelet[2531]: I0209 07:15:26.202500 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33210bc5-a004-4860-b871-9d6a1a8e52e6-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-w52vz\" (UID: \"33210bc5-a004-4860-b871-9d6a1a8e52e6\") " pod="kube-system/cilium-operator-6bc8ccdb58-w52vz" Feb 9 07:15:26.202671 kubelet[2531]: I0209 07:15:26.202549 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-645r2\" (UniqueName: \"kubernetes.io/projected/33210bc5-a004-4860-b871-9d6a1a8e52e6-kube-api-access-645r2\") pod \"cilium-operator-6bc8ccdb58-w52vz\" (UID: \"33210bc5-a004-4860-b871-9d6a1a8e52e6\") " pod="kube-system/cilium-operator-6bc8ccdb58-w52vz" Feb 9 07:15:26.303052 kubelet[2531]: I0209 07:15:26.302852 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-hostproc\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.303052 kubelet[2531]: I0209 07:15:26.302952 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-cgroup\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.303052 kubelet[2531]: I0209 07:15:26.303018 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2c778f9-4901-4b53-8558-98b037d4862a-kube-proxy\") pod \"kube-proxy-8bpsg\" (UID: \"c2c778f9-4901-4b53-8558-98b037d4862a\") " pod="kube-system/kube-proxy-8bpsg" Feb 9 07:15:26.303646 kubelet[2531]: 
I0209 07:15:26.303141 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2c778f9-4901-4b53-8558-98b037d4862a-lib-modules\") pod \"kube-proxy-8bpsg\" (UID: \"c2c778f9-4901-4b53-8558-98b037d4862a\") " pod="kube-system/kube-proxy-8bpsg" Feb 9 07:15:26.303646 kubelet[2531]: I0209 07:15:26.303453 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-host-proc-sys-kernel\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.304052 kubelet[2531]: I0209 07:15:26.303654 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gf8g\" (UniqueName: \"kubernetes.io/projected/c0f72a64-a72d-459f-82cf-ae0dec9b054d-kube-api-access-7gf8g\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.304052 kubelet[2531]: I0209 07:15:26.303777 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8lbb\" (UniqueName: \"kubernetes.io/projected/c2c778f9-4901-4b53-8558-98b037d4862a-kube-api-access-l8lbb\") pod \"kube-proxy-8bpsg\" (UID: \"c2c778f9-4901-4b53-8558-98b037d4862a\") " pod="kube-system/kube-proxy-8bpsg" Feb 9 07:15:26.304052 kubelet[2531]: I0209 07:15:26.303962 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-etc-cni-netd\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.304550 kubelet[2531]: I0209 07:15:26.304066 2531 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cni-path\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.304550 kubelet[2531]: I0209 07:15:26.304177 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-lib-modules\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.304550 kubelet[2531]: I0209 07:15:26.304269 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0f72a64-a72d-459f-82cf-ae0dec9b054d-clustermesh-secrets\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.304550 kubelet[2531]: I0209 07:15:26.304372 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2c778f9-4901-4b53-8558-98b037d4862a-xtables-lock\") pod \"kube-proxy-8bpsg\" (UID: \"c2c778f9-4901-4b53-8558-98b037d4862a\") " pod="kube-system/kube-proxy-8bpsg" Feb 9 07:15:26.305027 kubelet[2531]: I0209 07:15:26.304584 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-run\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.305027 kubelet[2531]: I0209 07:15:26.304695 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-xtables-lock\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.305027 kubelet[2531]: I0209 07:15:26.304804 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-host-proc-sys-net\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.305027 kubelet[2531]: I0209 07:15:26.304875 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0f72a64-a72d-459f-82cf-ae0dec9b054d-hubble-tls\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.305027 kubelet[2531]: I0209 07:15:26.304975 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-bpf-maps\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.305596 kubelet[2531]: I0209 07:15:26.305067 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-config-path\") pod \"cilium-hzsnz\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " pod="kube-system/cilium-hzsnz" Feb 9 07:15:26.423718 env[1453]: time="2024-02-09T07:15:26.423585061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-w52vz,Uid:33210bc5-a004-4860-b871-9d6a1a8e52e6,Namespace:kube-system,Attempt:0,}" Feb 9 07:15:26.432512 env[1453]: time="2024-02-09T07:15:26.432367097Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bpsg,Uid:c2c778f9-4901-4b53-8558-98b037d4862a,Namespace:kube-system,Attempt:0,}" Feb 9 07:15:26.450265 env[1453]: time="2024-02-09T07:15:26.450183097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:15:26.450265 env[1453]: time="2024-02-09T07:15:26.450242179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:15:26.450518 env[1453]: time="2024-02-09T07:15:26.450263753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:15:26.450607 env[1453]: time="2024-02-09T07:15:26.450461344Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600 pid=2710 runtime=io.containerd.runc.v2 Feb 9 07:15:26.452635 env[1453]: time="2024-02-09T07:15:26.452580746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:15:26.452635 env[1453]: time="2024-02-09T07:15:26.452619723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:15:26.452759 env[1453]: time="2024-02-09T07:15:26.452635567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:15:26.452830 env[1453]: time="2024-02-09T07:15:26.452790193Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f540d5187ab10a7297b195370dc8a4141d498b3f27097e6a37a70967e57af9f pid=2718 runtime=io.containerd.runc.v2 Feb 9 07:15:26.463658 systemd[1]: Started cri-containerd-7f540d5187ab10a7297b195370dc8a4141d498b3f27097e6a37a70967e57af9f.scope. Feb 9 07:15:26.474637 systemd[1]: Started cri-containerd-0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600.scope. Feb 9 07:15:26.494459 env[1453]: time="2024-02-09T07:15:26.494412487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bpsg,Uid:c2c778f9-4901-4b53-8558-98b037d4862a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f540d5187ab10a7297b195370dc8a4141d498b3f27097e6a37a70967e57af9f\"" Feb 9 07:15:26.496527 env[1453]: time="2024-02-09T07:15:26.496500040Z" level=info msg="CreateContainer within sandbox \"7f540d5187ab10a7297b195370dc8a4141d498b3f27097e6a37a70967e57af9f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 07:15:26.505273 env[1453]: time="2024-02-09T07:15:26.505215075Z" level=info msg="CreateContainer within sandbox \"7f540d5187ab10a7297b195370dc8a4141d498b3f27097e6a37a70967e57af9f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8300208bf4ea76a09b08c3a7036344fae8d13e7a5cc36ecc641cfb16339cc7e5\"" Feb 9 07:15:26.505555 env[1453]: time="2024-02-09T07:15:26.505526383Z" level=info msg="StartContainer for \"8300208bf4ea76a09b08c3a7036344fae8d13e7a5cc36ecc641cfb16339cc7e5\"" Feb 9 07:15:26.516100 systemd[1]: Started cri-containerd-8300208bf4ea76a09b08c3a7036344fae8d13e7a5cc36ecc641cfb16339cc7e5.scope. 
Feb 9 07:15:26.524590 env[1453]: time="2024-02-09T07:15:26.524554914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-w52vz,Uid:33210bc5-a004-4860-b871-9d6a1a8e52e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600\"" Feb 9 07:15:26.525748 env[1453]: time="2024-02-09T07:15:26.525721313Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 07:15:26.547300 env[1453]: time="2024-02-09T07:15:26.547271577Z" level=info msg="StartContainer for \"8300208bf4ea76a09b08c3a7036344fae8d13e7a5cc36ecc641cfb16339cc7e5\" returns successfully" Feb 9 07:15:26.729785 env[1453]: time="2024-02-09T07:15:26.729658528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hzsnz,Uid:c0f72a64-a72d-459f-82cf-ae0dec9b054d,Namespace:kube-system,Attempt:0,}" Feb 9 07:15:26.752525 env[1453]: time="2024-02-09T07:15:26.752329572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:15:26.752525 env[1453]: time="2024-02-09T07:15:26.752431866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:15:26.752525 env[1453]: time="2024-02-09T07:15:26.752471099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:15:26.753028 env[1453]: time="2024-02-09T07:15:26.752843117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837 pid=2881 runtime=io.containerd.runc.v2 Feb 9 07:15:26.780142 systemd[1]: Started cri-containerd-1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837.scope. 
Feb 9 07:15:26.826651 env[1453]: time="2024-02-09T07:15:26.826611059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hzsnz,Uid:c0f72a64-a72d-459f-82cf-ae0dec9b054d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\"" Feb 9 07:15:26.986539 kubelet[2531]: I0209 07:15:26.986293 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8bpsg" podStartSLOduration=0.986174605 podCreationTimestamp="2024-02-09 07:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:15:26.985716921 +0000 UTC m=+16.116015553" watchObservedRunningTime="2024-02-09 07:15:26.986174605 +0000 UTC m=+16.116473278" Feb 9 07:15:28.429733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275627834.mount: Deactivated successfully. Feb 9 07:15:28.990505 env[1453]: time="2024-02-09T07:15:28.990445632Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:28.991091 env[1453]: time="2024-02-09T07:15:28.991052604Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:28.991709 env[1453]: time="2024-02-09T07:15:28.991667481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:28.991970 env[1453]: time="2024-02-09T07:15:28.991926434Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 07:15:28.992367 env[1453]: time="2024-02-09T07:15:28.992304683Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 07:15:28.993203 env[1453]: time="2024-02-09T07:15:28.993143521Z" level=info msg="CreateContainer within sandbox \"0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 07:15:28.998075 env[1453]: time="2024-02-09T07:15:28.998030238Z" level=info msg="CreateContainer within sandbox \"0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\"" Feb 9 07:15:28.998377 env[1453]: time="2024-02-09T07:15:28.998315075Z" level=info msg="StartContainer for \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\"" Feb 9 07:15:29.018591 systemd[1]: Started cri-containerd-eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191.scope. 
Feb 9 07:15:29.044475 env[1453]: time="2024-02-09T07:15:29.044441258Z" level=info msg="StartContainer for \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\" returns successfully" Feb 9 07:15:29.997886 kubelet[2531]: I0209 07:15:29.997825 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-w52vz" podStartSLOduration=1.5309159129999999 podCreationTimestamp="2024-02-09 07:15:26 +0000 UTC" firstStartedPulling="2024-02-09 07:15:26.525372114 +0000 UTC m=+15.655670680" lastFinishedPulling="2024-02-09 07:15:28.992200714 +0000 UTC m=+18.122499273" observedRunningTime="2024-02-09 07:15:29.99709318 +0000 UTC m=+19.127391824" watchObservedRunningTime="2024-02-09 07:15:29.997744506 +0000 UTC m=+19.128043108" Feb 9 07:15:32.610522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485616486.mount: Deactivated successfully. Feb 9 07:15:34.296563 env[1453]: time="2024-02-09T07:15:34.296510731Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:34.297128 env[1453]: time="2024-02-09T07:15:34.297088068Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:34.297854 env[1453]: time="2024-02-09T07:15:34.297813021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:15:34.298584 env[1453]: time="2024-02-09T07:15:34.298540949Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" 
returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 07:15:34.299659 env[1453]: time="2024-02-09T07:15:34.299640503Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 07:15:34.304118 env[1453]: time="2024-02-09T07:15:34.304074543Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03\"" Feb 9 07:15:34.304284 env[1453]: time="2024-02-09T07:15:34.304273111Z" level=info msg="StartContainer for \"f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03\"" Feb 9 07:15:34.330847 systemd[1]: Started cri-containerd-f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03.scope. Feb 9 07:15:34.343371 env[1453]: time="2024-02-09T07:15:34.343348799Z" level=info msg="StartContainer for \"f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03\" returns successfully" Feb 9 07:15:34.347763 systemd[1]: cri-containerd-f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03.scope: Deactivated successfully. Feb 9 07:15:35.306818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03-rootfs.mount: Deactivated successfully. 
Feb 9 07:15:35.582957 env[1453]: time="2024-02-09T07:15:35.582746009Z" level=info msg="shim disconnected" id=f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03 Feb 9 07:15:35.582957 env[1453]: time="2024-02-09T07:15:35.582856418Z" level=warning msg="cleaning up after shim disconnected" id=f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03 namespace=k8s.io Feb 9 07:15:35.582957 env[1453]: time="2024-02-09T07:15:35.582886121Z" level=info msg="cleaning up dead shim" Feb 9 07:15:35.608338 env[1453]: time="2024-02-09T07:15:35.608282500Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:15:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3088 runtime=io.containerd.runc.v2\n" Feb 9 07:15:35.999684 env[1453]: time="2024-02-09T07:15:35.999553775Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 07:15:36.013792 env[1453]: time="2024-02-09T07:15:36.013762939Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050\"" Feb 9 07:15:36.014069 env[1453]: time="2024-02-09T07:15:36.014054078Z" level=info msg="StartContainer for \"4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050\"" Feb 9 07:15:36.014764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367669891.mount: Deactivated successfully. Feb 9 07:15:36.035243 systemd[1]: Started cri-containerd-4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050.scope. 
Feb 9 07:15:36.058539 env[1453]: time="2024-02-09T07:15:36.058512035Z" level=info msg="StartContainer for \"4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050\" returns successfully" Feb 9 07:15:36.066788 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 07:15:36.066986 systemd[1]: Stopped systemd-sysctl.service. Feb 9 07:15:36.067145 systemd[1]: Stopping systemd-sysctl.service... Feb 9 07:15:36.068364 systemd[1]: Starting systemd-sysctl.service... Feb 9 07:15:36.068652 systemd[1]: cri-containerd-4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050.scope: Deactivated successfully. Feb 9 07:15:36.074239 systemd[1]: Finished systemd-sysctl.service. Feb 9 07:15:36.096689 env[1453]: time="2024-02-09T07:15:36.096641332Z" level=info msg="shim disconnected" id=4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050 Feb 9 07:15:36.096689 env[1453]: time="2024-02-09T07:15:36.096687520Z" level=warning msg="cleaning up after shim disconnected" id=4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050 namespace=k8s.io Feb 9 07:15:36.096885 env[1453]: time="2024-02-09T07:15:36.096699244Z" level=info msg="cleaning up dead shim" Feb 9 07:15:36.118613 env[1453]: time="2024-02-09T07:15:36.118442309Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:15:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3152 runtime=io.containerd.runc.v2\n" Feb 9 07:15:36.307171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050-rootfs.mount: Deactivated successfully. 
Feb 9 07:15:37.006235 env[1453]: time="2024-02-09T07:15:37.006118802Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 07:15:37.024648 env[1453]: time="2024-02-09T07:15:37.024612912Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205\"" Feb 9 07:15:37.025131 env[1453]: time="2024-02-09T07:15:37.025093648Z" level=info msg="StartContainer for \"d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205\"" Feb 9 07:15:37.047347 systemd[1]: Started cri-containerd-d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205.scope. Feb 9 07:15:37.061489 env[1453]: time="2024-02-09T07:15:37.061456416Z" level=info msg="StartContainer for \"d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205\" returns successfully" Feb 9 07:15:37.062691 systemd[1]: cri-containerd-d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205.scope: Deactivated successfully. 
Feb 9 07:15:37.110085 env[1453]: time="2024-02-09T07:15:37.109988754Z" level=info msg="shim disconnected" id=d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205 Feb 9 07:15:37.110434 env[1453]: time="2024-02-09T07:15:37.110085810Z" level=warning msg="cleaning up after shim disconnected" id=d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205 namespace=k8s.io Feb 9 07:15:37.110434 env[1453]: time="2024-02-09T07:15:37.110114282Z" level=info msg="cleaning up dead shim" Feb 9 07:15:37.126042 env[1453]: time="2024-02-09T07:15:37.125934646Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:15:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3207 runtime=io.containerd.runc.v2\n" Feb 9 07:15:37.307918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205-rootfs.mount: Deactivated successfully. Feb 9 07:15:38.015332 env[1453]: time="2024-02-09T07:15:38.015204912Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 07:15:38.032066 env[1453]: time="2024-02-09T07:15:38.031933125Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff\"" Feb 9 07:15:38.032530 env[1453]: time="2024-02-09T07:15:38.032513123Z" level=info msg="StartContainer for \"372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff\"" Feb 9 07:15:38.053695 systemd[1]: Started cri-containerd-372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff.scope. Feb 9 07:15:38.078085 systemd[1]: cri-containerd-372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff.scope: Deactivated successfully. 
Feb 9 07:15:38.078256 env[1453]: time="2024-02-09T07:15:38.078230805Z" level=info msg="StartContainer for \"372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff\" returns successfully" Feb 9 07:15:38.103536 env[1453]: time="2024-02-09T07:15:38.103467154Z" level=info msg="shim disconnected" id=372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff Feb 9 07:15:38.103536 env[1453]: time="2024-02-09T07:15:38.103527268Z" level=warning msg="cleaning up after shim disconnected" id=372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff namespace=k8s.io Feb 9 07:15:38.103536 env[1453]: time="2024-02-09T07:15:38.103539847Z" level=info msg="cleaning up dead shim" Feb 9 07:15:38.125120 env[1453]: time="2024-02-09T07:15:38.125049451Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:15:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3261 runtime=io.containerd.runc.v2\n" Feb 9 07:15:38.304957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff-rootfs.mount: Deactivated successfully. 
Feb 9 07:15:39.025463 env[1453]: time="2024-02-09T07:15:39.025360101Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 07:15:39.038087 env[1453]: time="2024-02-09T07:15:39.038035776Z" level=info msg="CreateContainer within sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc\"" Feb 9 07:15:39.038395 env[1453]: time="2024-02-09T07:15:39.038357138Z" level=info msg="StartContainer for \"c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc\"" Feb 9 07:15:39.059088 systemd[1]: Started cri-containerd-c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc.scope. Feb 9 07:15:39.094815 env[1453]: time="2024-02-09T07:15:39.094698410Z" level=info msg="StartContainer for \"c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc\" returns successfully" Feb 9 07:15:39.209553 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 9 07:15:39.261981 kubelet[2531]: I0209 07:15:39.261964 2531 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 07:15:39.273041 kubelet[2531]: I0209 07:15:39.273017 2531 topology_manager.go:215] "Topology Admit Handler" podUID="882f9838-0a48-4822-8f7f-8b3770dea3f7" podNamespace="kube-system" podName="coredns-5dd5756b68-lv4rv" Feb 9 07:15:39.274128 kubelet[2531]: I0209 07:15:39.274113 2531 topology_manager.go:215] "Topology Admit Handler" podUID="96241758-30e2-460e-9c5a-589c252e72ef" podNamespace="kube-system" podName="coredns-5dd5756b68-9tvtx" Feb 9 07:15:39.276522 systemd[1]: Created slice kubepods-burstable-pod882f9838_0a48_4822_8f7f_8b3770dea3f7.slice. 
Feb 9 07:15:39.278665 systemd[1]: Created slice kubepods-burstable-pod96241758_30e2_460e_9c5a_589c252e72ef.slice. Feb 9 07:15:39.288741 kubelet[2531]: I0209 07:15:39.288723 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96241758-30e2-460e-9c5a-589c252e72ef-config-volume\") pod \"coredns-5dd5756b68-9tvtx\" (UID: \"96241758-30e2-460e-9c5a-589c252e72ef\") " pod="kube-system/coredns-5dd5756b68-9tvtx" Feb 9 07:15:39.288836 kubelet[2531]: I0209 07:15:39.288749 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4z8j\" (UniqueName: \"kubernetes.io/projected/96241758-30e2-460e-9c5a-589c252e72ef-kube-api-access-b4z8j\") pod \"coredns-5dd5756b68-9tvtx\" (UID: \"96241758-30e2-460e-9c5a-589c252e72ef\") " pod="kube-system/coredns-5dd5756b68-9tvtx" Feb 9 07:15:39.288836 kubelet[2531]: I0209 07:15:39.288763 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882f9838-0a48-4822-8f7f-8b3770dea3f7-config-volume\") pod \"coredns-5dd5756b68-lv4rv\" (UID: \"882f9838-0a48-4822-8f7f-8b3770dea3f7\") " pod="kube-system/coredns-5dd5756b68-lv4rv" Feb 9 07:15:39.288836 kubelet[2531]: I0209 07:15:39.288776 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flb2\" (UniqueName: \"kubernetes.io/projected/882f9838-0a48-4822-8f7f-8b3770dea3f7-kube-api-access-4flb2\") pod \"coredns-5dd5756b68-lv4rv\" (UID: \"882f9838-0a48-4822-8f7f-8b3770dea3f7\") " pod="kube-system/coredns-5dd5756b68-lv4rv" Feb 9 07:15:39.348521 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 07:15:39.579990 env[1453]: time="2024-02-09T07:15:39.579770117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lv4rv,Uid:882f9838-0a48-4822-8f7f-8b3770dea3f7,Namespace:kube-system,Attempt:0,}" Feb 9 07:15:39.580976 env[1453]: time="2024-02-09T07:15:39.580864032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9tvtx,Uid:96241758-30e2-460e-9c5a-589c252e72ef,Namespace:kube-system,Attempt:0,}" Feb 9 07:15:40.048676 kubelet[2531]: I0209 07:15:40.048660 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hzsnz" podStartSLOduration=6.577516364 podCreationTimestamp="2024-02-09 07:15:26 +0000 UTC" firstStartedPulling="2024-02-09 07:15:26.827522046 +0000 UTC m=+15.957820618" lastFinishedPulling="2024-02-09 07:15:34.298643134 +0000 UTC m=+23.428941692" observedRunningTime="2024-02-09 07:15:40.048112409 +0000 UTC m=+29.178410968" watchObservedRunningTime="2024-02-09 07:15:40.048637438 +0000 UTC m=+29.178935994" Feb 9 07:15:40.946852 systemd-networkd[1301]: cilium_host: Link UP Feb 9 07:15:40.946942 systemd-networkd[1301]: cilium_net: Link UP Feb 9 07:15:40.954022 systemd-networkd[1301]: cilium_net: Gained carrier Feb 9 07:15:40.961193 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 07:15:40.961228 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 07:15:40.961258 systemd-networkd[1301]: cilium_host: Gained carrier Feb 9 07:15:41.008752 systemd-networkd[1301]: cilium_vxlan: Link UP Feb 9 07:15:41.008755 systemd-networkd[1301]: cilium_vxlan: Gained carrier Feb 9 07:15:41.024611 systemd-networkd[1301]: cilium_net: Gained IPv6LL Feb 9 07:15:41.051582 systemd-networkd[1301]: cilium_host: Gained IPv6LL Feb 9 07:15:41.140548 kernel: NET: Registered PF_ALG protocol family Feb 9 07:15:41.733368 systemd-networkd[1301]: lxc_health: Link UP Feb 9 07:15:41.757249 systemd-networkd[1301]: lxc_health: Gained carrier Feb 9 
07:15:41.757487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 07:15:42.112696 systemd-networkd[1301]: lxc05bf4edd5b1c: Link UP Feb 9 07:15:42.112787 systemd-networkd[1301]: lxc7b182f66a9cd: Link UP Feb 9 07:15:42.144548 kernel: eth0: renamed from tmpa0506 Feb 9 07:15:42.163541 kernel: eth0: renamed from tmp3ad32 Feb 9 07:15:42.186954 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 07:15:42.187009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc05bf4edd5b1c: link becomes ready Feb 9 07:15:42.187030 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 07:15:42.193494 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7b182f66a9cd: link becomes ready Feb 9 07:15:42.201473 systemd-networkd[1301]: lxc05bf4edd5b1c: Gained carrier Feb 9 07:15:42.201586 systemd-networkd[1301]: lxc7b182f66a9cd: Gained carrier Feb 9 07:15:42.691605 systemd-networkd[1301]: cilium_vxlan: Gained IPv6LL Feb 9 07:15:43.331613 systemd-networkd[1301]: lxc7b182f66a9cd: Gained IPv6LL Feb 9 07:15:43.331759 systemd-networkd[1301]: lxc_health: Gained IPv6LL Feb 9 07:15:44.036645 systemd-networkd[1301]: lxc05bf4edd5b1c: Gained IPv6LL Feb 9 07:15:44.499221 env[1453]: time="2024-02-09T07:15:44.499182060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:15:44.499221 env[1453]: time="2024-02-09T07:15:44.499203766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:15:44.499221 env[1453]: time="2024-02-09T07:15:44.499212293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:15:44.499477 env[1453]: time="2024-02-09T07:15:44.499278404Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0506bd8a5e210a0c2e1b5e84988bca4c1f37dcb19a2c76bc6750a2f6323a531 pid=3948 runtime=io.containerd.runc.v2 Feb 9 07:15:44.499477 env[1453]: time="2024-02-09T07:15:44.499345865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:15:44.499477 env[1453]: time="2024-02-09T07:15:44.499364979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:15:44.499477 env[1453]: time="2024-02-09T07:15:44.499371916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:15:44.499477 env[1453]: time="2024-02-09T07:15:44.499427198Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ad3252c6f42b747dd9dd172dc353753eda4c2b55dc20bf8b8928a11f717dff4 pid=3950 runtime=io.containerd.runc.v2 Feb 9 07:15:44.505794 systemd[1]: Started cri-containerd-3ad3252c6f42b747dd9dd172dc353753eda4c2b55dc20bf8b8928a11f717dff4.scope. Feb 9 07:15:44.518199 systemd[1]: Started cri-containerd-a0506bd8a5e210a0c2e1b5e84988bca4c1f37dcb19a2c76bc6750a2f6323a531.scope. 
Feb 9 07:15:44.539768 env[1453]: time="2024-02-09T07:15:44.539744706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lv4rv,Uid:882f9838-0a48-4822-8f7f-8b3770dea3f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ad3252c6f42b747dd9dd172dc353753eda4c2b55dc20bf8b8928a11f717dff4\"" Feb 9 07:15:44.541064 env[1453]: time="2024-02-09T07:15:44.541025754Z" level=info msg="CreateContainer within sandbox \"3ad3252c6f42b747dd9dd172dc353753eda4c2b55dc20bf8b8928a11f717dff4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 07:15:44.545379 env[1453]: time="2024-02-09T07:15:44.545328359Z" level=info msg="CreateContainer within sandbox \"3ad3252c6f42b747dd9dd172dc353753eda4c2b55dc20bf8b8928a11f717dff4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45d3295314556756076d2722d4a565891ee855f5e0439bea2343ab3c6cc1ffc8\"" Feb 9 07:15:44.545550 env[1453]: time="2024-02-09T07:15:44.545536990Z" level=info msg="StartContainer for \"45d3295314556756076d2722d4a565891ee855f5e0439bea2343ab3c6cc1ffc8\"" Feb 9 07:15:44.552942 env[1453]: time="2024-02-09T07:15:44.552880107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9tvtx,Uid:96241758-30e2-460e-9c5a-589c252e72ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0506bd8a5e210a0c2e1b5e84988bca4c1f37dcb19a2c76bc6750a2f6323a531\"" Feb 9 07:15:44.554065 env[1453]: time="2024-02-09T07:15:44.554022707Z" level=info msg="CreateContainer within sandbox \"a0506bd8a5e210a0c2e1b5e84988bca4c1f37dcb19a2c76bc6750a2f6323a531\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 07:15:44.564632 systemd[1]: Started cri-containerd-45d3295314556756076d2722d4a565891ee855f5e0439bea2343ab3c6cc1ffc8.scope. 
Feb 9 07:15:44.597516 env[1453]: time="2024-02-09T07:15:44.597376440Z" level=info msg="StartContainer for \"45d3295314556756076d2722d4a565891ee855f5e0439bea2343ab3c6cc1ffc8\" returns successfully" Feb 9 07:15:44.606728 env[1453]: time="2024-02-09T07:15:44.606610492Z" level=info msg="CreateContainer within sandbox \"a0506bd8a5e210a0c2e1b5e84988bca4c1f37dcb19a2c76bc6750a2f6323a531\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad602525041bf3e770a844356cd1104231b549dc38c291cd47f7070192b4fdb6\"" Feb 9 07:15:44.607519 env[1453]: time="2024-02-09T07:15:44.607400545Z" level=info msg="StartContainer for \"ad602525041bf3e770a844356cd1104231b549dc38c291cd47f7070192b4fdb6\"" Feb 9 07:15:44.659694 systemd[1]: Started cri-containerd-ad602525041bf3e770a844356cd1104231b549dc38c291cd47f7070192b4fdb6.scope. Feb 9 07:15:44.721199 env[1453]: time="2024-02-09T07:15:44.721112005Z" level=info msg="StartContainer for \"ad602525041bf3e770a844356cd1104231b549dc38c291cd47f7070192b4fdb6\" returns successfully" Feb 9 07:15:45.062360 kubelet[2531]: I0209 07:15:45.062296 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9tvtx" podStartSLOduration=19.062193244 podCreationTimestamp="2024-02-09 07:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:15:45.061265836 +0000 UTC m=+34.191564455" watchObservedRunningTime="2024-02-09 07:15:45.062193244 +0000 UTC m=+34.192491850" Feb 9 07:15:45.080689 kubelet[2531]: I0209 07:15:45.080625 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lv4rv" podStartSLOduration=19.080507786 podCreationTimestamp="2024-02-09 07:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:15:45.079076176 +0000 UTC m=+34.209374808" 
watchObservedRunningTime="2024-02-09 07:15:45.080507786 +0000 UTC m=+34.210806431" Feb 9 07:16:37.122681 systemd[1]: Started sshd@8-147.75.49.59:22-218.92.0.118:11574.service. Feb 9 07:16:38.220307 sshd[4129]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 9 07:16:40.525534 sshd[4129]: Failed password for root from 218.92.0.118 port 11574 ssh2 Feb 9 07:16:44.322725 sshd[4129]: Failed password for root from 218.92.0.118 port 11574 ssh2 Feb 9 07:16:47.141785 sshd[4129]: Failed password for root from 218.92.0.118 port 11574 ssh2 Feb 9 07:16:48.929569 sshd[4129]: Received disconnect from 218.92.0.118 port 11574:11: [preauth] Feb 9 07:16:48.929569 sshd[4129]: Disconnected from authenticating user root 218.92.0.118 port 11574 [preauth] Feb 9 07:16:48.930136 sshd[4129]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 9 07:16:48.932144 systemd[1]: sshd@8-147.75.49.59:22-218.92.0.118:11574.service: Deactivated successfully. Feb 9 07:16:49.088992 systemd[1]: Started sshd@9-147.75.49.59:22-218.92.0.118:37439.service. 
Feb 9 07:16:50.153199 sshd[4135]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 9 07:16:52.438424 sshd[4135]: Failed password for root from 218.92.0.118 port 37439 ssh2 Feb 9 07:16:54.405785 sshd[4135]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 9 07:16:56.239145 sshd[4135]: Failed password for root from 218.92.0.118 port 37439 ssh2 Feb 9 07:16:59.058735 sshd[4135]: Failed password for root from 218.92.0.118 port 37439 ssh2 Feb 9 07:17:00.877896 sshd[4135]: Received disconnect from 218.92.0.118 port 37439:11: [preauth] Feb 9 07:17:00.877896 sshd[4135]: Disconnected from authenticating user root 218.92.0.118 port 37439 [preauth] Feb 9 07:17:00.878436 sshd[4135]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 9 07:17:00.880457 systemd[1]: sshd@9-147.75.49.59:22-218.92.0.118:37439.service: Deactivated successfully. Feb 9 07:17:01.065452 systemd[1]: Started sshd@10-147.75.49.59:22-218.92.0.118:59472.service. Feb 9 07:17:02.164124 sshd[4142]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 9 07:17:04.428644 sshd[4142]: Failed password for root from 218.92.0.118 port 59472 ssh2 Feb 9 07:17:08.244732 sshd[4142]: Failed password for root from 218.92.0.118 port 59472 ssh2 Feb 9 07:17:11.063209 sshd[4142]: Failed password for root from 218.92.0.118 port 59472 ssh2 Feb 9 07:17:12.906511 sshd[4142]: Received disconnect from 218.92.0.118 port 59472:11: [preauth] Feb 9 07:17:12.906511 sshd[4142]: Disconnected from authenticating user root 218.92.0.118 port 59472 [preauth] Feb 9 07:17:12.907057 sshd[4142]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 9 07:17:12.909130 systemd[1]: sshd@10-147.75.49.59:22-218.92.0.118:59472.service: Deactivated successfully. 
Feb 9 07:19:47.613202 systemd[1]: Started sshd@11-147.75.49.59:22-141.98.11.90:23528.service. Feb 9 07:19:51.505029 sshd[4165]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.90 user=root Feb 9 07:19:53.303599 sshd[4165]: Failed password for root from 141.98.11.90 port 23528 ssh2 Feb 9 07:19:53.791509 sshd[4165]: Connection closed by authenticating user root 141.98.11.90 port 23528 [preauth] Feb 9 07:19:53.794041 systemd[1]: sshd@11-147.75.49.59:22-141.98.11.90:23528.service: Deactivated successfully. Feb 9 07:20:09.084795 update_engine[1443]: I0209 07:20:09.084676 1443 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 07:20:09.084795 update_engine[1443]: I0209 07:20:09.084759 1443 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 07:20:09.086429 update_engine[1443]: I0209 07:20:09.086349 1443 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 07:20:09.087354 update_engine[1443]: I0209 07:20:09.087272 1443 omaha_request_params.cc:62] Current group set to lts Feb 9 07:20:09.087639 update_engine[1443]: I0209 07:20:09.087603 1443 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 07:20:09.087639 update_engine[1443]: I0209 07:20:09.087625 1443 update_attempter.cc:643] Scheduling an action processor start. 
Feb 9 07:20:09.088013 update_engine[1443]: I0209 07:20:09.087658 1443 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 07:20:09.088013 update_engine[1443]: I0209 07:20:09.087723 1443 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 07:20:09.088013 update_engine[1443]: I0209 07:20:09.087872 1443 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 07:20:09.088013 update_engine[1443]: I0209 07:20:09.087888 1443 omaha_request_action.cc:271] Request: Feb 9 07:20:09.088013 update_engine[1443]: Feb 9 07:20:09.088013 update_engine[1443]: Feb 9 07:20:09.088013 update_engine[1443]: Feb 9 07:20:09.088013 update_engine[1443]: Feb 9 07:20:09.088013 update_engine[1443]: Feb 9 07:20:09.088013 update_engine[1443]: Feb 9 07:20:09.088013 update_engine[1443]: Feb 9 07:20:09.088013 update_engine[1443]: Feb 9 07:20:09.088013 update_engine[1443]: I0209 07:20:09.087898 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 07:20:09.089285 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 07:20:09.091124 update_engine[1443]: I0209 07:20:09.091068 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 07:20:09.091396 update_engine[1443]: E0209 07:20:09.091315 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 07:20:09.091599 update_engine[1443]: I0209 07:20:09.091502 1443 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 07:20:09.350611 systemd[1]: Started sshd@12-147.75.49.59:22-61.146.122.50:47245.service. 
Feb 9 07:20:12.536413 sshd[4175]: Invalid user test from 61.146.122.50 port 47245 Feb 9 07:20:12.542466 sshd[4175]: pam_faillock(sshd:auth): User unknown Feb 9 07:20:12.543539 sshd[4175]: pam_unix(sshd:auth): check pass; user unknown Feb 9 07:20:12.543626 sshd[4175]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.146.122.50 Feb 9 07:20:12.544526 sshd[4175]: pam_faillock(sshd:auth): User unknown Feb 9 07:20:13.891654 sshd[4175]: Failed password for invalid user test from 61.146.122.50 port 47245 ssh2 Feb 9 07:20:15.220987 sshd[4179]: pam_faillock(sshd:auth): User unknown Feb 9 07:20:15.226075 sshd[4175]: Postponed keyboard-interactive for invalid user test from 61.146.122.50 port 47245 ssh2 [preauth] Feb 9 07:20:15.870562 sshd[4179]: pam_unix(sshd:auth): check pass; user unknown Feb 9 07:20:15.871747 sshd[4179]: pam_faillock(sshd:auth): User unknown Feb 9 07:20:17.630433 sshd[4175]: PAM: Permission denied for illegal user test from 61.146.122.50 Feb 9 07:20:17.631303 sshd[4175]: Failed keyboard-interactive/pam for invalid user test from 61.146.122.50 port 47245 ssh2 Feb 9 07:20:18.332877 sshd[4175]: Connection closed by invalid user test 61.146.122.50 port 47245 [preauth] Feb 9 07:20:18.335258 systemd[1]: sshd@12-147.75.49.59:22-61.146.122.50:47245.service: Deactivated successfully. 
Feb 9 07:20:18.993798 update_engine[1443]: I0209 07:20:18.993664 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 07:20:18.994633 update_engine[1443]: I0209 07:20:18.994143 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 07:20:18.994633 update_engine[1443]: E0209 07:20:18.994343 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 07:20:18.994633 update_engine[1443]: I0209 07:20:18.994552 1443 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 07:20:28.993933 update_engine[1443]: I0209 07:20:28.993721 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 07:20:28.994845 update_engine[1443]: I0209 07:20:28.994252 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 07:20:28.994845 update_engine[1443]: E0209 07:20:28.994455 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 07:20:28.994845 update_engine[1443]: I0209 07:20:28.994669 1443 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 07:20:38.994180 update_engine[1443]: I0209 07:20:38.994053 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 07:20:38.995111 update_engine[1443]: I0209 07:20:38.994601 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 07:20:38.995111 update_engine[1443]: E0209 07:20:38.994812 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 07:20:38.995111 update_engine[1443]: I0209 07:20:38.994958 1443 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 07:20:38.995111 update_engine[1443]: I0209 07:20:38.994975 1443 omaha_request_action.cc:621] Omaha request response: Feb 9 07:20:38.995537 update_engine[1443]: E0209 07:20:38.995115 1443 omaha_request_action.cc:640] Omaha request network transfer failed. 
Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995143 1443 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995153 1443 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995162 1443 update_attempter.cc:306] Processing Done. Feb 9 07:20:38.995537 update_engine[1443]: E0209 07:20:38.995187 1443 update_attempter.cc:619] Update failed. Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995196 1443 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995205 1443 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995215 1443 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995363 1443 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995416 1443 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995426 1443 omaha_request_action.cc:271] Request:
Feb 9 07:20:38.995537 update_engine[1443]:
Feb 9 07:20:38.995537 update_engine[1443]:
Feb 9 07:20:38.995537 update_engine[1443]:
Feb 9 07:20:38.995537 update_engine[1443]:
Feb 9 07:20:38.995537 update_engine[1443]:
Feb 9 07:20:38.995537 update_engine[1443]:
Feb 9 07:20:38.995537 update_engine[1443]: I0209 07:20:38.995435 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 07:20:38.997284 update_engine[1443]: I0209 07:20:38.995750 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 07:20:38.997284 update_engine[1443]: E0209 07:20:38.995914 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 07:20:38.997284 update_engine[1443]: I0209 07:20:38.996048 1443 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 07:20:38.997284 update_engine[1443]: I0209 07:20:38.996063 1443 omaha_request_action.cc:621] Omaha request response:
Feb 9 07:20:38.997284 update_engine[1443]: I0209 07:20:38.996073 1443 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 07:20:38.997284 update_engine[1443]: I0209 07:20:38.996081 1443 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 07:20:38.997284 update_engine[1443]: I0209 07:20:38.996089 1443 update_attempter.cc:306] Processing Done.
Feb 9 07:20:38.997284 update_engine[1443]: I0209 07:20:38.996096 1443 update_attempter.cc:310] Error event sent.
Feb 9 07:20:38.997284 update_engine[1443]: I0209 07:20:38.996116 1443 update_check_scheduler.cc:74] Next update check in 47m53s
Feb 9 07:20:38.998130 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 9 07:20:38.998130 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 9 07:21:34.089660 systemd[1]: Started sshd@13-147.75.49.59:22-218.92.0.55:33126.service.
Feb 9 07:21:34.262561 sshd[4190]: Unable to negotiate with 218.92.0.55 port 33126: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Feb 9 07:21:34.264570 systemd[1]: sshd@13-147.75.49.59:22-218.92.0.55:33126.service: Deactivated successfully.
Feb 9 07:22:43.822681 systemd[1]: Started sshd@14-147.75.49.59:22-61.177.172.140:27647.service.
Feb 9 07:22:44.899936 sshd[4201]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:22:45.854799 systemd[1]: Started sshd@15-147.75.49.59:22-192.241.215.38:37236.service.
Feb 9 07:22:47.115068 sshd[4201]: Failed password for root from 61.177.172.140 port 27647 ssh2
Feb 9 07:22:51.387887 sshd[4201]: Failed password for root from 61.177.172.140 port 27647 ssh2
Feb 9 07:22:55.524936 sshd[4201]: Failed password for root from 61.177.172.140 port 27647 ssh2
Feb 9 07:22:55.844167 sshd[4204]: kex_exchange_identification: Connection closed by remote host
Feb 9 07:22:55.844167 sshd[4204]: Connection closed by 192.241.215.38 port 37236
Feb 9 07:22:55.845570 systemd[1]: sshd@15-147.75.49.59:22-192.241.215.38:37236.service: Deactivated successfully.
Feb 9 07:22:57.658302 sshd[4201]: Received disconnect from 61.177.172.140 port 27647:11: [preauth]
Feb 9 07:22:57.658302 sshd[4201]: Disconnected from authenticating user root 61.177.172.140 port 27647 [preauth]
Feb 9 07:22:57.658851 sshd[4201]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:22:57.660993 systemd[1]: sshd@14-147.75.49.59:22-61.177.172.140:27647.service: Deactivated successfully.
Feb 9 07:22:57.840999 systemd[1]: Started sshd@16-147.75.49.59:22-61.177.172.140:15031.service.
Feb 9 07:22:59.193056 sshd[4211]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:23:00.800917 sshd[4211]: Failed password for root from 61.177.172.140 port 15031 ssh2
Feb 9 07:23:01.405950 sshd[4211]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 07:23:03.620890 sshd[4211]: Failed password for root from 61.177.172.140 port 15031 ssh2
Feb 9 07:23:07.750345 sshd[4211]: Failed password for root from 61.177.172.140 port 15031 ssh2
Feb 9 07:23:09.898263 sshd[4211]: Received disconnect from 61.177.172.140 port 15031:11: [preauth]
Feb 9 07:23:09.898263 sshd[4211]: Disconnected from authenticating user root 61.177.172.140 port 15031 [preauth]
Feb 9 07:23:09.898836 sshd[4211]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:23:09.900859 systemd[1]: sshd@16-147.75.49.59:22-61.177.172.140:15031.service: Deactivated successfully.
Feb 9 07:23:10.067188 systemd[1]: Started sshd@17-147.75.49.59:22-61.177.172.140:38832.service.
Feb 9 07:23:11.137819 sshd[4216]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:23:12.726020 sshd[4216]: Failed password for root from 61.177.172.140 port 38832 ssh2
Feb 9 07:23:15.530832 sshd[4216]: Failed password for root from 61.177.172.140 port 38832 ssh2
Feb 9 07:23:19.673532 sshd[4216]: Failed password for root from 61.177.172.140 port 38832 ssh2
Feb 9 07:23:21.844987 sshd[4216]: Received disconnect from 61.177.172.140 port 38832:11: [preauth]
Feb 9 07:23:21.844987 sshd[4216]: Disconnected from authenticating user root 61.177.172.140 port 38832 [preauth]
Feb 9 07:23:21.845134 sshd[4216]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:23:21.845664 systemd[1]: sshd@17-147.75.49.59:22-61.177.172.140:38832.service: Deactivated successfully.
Feb 9 07:29:03.847190 systemd[1]: Starting systemd-tmpfiles-clean.service...
Feb 9 07:29:03.859056 systemd-tmpfiles[4260]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 07:29:03.859268 systemd-tmpfiles[4260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 07:29:03.859979 systemd-tmpfiles[4260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 07:29:03.869391 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 9 07:29:03.869504 systemd[1]: Finished systemd-tmpfiles-clean.service.
Feb 9 07:29:03.870676 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 9 07:29:10.659004 systemd[1]: Started sshd@18-147.75.49.59:22-218.92.0.24:26014.service.
Feb 9 07:29:11.598559 sshd[4264]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.24 user=root
Feb 9 07:29:13.607731 sshd[4264]: Failed password for root from 218.92.0.24 port 26014 ssh2
Feb 9 07:29:15.401003 sshd[4264]: Failed password for root from 218.92.0.24 port 26014 ssh2
Feb 9 07:29:18.193210 sshd[4264]: Failed password for root from 218.92.0.24 port 26014 ssh2
Feb 9 07:29:20.197175 sshd[4264]: Received disconnect from 218.92.0.24 port 26014:11: [preauth]
Feb 9 07:29:20.197175 sshd[4264]: Disconnected from authenticating user root 218.92.0.24 port 26014 [preauth]
Feb 9 07:29:20.197722 sshd[4264]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.24 user=root
Feb 9 07:29:20.199843 systemd[1]: sshd@18-147.75.49.59:22-218.92.0.24:26014.service: Deactivated successfully.
Feb 9 07:29:20.348371 systemd[1]: Started sshd@19-147.75.49.59:22-218.92.0.24:36279.service.
Feb 9 07:29:21.296630 sshd[4271]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.24 user=root
Feb 9 07:29:23.346206 sshd[4271]: Failed password for root from 218.92.0.24 port 36279 ssh2
Feb 9 07:29:25.528124 sshd[4271]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 07:29:27.125874 sshd[4271]: Failed password for root from 218.92.0.24 port 36279 ssh2
Feb 9 07:29:29.919316 sshd[4271]: Failed password for root from 218.92.0.24 port 36279 ssh2
Feb 9 07:29:31.944867 sshd[4271]: Received disconnect from 218.92.0.24 port 36279:11: [preauth]
Feb 9 07:29:31.944867 sshd[4271]: Disconnected from authenticating user root 218.92.0.24 port 36279 [preauth]
Feb 9 07:29:31.945395 sshd[4271]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.24 user=root
Feb 9 07:29:31.947551 systemd[1]: sshd@19-147.75.49.59:22-218.92.0.24:36279.service: Deactivated successfully.
Feb 9 07:29:32.080621 systemd[1]: Started sshd@20-147.75.49.59:22-218.92.0.24:60486.service.
Feb 9 07:29:32.982784 sshd[4277]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.24 user=root
Feb 9 07:29:34.876604 sshd[4277]: Failed password for root from 218.92.0.24 port 60486 ssh2
Feb 9 07:29:36.799760 sshd[4277]: Failed password for root from 218.92.0.24 port 60486 ssh2
Feb 9 07:29:38.919181 sshd[4277]: Failed password for root from 218.92.0.24 port 60486 ssh2
Feb 9 07:29:39.520165 sshd[4277]: Received disconnect from 218.92.0.24 port 60486:11: [preauth]
Feb 9 07:29:39.520165 sshd[4277]: Disconnected from authenticating user root 218.92.0.24 port 60486 [preauth]
Feb 9 07:29:39.520730 sshd[4277]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.24 user=root
Feb 9 07:29:39.522795 systemd[1]: sshd@20-147.75.49.59:22-218.92.0.24:60486.service: Deactivated successfully.
Feb 9 07:32:18.748522 systemd[1]: Started sshd@21-147.75.49.59:22-61.177.172.179:14681.service.
Feb 9 07:34:18.753671 sshd[4300]: Timeout before authentication for 61.177.172.179 port 14681
Feb 9 07:34:18.755298 systemd[1]: sshd@21-147.75.49.59:22-61.177.172.179:14681.service: Deactivated successfully.
Feb 9 07:34:27.140191 systemd[1]: Started sshd@22-147.75.49.59:22-218.92.0.33:58000.service.
Feb 9 07:34:27.297739 sshd[4319]: Unable to negotiate with 218.92.0.33 port 58000: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Feb 9 07:34:27.298341 systemd[1]: sshd@22-147.75.49.59:22-218.92.0.33:58000.service: Deactivated successfully.
Feb 9 07:36:06.909006 systemd[1]: Started sshd@23-147.75.49.59:22-218.92.0.25:55818.service.
Feb 9 07:36:07.996578 sshd[4335]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:36:10.451769 sshd[4335]: Failed password for root from 218.92.0.25 port 55818 ssh2
Feb 9 07:36:14.726380 sshd[4335]: Failed password for root from 218.92.0.25 port 55818 ssh2
Feb 9 07:36:18.192747 sshd[4335]: Failed password for root from 218.92.0.25 port 55818 ssh2
Feb 9 07:36:18.714282 sshd[4335]: Received disconnect from 218.92.0.25 port 55818:11: [preauth]
Feb 9 07:36:18.714282 sshd[4335]: Disconnected from authenticating user root 218.92.0.25 port 55818 [preauth]
Feb 9 07:36:18.714821 sshd[4335]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:36:18.716897 systemd[1]: sshd@23-147.75.49.59:22-218.92.0.25:55818.service: Deactivated successfully.
Feb 9 07:36:18.885978 systemd[1]: Started sshd@24-147.75.49.59:22-218.92.0.25:13570.service.
Feb 9 07:36:19.965308 sshd[4342]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:36:22.400716 sshd[4342]: Failed password for root from 218.92.0.25 port 13570 ssh2
Feb 9 07:36:24.218280 sshd[4342]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 07:36:26.673803 sshd[4342]: Failed password for root from 218.92.0.25 port 13570 ssh2
Feb 9 07:36:30.810685 sshd[4342]: Failed password for root from 218.92.0.25 port 13570 ssh2
Feb 9 07:36:32.723506 sshd[4342]: Received disconnect from 218.92.0.25 port 13570:11: [preauth]
Feb 9 07:36:32.723506 sshd[4342]: Disconnected from authenticating user root 218.92.0.25 port 13570 [preauth]
Feb 9 07:36:32.724029 sshd[4342]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:36:32.726102 systemd[1]: sshd@24-147.75.49.59:22-218.92.0.25:13570.service: Deactivated successfully.
Feb 9 07:36:32.891127 systemd[1]: Started sshd@25-147.75.49.59:22-218.92.0.25:42919.service.
Feb 9 07:36:33.962330 sshd[4349]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:36:35.985737 sshd[4349]: Failed password for root from 218.92.0.25 port 42919 ssh2
Feb 9 07:36:38.605310 sshd[4349]: Failed password for root from 218.92.0.25 port 42919 ssh2
Feb 9 07:36:42.741817 sshd[4349]: Failed password for root from 218.92.0.25 port 42919 ssh2
Feb 9 07:36:44.673782 sshd[4349]: Received disconnect from 218.92.0.25 port 42919:11: [preauth]
Feb 9 07:36:44.673782 sshd[4349]: Disconnected from authenticating user root 218.92.0.25 port 42919 [preauth]
Feb 9 07:36:44.674316 sshd[4349]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:36:44.676378 systemd[1]: sshd@25-147.75.49.59:22-218.92.0.25:42919.service: Deactivated successfully.
Feb 9 07:36:55.572208 systemd[1]: Started sshd@26-147.75.49.59:22-141.98.11.11:27800.service.
Feb 9 07:36:57.125379 sshd[4353]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.11 user=sshd
Feb 9 07:36:59.108735 sshd[4353]: Failed password for sshd from 141.98.11.11 port 27800 ssh2
Feb 9 07:37:00.608974 sshd[4353]: Connection closed by authenticating user sshd 141.98.11.11 port 27800 [preauth]
Feb 9 07:37:00.611453 systemd[1]: sshd@26-147.75.49.59:22-141.98.11.11:27800.service: Deactivated successfully.
Feb 9 07:37:41.827195 systemd[1]: Started sshd@27-147.75.49.59:22-180.101.88.197:63469.service.
Feb 9 07:37:42.765711 sshd[4363]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 07:37:45.260746 sshd[4363]: Failed password for root from 180.101.88.197 port 63469 ssh2
Feb 9 07:37:48.707918 sshd[4363]: Failed password for root from 180.101.88.197 port 63469 ssh2
Feb 9 07:37:51.971774 sshd[4363]: Failed password for root from 180.101.88.197 port 63469 ssh2
Feb 9 07:37:53.409962 sshd[4363]: Received disconnect from 180.101.88.197 port 63469:11: [preauth]
Feb 9 07:37:53.409962 sshd[4363]: Disconnected from authenticating user root 180.101.88.197 port 63469 [preauth]
Feb 9 07:37:53.410539 sshd[4363]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 07:37:53.412589 systemd[1]: sshd@27-147.75.49.59:22-180.101.88.197:63469.service: Deactivated successfully.
Feb 9 07:37:53.557911 systemd[1]: Started sshd@28-147.75.49.59:22-180.101.88.197:32751.service.
Feb 9 07:37:54.499362 sshd[4367]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 07:37:56.974733 sshd[4367]: Failed password for root from 180.101.88.197 port 32751 ssh2
Feb 9 07:38:00.421697 sshd[4367]: Failed password for root from 180.101.88.197 port 32751 ssh2
Feb 9 07:38:02.883762 sshd[4367]: Failed password for root from 180.101.88.197 port 32751 ssh2
Feb 9 07:38:03.100257 sshd[4367]: Received disconnect from 180.101.88.197 port 32751:11: [preauth]
Feb 9 07:38:03.100257 sshd[4367]: Disconnected from authenticating user root 180.101.88.197 port 32751 [preauth]
Feb 9 07:38:03.100990 sshd[4367]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 07:38:03.103135 systemd[1]: sshd@28-147.75.49.59:22-180.101.88.197:32751.service: Deactivated successfully.
Feb 9 07:38:03.254674 systemd[1]: Started sshd@29-147.75.49.59:22-180.101.88.197:46682.service.
Feb 9 07:38:04.205383 sshd[4373]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 07:38:06.053737 sshd[4373]: Failed password for root from 180.101.88.197 port 46682 ssh2
Feb 9 07:38:08.516805 sshd[4373]: Failed password for root from 180.101.88.197 port 46682 ssh2
Feb 9 07:38:12.297659 sshd[4373]: Failed password for root from 180.101.88.197 port 46682 ssh2
Feb 9 07:38:13.179085 sshd[4373]: Received disconnect from 180.101.88.197 port 46682:11: [preauth]
Feb 9 07:38:13.179085 sshd[4373]: Disconnected from authenticating user root 180.101.88.197 port 46682 [preauth]
Feb 9 07:38:13.179625 sshd[4373]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 07:38:13.181675 systemd[1]: sshd@29-147.75.49.59:22-180.101.88.197:46682.service: Deactivated successfully.
Feb 9 07:43:19.087432 systemd[1]: Started sshd@30-147.75.49.59:22-49.43.35.192:38266.service.
Feb 9 07:43:21.450967 sshd[4417]: Invalid user user from 49.43.35.192 port 38266
Feb 9 07:43:21.457334 sshd[4417]: pam_faillock(sshd:auth): User unknown
Feb 9 07:43:21.457974 sshd[4417]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:43:21.457996 sshd[4417]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=49.43.35.192
Feb 9 07:43:21.458204 sshd[4417]: pam_faillock(sshd:auth): User unknown
Feb 9 07:43:23.491846 sshd[4417]: Failed password for invalid user user from 49.43.35.192 port 38266 ssh2
Feb 9 07:43:24.269964 sshd[4419]: pam_faillock(sshd:auth): User unknown
Feb 9 07:43:24.278320 sshd[4417]: Postponed keyboard-interactive for invalid user user from 49.43.35.192 port 38266 ssh2 [preauth]
Feb 9 07:43:24.833882 sshd[4419]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:43:24.835053 sshd[4419]: pam_faillock(sshd:auth): User unknown
Feb 9 07:43:26.310364 systemd[1]: Started sshd@31-147.75.49.59:22-124.220.81.132:48854.service.
Feb 9 07:43:27.155364 sshd[4421]: Invalid user rhz from 124.220.81.132 port 48854
Feb 9 07:43:27.161464 sshd[4421]: pam_faillock(sshd:auth): User unknown
Feb 9 07:43:27.162443 sshd[4421]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:43:27.162553 sshd[4421]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.81.132
Feb 9 07:43:27.163464 sshd[4421]: pam_faillock(sshd:auth): User unknown
Feb 9 07:43:27.280716 sshd[4417]: PAM: Permission denied for illegal user user from 49.43.35.192
Feb 9 07:43:27.280949 sshd[4417]: Failed keyboard-interactive/pam for invalid user user from 49.43.35.192 port 38266 ssh2
Feb 9 07:43:27.830572 sshd[4417]: Connection closed by invalid user user 49.43.35.192 port 38266 [preauth]
Feb 9 07:43:27.833317 systemd[1]: sshd@30-147.75.49.59:22-49.43.35.192:38266.service: Deactivated successfully.
Feb 9 07:43:28.354557 sshd[4421]: Failed password for invalid user rhz from 124.220.81.132 port 48854 ssh2
Feb 9 07:43:29.631516 sshd[4421]: Received disconnect from 124.220.81.132 port 48854:11: Bye Bye [preauth]
Feb 9 07:43:29.631516 sshd[4421]: Disconnected from invalid user rhz 124.220.81.132 port 48854 [preauth]
Feb 9 07:43:29.633707 systemd[1]: sshd@31-147.75.49.59:22-124.220.81.132:48854.service: Deactivated successfully.
Feb 9 07:44:10.598098 systemd[1]: Started sshd@32-147.75.49.59:22-24.199.84.116:44840.service.
Feb 9 07:44:11.034081 sshd[4432]: Invalid user rahaa from 24.199.84.116 port 44840
Feb 9 07:44:11.040227 sshd[4432]: pam_faillock(sshd:auth): User unknown
Feb 9 07:44:11.040749 sshd[4432]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:44:11.040766 sshd[4432]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=24.199.84.116
Feb 9 07:44:11.040928 sshd[4432]: pam_faillock(sshd:auth): User unknown
Feb 9 07:44:12.607734 sshd[4432]: Failed password for invalid user rahaa from 24.199.84.116 port 44840 ssh2
Feb 9 07:44:12.697182 sshd[4432]: Received disconnect from 24.199.84.116 port 44840:11: Bye Bye [preauth]
Feb 9 07:44:12.697182 sshd[4432]: Disconnected from invalid user rahaa 24.199.84.116 port 44840 [preauth]
Feb 9 07:44:12.699719 systemd[1]: sshd@32-147.75.49.59:22-24.199.84.116:44840.service: Deactivated successfully.
Feb 9 07:44:19.312251 systemd[1]: Started sshd@33-147.75.49.59:22-43.140.221.64:60094.service.
Feb 9 07:44:20.158604 sshd[4438]: Invalid user zcho from 43.140.221.64 port 60094
Feb 9 07:44:20.164991 sshd[4438]: pam_faillock(sshd:auth): User unknown
Feb 9 07:44:20.165670 sshd[4438]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:44:20.165687 sshd[4438]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.140.221.64
Feb 9 07:44:20.165887 sshd[4438]: pam_faillock(sshd:auth): User unknown
Feb 9 07:44:22.300023 sshd[4438]: Failed password for invalid user zcho from 43.140.221.64 port 60094 ssh2
Feb 9 07:44:23.278797 sshd[4438]: Received disconnect from 43.140.221.64 port 60094:11: Bye Bye [preauth]
Feb 9 07:44:23.278797 sshd[4438]: Disconnected from invalid user zcho 43.140.221.64 port 60094 [preauth]
Feb 9 07:44:23.279508 systemd[1]: sshd@33-147.75.49.59:22-43.140.221.64:60094.service: Deactivated successfully.
Feb 9 07:44:29.418698 systemd[1]: Started sshd@34-147.75.49.59:22-165.227.213.175:50280.service.
Feb 9 07:44:29.875796 sshd[4444]: Invalid user taba from 165.227.213.175 port 50280
Feb 9 07:44:29.881944 sshd[4444]: pam_faillock(sshd:auth): User unknown
Feb 9 07:44:29.882947 sshd[4444]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:44:29.883036 sshd[4444]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.213.175
Feb 9 07:44:29.883940 sshd[4444]: pam_faillock(sshd:auth): User unknown
Feb 9 07:44:32.252110 sshd[4444]: Failed password for invalid user taba from 165.227.213.175 port 50280 ssh2
Feb 9 07:44:32.654966 sshd[4444]: Received disconnect from 165.227.213.175 port 50280:11: Bye Bye [preauth]
Feb 9 07:44:32.654966 sshd[4444]: Disconnected from invalid user taba 165.227.213.175 port 50280 [preauth]
Feb 9 07:44:32.657453 systemd[1]: sshd@34-147.75.49.59:22-165.227.213.175:50280.service: Deactivated successfully.
Feb 9 07:44:55.269347 systemd[1]: Started sshd@35-147.75.49.59:22-218.92.0.56:36259.service.
Feb 9 07:44:56.216889 sshd[4452]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 07:44:57.959754 sshd[4452]: Failed password for root from 218.92.0.56 port 36259 ssh2
Feb 9 07:44:59.754705 sshd[4452]: Failed password for root from 218.92.0.56 port 36259 ssh2
Feb 9 07:45:02.548728 sshd[4452]: Failed password for root from 218.92.0.56 port 36259 ssh2
Feb 9 07:45:02.776024 sshd[4452]: Received disconnect from 218.92.0.56 port 36259:11: [preauth]
Feb 9 07:45:02.776024 sshd[4452]: Disconnected from authenticating user root 218.92.0.56 port 36259 [preauth]
Feb 9 07:45:02.776573 sshd[4452]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 07:45:02.778568 systemd[1]: sshd@35-147.75.49.59:22-218.92.0.56:36259.service: Deactivated successfully.
Feb 9 07:45:02.941143 systemd[1]: Started sshd@36-147.75.49.59:22-218.92.0.56:28664.service.
Feb 9 07:45:03.939949 sshd[4460]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 07:45:06.310410 sshd[4460]: Failed password for root from 218.92.0.56 port 28664 ssh2
Feb 9 07:45:08.179768 sshd[4460]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 07:45:09.385088 systemd[1]: Started sshd@37-147.75.49.59:22-218.92.0.31:22940.service.
Feb 9 07:45:10.238569 sshd[4460]: Failed password for root from 218.92.0.56 port 28664 ssh2
Feb 9 07:45:10.371404 sshd[4463]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root
Feb 9 07:45:11.702758 sshd[4463]: Failed password for root from 218.92.0.31 port 22940 ssh2
Feb 9 07:45:14.357765 sshd[4460]: Failed password for root from 218.92.0.56 port 28664 ssh2
Feb 9 07:45:14.502721 sshd[4463]: Failed password for root from 218.92.0.31 port 22940 ssh2
Feb 9 07:45:14.614206 sshd[4460]: Received disconnect from 218.92.0.56 port 28664:11: [preauth]
Feb 9 07:45:14.614206 sshd[4460]: Disconnected from authenticating user root 218.92.0.56 port 28664 [preauth]
Feb 9 07:45:14.614622 sshd[4460]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 07:45:14.616587 systemd[1]: sshd@36-147.75.49.59:22-218.92.0.56:28664.service: Deactivated successfully.
Feb 9 07:45:14.779994 systemd[1]: Started sshd@38-147.75.49.59:22-218.92.0.56:41571.service.
Feb 9 07:45:15.803067 sshd[4469]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 07:45:16.971767 sshd[4463]: Failed password for root from 218.92.0.31 port 22940 ssh2
Feb 9 07:45:18.153669 sshd[4469]: Failed password for root from 218.92.0.56 port 41571 ssh2
Feb 9 07:45:18.995067 sshd[4463]: Received disconnect from 218.92.0.31 port 22940:11: [preauth]
Feb 9 07:45:18.995067 sshd[4463]: Disconnected from authenticating user root 218.92.0.31 port 22940 [preauth]
Feb 9 07:45:18.995617 sshd[4463]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root
Feb 9 07:45:18.997600 systemd[1]: sshd@37-147.75.49.59:22-218.92.0.31:22940.service: Deactivated successfully.
Feb 9 07:45:19.163159 systemd[1]: Started sshd@39-147.75.49.59:22-218.92.0.31:28623.service.
Feb 9 07:45:20.183416 sshd[4473]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root
Feb 9 07:45:22.085757 sshd[4469]: Failed password for root from 218.92.0.56 port 41571 ssh2
Feb 9 07:45:22.221737 sshd[4473]: Failed password for root from 218.92.0.31 port 28623 ssh2
Feb 9 07:45:23.556744 sshd[4469]: Failed password for root from 218.92.0.56 port 41571 ssh2
Feb 9 07:45:23.693759 sshd[4473]: Failed password for root from 218.92.0.31 port 28623 ssh2
Feb 9 07:45:24.444302 sshd[4469]: Received disconnect from 218.92.0.56 port 41571:11: [preauth]
Feb 9 07:45:24.444302 sshd[4469]: Disconnected from authenticating user root 218.92.0.56 port 41571 [preauth]
Feb 9 07:45:24.444913 sshd[4469]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 07:45:24.446879 systemd[1]: sshd@38-147.75.49.59:22-218.92.0.56:41571.service: Deactivated successfully.
Feb 9 07:45:26.168272 sshd[4473]: Failed password for root from 218.92.0.31 port 28623 ssh2
Feb 9 07:45:26.779801 sshd[4473]: Received disconnect from 218.92.0.31 port 28623:11: [preauth]
Feb 9 07:45:26.779801 sshd[4473]: Disconnected from authenticating user root 218.92.0.31 port 28623 [preauth]
Feb 9 07:45:26.780352 sshd[4473]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root
Feb 9 07:45:26.782359 systemd[1]: sshd@39-147.75.49.59:22-218.92.0.31:28623.service: Deactivated successfully.
Feb 9 07:45:26.924327 systemd[1]: Started sshd@40-147.75.49.59:22-218.92.0.31:31040.service.
Feb 9 07:45:27.899897 sshd[4481]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root
Feb 9 07:45:30.563960 sshd[4481]: Failed password for root from 218.92.0.31 port 31040 ssh2
Feb 9 07:45:34.154449 sshd[4481]: Failed password for root from 218.92.0.31 port 31040 ssh2
Feb 9 07:45:35.617888 sshd[4481]: Failed password for root from 218.92.0.31 port 31040 ssh2
Feb 9 07:45:36.517190 sshd[4481]: Received disconnect from 218.92.0.31 port 31040:11: [preauth]
Feb 9 07:45:36.517190 sshd[4481]: Disconnected from authenticating user root 218.92.0.31 port 31040 [preauth]
Feb 9 07:45:36.517737 sshd[4481]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root
Feb 9 07:45:36.519753 systemd[1]: sshd@40-147.75.49.59:22-218.92.0.31:31040.service: Deactivated successfully.
Feb 9 07:46:14.565813 systemd[1]: Started sshd@41-147.75.49.59:22-49.232.235.48:46752.service.
Feb 9 07:46:40.013726 systemd[1]: Started sshd@42-147.75.49.59:22-170.106.152.162:57526.service.
Feb 9 07:46:43.845045 sshd[4493]: Invalid user tanvir from 170.106.152.162 port 57526
Feb 9 07:46:43.851165 sshd[4493]: pam_faillock(sshd:auth): User unknown
Feb 9 07:46:43.852310 sshd[4493]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:46:43.852399 sshd[4493]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.106.152.162
Feb 9 07:46:43.853339 sshd[4493]: pam_faillock(sshd:auth): User unknown
Feb 9 07:46:45.950813 sshd[4493]: Failed password for invalid user tanvir from 170.106.152.162 port 57526 ssh2
Feb 9 07:46:47.788817 sshd[4493]: Received disconnect from 170.106.152.162 port 57526:11: Bye Bye [preauth]
Feb 9 07:46:47.788817 sshd[4493]: Disconnected from invalid user tanvir 170.106.152.162 port 57526 [preauth]
Feb 9 07:46:47.789399 systemd[1]: sshd@42-147.75.49.59:22-170.106.152.162:57526.service: Deactivated successfully.
Feb 9 07:46:49.127957 systemd[1]: Started sshd@43-147.75.49.59:22-124.156.202.69:56674.service.
Feb 9 07:46:50.139336 sshd[4497]: Invalid user jurist2 from 124.156.202.69 port 56674
Feb 9 07:46:50.145396 sshd[4497]: pam_faillock(sshd:auth): User unknown
Feb 9 07:46:50.146374 sshd[4497]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:46:50.146465 sshd[4497]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.156.202.69
Feb 9 07:46:50.147391 sshd[4497]: pam_faillock(sshd:auth): User unknown
Feb 9 07:46:52.206798 sshd[4497]: Failed password for invalid user jurist2 from 124.156.202.69 port 56674 ssh2
Feb 9 07:46:53.764427 sshd[4497]: Received disconnect from 124.156.202.69 port 56674:11: Bye Bye [preauth]
Feb 9 07:46:53.764427 sshd[4497]: Disconnected from invalid user jurist2 124.156.202.69 port 56674 [preauth]
Feb 9 07:46:53.766895 systemd[1]: sshd@43-147.75.49.59:22-124.156.202.69:56674.service: Deactivated successfully.
Feb 9 07:47:10.345733 systemd[1]: Started sshd@44-147.75.49.59:22-152.136.35.30:58508.service.
Feb 9 07:47:14.270028 systemd[1]: Started sshd@45-147.75.49.59:22-198.20.246.131:59744.service.
Feb 9 07:47:14.461052 sshd[4507]: Invalid user dada from 198.20.246.131 port 59744
Feb 9 07:47:14.467230 sshd[4507]: pam_faillock(sshd:auth): User unknown
Feb 9 07:47:14.468315 sshd[4507]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:47:14.468404 sshd[4507]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.246.131
Feb 9 07:47:14.469551 sshd[4507]: pam_faillock(sshd:auth): User unknown
Feb 9 07:47:16.823881 sshd[4507]: Failed password for invalid user dada from 198.20.246.131 port 59744 ssh2
Feb 9 07:47:19.180979 sshd[4507]: Received disconnect from 198.20.246.131 port 59744:11: Bye Bye [preauth]
Feb 9 07:47:19.180979 sshd[4507]: Disconnected from invalid user dada 198.20.246.131 port 59744 [preauth]
Feb 9 07:47:19.183435 systemd[1]: sshd@45-147.75.49.59:22-198.20.246.131:59744.service: Deactivated successfully.
Feb 9 07:47:27.491719 systemd[1]: Started sshd@46-147.75.49.59:22-218.92.0.33:50358.service.
Feb 9 07:47:27.651166 sshd[4513]: Unable to negotiate with 218.92.0.33 port 50358: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Feb 9 07:47:27.652965 systemd[1]: sshd@46-147.75.49.59:22-218.92.0.33:50358.service: Deactivated successfully.
Feb 9 07:47:31.905801 systemd[1]: Started sshd@47-147.75.49.59:22-54.37.228.73:59838.service.
Feb 9 07:47:32.746600 sshd[4517]: Invalid user aylin from 54.37.228.73 port 59838
Feb 9 07:47:32.752792 sshd[4517]: pam_faillock(sshd:auth): User unknown
Feb 9 07:47:32.753882 sshd[4517]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:47:32.753972 sshd[4517]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=54.37.228.73
Feb 9 07:47:32.754868 sshd[4517]: pam_faillock(sshd:auth): User unknown
Feb 9 07:47:34.913871 sshd[4517]: Failed password for invalid user aylin from 54.37.228.73 port 59838 ssh2
Feb 9 07:47:36.061086 sshd[4517]: Received disconnect from 54.37.228.73 port 59838:11: Bye Bye [preauth]
Feb 9 07:47:36.061086 sshd[4517]: Disconnected from invalid user aylin 54.37.228.73 port 59838 [preauth]
Feb 9 07:47:36.063595 systemd[1]: sshd@47-147.75.49.59:22-54.37.228.73:59838.service: Deactivated successfully.
Feb 9 07:48:02.008777 systemd[1]: Started sshd@48-147.75.49.59:22-43.142.87.223:56066.service.
Feb 9 07:48:03.648630 sshd[4523]: Invalid user pouya from 43.142.87.223 port 56066
Feb 9 07:48:03.654644 sshd[4523]: pam_faillock(sshd:auth): User unknown
Feb 9 07:48:03.655646 sshd[4523]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:48:03.655735 sshd[4523]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.142.87.223
Feb 9 07:48:03.656753 sshd[4523]: pam_faillock(sshd:auth): User unknown
Feb 9 07:48:06.072088 sshd[4523]: Failed password for invalid user pouya from 43.142.87.223 port 56066 ssh2
Feb 9 07:48:07.166891 sshd[4523]: Received disconnect from 43.142.87.223 port 56066:11: Bye Bye [preauth]
Feb 9 07:48:07.166891 sshd[4523]: Disconnected from invalid user pouya 43.142.87.223 port 56066 [preauth]
Feb 9 07:48:07.169359 systemd[1]: sshd@48-147.75.49.59:22-43.142.87.223:56066.service: Deactivated successfully.
Feb 9 07:48:14.570840 sshd[4489]: Timeout before authentication for 49.232.235.48 port 46752
Feb 9 07:48:14.572194 systemd[1]: sshd@41-147.75.49.59:22-49.232.235.48:46752.service: Deactivated successfully.
Feb 9 07:48:27.039117 systemd[1]: Started sshd@49-147.75.49.59:22-51.77.245.237:60420.service.
Feb 9 07:48:27.922110 sshd[4534]: Invalid user mxrush from 51.77.245.237 port 60420
Feb 9 07:48:27.928395 sshd[4534]: pam_faillock(sshd:auth): User unknown
Feb 9 07:48:27.929405 sshd[4534]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:48:27.929518 sshd[4534]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.77.245.237
Feb 9 07:48:27.930421 sshd[4534]: pam_faillock(sshd:auth): User unknown
Feb 9 07:48:30.640942 sshd[4534]: Failed password for invalid user mxrush from 51.77.245.237 port 60420 ssh2
Feb 9 07:48:33.504823 sshd[4534]: Received disconnect from 51.77.245.237 port 60420:11: Bye Bye [preauth]
Feb 9 07:48:33.504823 sshd[4534]: Disconnected from invalid user mxrush 51.77.245.237 port 60420 [preauth]
Feb 9 07:48:33.507324 systemd[1]: sshd@49-147.75.49.59:22-51.77.245.237:60420.service: Deactivated successfully.
Feb 9 07:49:10.350995 sshd[4503]: Timeout before authentication for 152.136.35.30 port 58508
Feb 9 07:49:10.352439 systemd[1]: sshd@44-147.75.49.59:22-152.136.35.30:58508.service: Deactivated successfully.
Feb 9 07:49:19.597362 systemd[1]: Started sshd@50-147.75.49.59:22-54.37.228.73:35080.service.
Feb 9 07:49:20.439995 sshd[4544]: Invalid user nahid from 54.37.228.73 port 35080
Feb 9 07:49:20.446041 sshd[4544]: pam_faillock(sshd:auth): User unknown
Feb 9 07:49:20.447079 sshd[4544]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:49:20.447195 sshd[4544]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=54.37.228.73
Feb 9 07:49:20.448367 sshd[4544]: pam_faillock(sshd:auth): User unknown
Feb 9 07:49:22.768334 sshd[4544]: Failed password for invalid user nahid from 54.37.228.73 port 35080 ssh2
Feb 9 07:49:23.271701 sshd[4544]: Received disconnect from 54.37.228.73 port 35080:11: Bye Bye [preauth]
Feb 9 07:49:23.271701 sshd[4544]: Disconnected from invalid user nahid 54.37.228.73 port 35080 [preauth]
Feb 9 07:49:23.272324 systemd[1]: sshd@50-147.75.49.59:22-54.37.228.73:35080.service: Deactivated successfully.
Feb 9 07:49:24.609269 systemd[1]: Started sshd@51-147.75.49.59:22-24.199.84.116:53308.service.
Feb 9 07:49:25.047659 sshd[4548]: Invalid user sampad from 24.199.84.116 port 53308
Feb 9 07:49:25.053602 sshd[4548]: pam_faillock(sshd:auth): User unknown
Feb 9 07:49:25.054575 sshd[4548]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:49:25.054659 sshd[4548]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=24.199.84.116
Feb 9 07:49:25.055511 sshd[4548]: pam_faillock(sshd:auth): User unknown
Feb 9 07:49:27.395445 sshd[4548]: Failed password for invalid user sampad from 24.199.84.116 port 53308 ssh2
Feb 9 07:49:29.798813 sshd[4548]: Received disconnect from 24.199.84.116 port 53308:11: Bye Bye [preauth]
Feb 9 07:49:29.798813 sshd[4548]: Disconnected from invalid user sampad 24.199.84.116 port 53308 [preauth]
Feb 9 07:49:29.801432 systemd[1]: sshd@51-147.75.49.59:22-24.199.84.116:53308.service: Deactivated successfully.
Feb 9 07:49:33.494557 systemd[1]: Started sshd@52-147.75.49.59:22-165.227.213.175:52398.service.
Feb 9 07:49:33.932967 sshd[4554]: Invalid user hasani from 165.227.213.175 port 52398
Feb 9 07:49:33.939260 sshd[4554]: pam_faillock(sshd:auth): User unknown
Feb 9 07:49:33.940003 sshd[4554]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:49:33.940035 sshd[4554]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.213.175
Feb 9 07:49:33.940316 sshd[4554]: pam_faillock(sshd:auth): User unknown
Feb 9 07:49:36.711693 sshd[4554]: Failed password for invalid user hasani from 165.227.213.175 port 52398 ssh2
Feb 9 07:49:37.215791 sshd[4554]: Received disconnect from 165.227.213.175 port 52398:11: Bye Bye [preauth]
Feb 9 07:49:37.215791 sshd[4554]: Disconnected from invalid user hasani 165.227.213.175 port 52398 [preauth]
Feb 9 07:49:37.218332 systemd[1]: sshd@52-147.75.49.59:22-165.227.213.175:52398.service: Deactivated successfully.
Feb 9 07:49:41.137635 systemd[1]: Started sshd@53-147.75.49.59:22-43.157.98.116:57332.service.
Feb 9 07:49:42.014083 sshd[4558]: Invalid user rezakh from 43.157.98.116 port 57332
Feb 9 07:49:42.020150 sshd[4558]: pam_faillock(sshd:auth): User unknown
Feb 9 07:49:42.021253 sshd[4558]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:49:42.021339 sshd[4558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.98.116
Feb 9 07:49:42.022305 sshd[4558]: pam_faillock(sshd:auth): User unknown
Feb 9 07:49:43.695016 sshd[4558]: Failed password for invalid user rezakh from 43.157.98.116 port 57332 ssh2
Feb 9 07:49:45.420839 sshd[4558]: Received disconnect from 43.157.98.116 port 57332:11: Bye Bye [preauth]
Feb 9 07:49:45.420839 sshd[4558]: Disconnected from invalid user rezakh 43.157.98.116 port 57332 [preauth]
Feb 9 07:49:45.423309 systemd[1]: sshd@53-147.75.49.59:22-43.157.98.116:57332.service: Deactivated successfully.
Feb 9 07:49:59.168751 systemd[1]: Started sshd@54-147.75.49.59:22-124.156.202.69:47370.service.
Feb 9 07:50:00.150693 sshd[4564]: Invalid user bahaar from 124.156.202.69 port 47370
Feb 9 07:50:00.157361 sshd[4564]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:00.158161 sshd[4564]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:00.158178 sshd[4564]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.156.202.69
Feb 9 07:50:00.159692 sshd[4564]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:01.760391 systemd[1]: Started sshd@55-147.75.49.59:22-198.20.246.131:33820.service.
Feb 9 07:50:01.873166 systemd[1]: Started sshd@56-147.75.49.59:22-51.77.245.237:54472.service.
Feb 9 07:50:01.993804 sshd[4570]: Invalid user ayeh from 198.20.246.131 port 33820
Feb 9 07:50:02.000065 sshd[4570]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:02.000886 sshd[4570]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:02.000903 sshd[4570]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.246.131
Feb 9 07:50:02.001097 sshd[4570]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:02.303869 sshd[4564]: Failed password for invalid user bahaar from 124.156.202.69 port 47370 ssh2
Feb 9 07:50:02.588192 sshd[4564]: Received disconnect from 124.156.202.69 port 47370:11: Bye Bye [preauth]
Feb 9 07:50:02.588192 sshd[4564]: Disconnected from invalid user bahaar 124.156.202.69 port 47370 [preauth]
Feb 9 07:50:02.590769 systemd[1]: sshd@54-147.75.49.59:22-124.156.202.69:47370.service: Deactivated successfully.
Feb 9 07:50:02.701917 sshd[4573]: Invalid user m11 from 51.77.245.237 port 54472
Feb 9 07:50:02.703167 sshd[4573]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:02.703381 sshd[4573]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:02.703403 sshd[4573]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.77.245.237
Feb 9 07:50:02.703654 sshd[4573]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:04.280711 sshd[4570]: Failed password for invalid user ayeh from 198.20.246.131 port 33820 ssh2
Feb 9 07:50:04.787721 sshd[4573]: Failed password for invalid user m11 from 51.77.245.237 port 54472 ssh2
Feb 9 07:50:05.700054 sshd[4570]: Received disconnect from 198.20.246.131 port 33820:11: Bye Bye [preauth]
Feb 9 07:50:05.700054 sshd[4570]: Disconnected from invalid user ayeh 198.20.246.131 port 33820 [preauth]
Feb 9 07:50:05.702565 systemd[1]: sshd@55-147.75.49.59:22-198.20.246.131:33820.service: Deactivated successfully.
Feb 9 07:50:06.796748 sshd[4573]: Received disconnect from 51.77.245.237 port 54472:11: Bye Bye [preauth]
Feb 9 07:50:06.796748 sshd[4573]: Disconnected from invalid user m11 51.77.245.237 port 54472 [preauth]
Feb 9 07:50:06.799577 systemd[1]: sshd@56-147.75.49.59:22-51.77.245.237:54472.service: Deactivated successfully.
Feb 9 07:50:10.541463 systemd[1]: Started sshd@57-147.75.49.59:22-91.93.63.184:50884.service.
Feb 9 07:50:11.654965 sshd[4579]: Invalid user sungsim from 91.93.63.184 port 50884
Feb 9 07:50:11.661070 sshd[4579]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:11.662153 sshd[4579]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:11.662241 sshd[4579]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.93.63.184
Feb 9 07:50:11.663251 sshd[4579]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:13.983506 sshd[4579]: Failed password for invalid user sungsim from 91.93.63.184 port 50884 ssh2
Feb 9 07:50:14.506104 sshd[4579]: Received disconnect from 91.93.63.184 port 50884:11: Bye Bye [preauth]
Feb 9 07:50:14.506104 sshd[4579]: Disconnected from invalid user sungsim 91.93.63.184 port 50884 [preauth]
Feb 9 07:50:14.508670 systemd[1]: sshd@57-147.75.49.59:22-91.93.63.184:50884.service: Deactivated successfully.
Feb 9 07:50:20.511078 systemd[1]: Started sshd@58-147.75.49.59:22-43.153.53.166:38098.service.
Feb 9 07:50:20.629173 sshd[4585]: Invalid user bartek from 43.153.53.166 port 38098
Feb 9 07:50:20.631044 sshd[4585]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:20.631339 sshd[4585]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:20.631366 sshd[4585]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.153.53.166
Feb 9 07:50:20.631673 sshd[4585]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:21.761571 systemd[1]: Started sshd@59-147.75.49.59:22-24.199.84.116:36748.service.
Feb 9 07:50:22.198524 sshd[4588]: Invalid user pclpredict from 24.199.84.116 port 36748
Feb 9 07:50:22.204582 sshd[4588]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:22.205753 sshd[4588]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:22.205842 sshd[4588]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=24.199.84.116
Feb 9 07:50:22.206810 sshd[4588]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:23.187452 sshd[4585]: Failed password for invalid user bartek from 43.153.53.166 port 38098 ssh2
Feb 9 07:50:23.902181 sshd[4585]: Received disconnect from 43.153.53.166 port 38098:11: Bye Bye [preauth]
Feb 9 07:50:23.902181 sshd[4585]: Disconnected from invalid user bartek 43.153.53.166 port 38098 [preauth]
Feb 9 07:50:23.904637 systemd[1]: sshd@58-147.75.49.59:22-43.153.53.166:38098.service: Deactivated successfully.
Feb 9 07:50:24.370714 sshd[4588]: Failed password for invalid user pclpredict from 24.199.84.116 port 36748 ssh2
Feb 9 07:50:24.816632 systemd[1]: Started sshd@60-147.75.49.59:22-54.37.228.73:55678.service.
Feb 9 07:50:24.951245 sshd[4588]: Received disconnect from 24.199.84.116 port 36748:11: Bye Bye [preauth]
Feb 9 07:50:24.951245 sshd[4588]: Disconnected from invalid user pclpredict 24.199.84.116 port 36748 [preauth]
Feb 9 07:50:24.953856 systemd[1]: sshd@59-147.75.49.59:22-24.199.84.116:36748.service: Deactivated successfully.
Feb 9 07:50:25.647514 sshd[4593]: Invalid user mrayan from 54.37.228.73 port 55678
Feb 9 07:50:25.653633 sshd[4593]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:25.654637 sshd[4593]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:25.654727 sshd[4593]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=54.37.228.73
Feb 9 07:50:25.655665 sshd[4593]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:27.563733 sshd[4593]: Failed password for invalid user mrayan from 54.37.228.73 port 55678 ssh2
Feb 9 07:50:29.484291 sshd[4593]: Received disconnect from 54.37.228.73 port 55678:11: Bye Bye [preauth]
Feb 9 07:50:29.484291 sshd[4593]: Disconnected from invalid user mrayan 54.37.228.73 port 55678 [preauth]
Feb 9 07:50:29.487504 systemd[1]: Started sshd@61-147.75.49.59:22-165.227.213.175:53474.service.
Feb 9 07:50:29.489133 systemd[1]: sshd@60-147.75.49.59:22-54.37.228.73:55678.service: Deactivated successfully.
Feb 9 07:50:29.933606 sshd[4599]: Invalid user alirh from 165.227.213.175 port 53474
Feb 9 07:50:29.939631 sshd[4599]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:29.940602 sshd[4599]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:29.940691 sshd[4599]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.213.175
Feb 9 07:50:29.941610 sshd[4599]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:30.978924 systemd[1]: Started sshd@62-147.75.49.59:22-185.248.23.63:41788.service.
Feb 9 07:50:31.955965 sshd[4603]: Invalid user fat from 185.248.23.63 port 41788
Feb 9 07:50:31.962063 sshd[4603]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:31.963148 sshd[4603]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:31.963238 sshd[4603]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.248.23.63
Feb 9 07:50:31.964264 sshd[4603]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:32.066017 sshd[4599]: Failed password for invalid user alirh from 165.227.213.175 port 53474 ssh2
Feb 9 07:50:33.518052 sshd[4599]: Received disconnect from 165.227.213.175 port 53474:11: Bye Bye [preauth]
Feb 9 07:50:33.518052 sshd[4599]: Disconnected from invalid user alirh 165.227.213.175 port 53474 [preauth]
Feb 9 07:50:33.520472 systemd[1]: sshd@61-147.75.49.59:22-165.227.213.175:53474.service: Deactivated successfully.
Feb 9 07:50:34.364454 sshd[4603]: Failed password for invalid user fat from 185.248.23.63 port 41788 ssh2
Feb 9 07:50:36.724431 sshd[4603]: Received disconnect from 185.248.23.63 port 41788:11: Bye Bye [preauth]
Feb 9 07:50:36.724431 sshd[4603]: Disconnected from invalid user fat 185.248.23.63 port 41788 [preauth]
Feb 9 07:50:36.727023 systemd[1]: sshd@62-147.75.49.59:22-185.248.23.63:41788.service: Deactivated successfully.
Feb 9 07:50:38.511875 systemd[1]: Started sshd@63-147.75.49.59:22-77.73.131.239:37378.service.
Feb 9 07:50:39.430726 sshd[4609]: Invalid user mpds from 77.73.131.239 port 37378
Feb 9 07:50:39.436830 sshd[4609]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:39.437567 sshd[4609]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:50:39.437585 sshd[4609]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=77.73.131.239
Feb 9 07:50:39.437778 sshd[4609]: pam_faillock(sshd:auth): User unknown
Feb 9 07:50:41.602117 sshd[4609]: Failed password for invalid user mpds from 77.73.131.239 port 37378 ssh2
Feb 9 07:50:42.322012 sshd[4609]: Received disconnect from 77.73.131.239 port 37378:11: Bye Bye [preauth]
Feb 9 07:50:42.322012 sshd[4609]: Disconnected from invalid user mpds 77.73.131.239 port 37378 [preauth]
Feb 9 07:50:42.323663 systemd[1]: sshd@63-147.75.49.59:22-77.73.131.239:37378.service: Deactivated successfully.
Feb 9 07:51:02.654438 systemd[1]: Started sshd@64-147.75.49.59:22-198.20.246.131:53182.service.
Feb 9 07:51:02.854842 sshd[4616]: Invalid user bithika from 198.20.246.131 port 53182
Feb 9 07:51:02.856963 sshd[4616]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:02.857305 sshd[4616]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:51:02.857338 sshd[4616]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.246.131
Feb 9 07:51:02.857692 sshd[4616]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:03.644037 systemd[1]: Started sshd@65-147.75.49.59:22-124.156.202.69:54074.service.
Feb 9 07:51:04.687752 sshd[4619]: Invalid user cursosst from 124.156.202.69 port 54074
Feb 9 07:51:04.693906 sshd[4619]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:04.694994 sshd[4619]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:51:04.695082 sshd[4619]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.156.202.69
Feb 9 07:51:04.696000 sshd[4619]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:04.845869 sshd[4616]: Failed password for invalid user bithika from 198.20.246.131 port 53182 ssh2
Feb 9 07:51:06.624764 sshd[4619]: Failed password for invalid user cursosst from 124.156.202.69 port 54074 ssh2
Feb 9 07:51:06.632636 sshd[4616]: Received disconnect from 198.20.246.131 port 53182:11: Bye Bye [preauth]
Feb 9 07:51:06.632636 sshd[4616]: Disconnected from invalid user bithika 198.20.246.131 port 53182 [preauth]
Feb 9 07:51:06.635101 systemd[1]: sshd@64-147.75.49.59:22-198.20.246.131:53182.service: Deactivated successfully.
Feb 9 07:51:07.274564 systemd[1]: Started sshd@66-147.75.49.59:22-51.77.245.237:44982.service.
Feb 9 07:51:08.098621 sshd[4623]: Invalid user bithika from 51.77.245.237 port 44982
Feb 9 07:51:08.104754 sshd[4623]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:08.105865 sshd[4623]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:51:08.105957 sshd[4623]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.77.245.237
Feb 9 07:51:08.106949 sshd[4623]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:08.335892 sshd[4619]: Received disconnect from 124.156.202.69 port 54074:11: Bye Bye [preauth]
Feb 9 07:51:08.335892 sshd[4619]: Disconnected from invalid user cursosst 124.156.202.69 port 54074 [preauth]
Feb 9 07:51:08.338247 systemd[1]: sshd@65-147.75.49.59:22-124.156.202.69:54074.service: Deactivated successfully.
Feb 9 07:51:10.251648 sshd[4623]: Failed password for invalid user bithika from 51.77.245.237 port 44982 ssh2
Feb 9 07:51:12.007633 sshd[4623]: Received disconnect from 51.77.245.237 port 44982:11: Bye Bye [preauth]
Feb 9 07:51:12.007633 sshd[4623]: Disconnected from invalid user bithika 51.77.245.237 port 44982 [preauth]
Feb 9 07:51:12.010133 systemd[1]: sshd@66-147.75.49.59:22-51.77.245.237:44982.service: Deactivated successfully.
Feb 9 07:51:15.760385 systemd[1]: Started sshd@67-147.75.49.59:22-24.199.84.116:42614.service.
Feb 9 07:51:16.198970 sshd[4630]: Invalid user mtproto-proxy from 24.199.84.116 port 42614
Feb 9 07:51:16.205169 sshd[4630]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:16.206185 sshd[4630]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:51:16.206276 sshd[4630]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=24.199.84.116
Feb 9 07:51:16.207247 sshd[4630]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:18.451779 sshd[4630]: Failed password for invalid user mtproto-proxy from 24.199.84.116 port 42614 ssh2
Feb 9 07:51:18.904568 sshd[4630]: Received disconnect from 24.199.84.116 port 42614:11: Bye Bye [preauth]
Feb 9 07:51:18.904568 sshd[4630]: Disconnected from invalid user mtproto-proxy 24.199.84.116 port 42614 [preauth]
Feb 9 07:51:18.907127 systemd[1]: sshd@67-147.75.49.59:22-24.199.84.116:42614.service: Deactivated successfully.
Feb 9 07:51:23.054270 systemd[1]: Started sshd@68-147.75.49.59:22-165.227.213.175:40486.service.
Feb 9 07:51:23.492732 sshd[4635]: Invalid user kernsp from 165.227.213.175 port 40486
Feb 9 07:51:23.498978 sshd[4635]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:23.499995 sshd[4635]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:51:23.500084 sshd[4635]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.213.175
Feb 9 07:51:23.501148 sshd[4635]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:25.038465 sshd[4635]: Failed password for invalid user kernsp from 165.227.213.175 port 40486 ssh2
Feb 9 07:51:26.634078 sshd[4635]: Received disconnect from 165.227.213.175 port 40486:11: Bye Bye [preauth]
Feb 9 07:51:26.634078 sshd[4635]: Disconnected from invalid user kernsp 165.227.213.175 port 40486 [preauth]
Feb 9 07:51:26.636609 systemd[1]: sshd@68-147.75.49.59:22-165.227.213.175:40486.service: Deactivated successfully.
Feb 9 07:51:27.246625 systemd[1]: Started sshd@69-147.75.49.59:22-124.220.81.132:49086.service.
Feb 9 07:51:28.740747 systemd[1]: Started sshd@70-147.75.49.59:22-54.37.228.73:46716.service.
Feb 9 07:51:28.876586 sshd[4641]: Invalid user hamidsham from 124.220.81.132 port 49086
Feb 9 07:51:28.882627 sshd[4641]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:28.883620 sshd[4641]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:51:28.883712 sshd[4641]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.81.132
Feb 9 07:51:28.884792 sshd[4641]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:29.568809 systemd[1]: Started sshd@71-147.75.49.59:22-43.140.221.64:36290.service.
Feb 9 07:51:29.569410 sshd[4644]: Invalid user atabak from 54.37.228.73 port 46716
Feb 9 07:51:29.570765 sshd[4644]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:29.571044 sshd[4644]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:51:29.571063 sshd[4644]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=54.37.228.73
Feb 9 07:51:29.571248 sshd[4644]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:30.414622 sshd[4647]: Invalid user liuyunhong from 43.140.221.64 port 36290
Feb 9 07:51:30.420959 sshd[4647]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:30.421931 sshd[4647]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:51:30.422021 sshd[4647]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.140.221.64
Feb 9 07:51:30.422916 sshd[4647]: pam_faillock(sshd:auth): User unknown
Feb 9 07:51:31.108759 sshd[4641]: Failed password for invalid user hamidsham from 124.220.81.132 port 49086 ssh2
Feb 9 07:51:31.931597 sshd[4644]: Failed password for invalid user atabak from 54.37.228.73 port 46716 ssh2
Feb 9 07:51:32.587047 sshd[4644]: Received disconnect from 54.37.228.73 port 46716:11: Bye Bye [preauth]
Feb 9 07:51:32.587047 sshd[4644]: Disconnected from invalid user atabak 54.37.228.73 port 46716 [preauth]
Feb 9 07:51:32.587515 sshd[4647]: Failed password for invalid user liuyunhong from 43.140.221.64 port 36290 ssh2
Feb 9 07:51:32.589538 systemd[1]: sshd@70-147.75.49.59:22-54.37.228.73:46716.service: Deactivated successfully.
Feb 9 07:51:33.074967 sshd[4647]: Received disconnect from 43.140.221.64 port 36290:11: Bye Bye [preauth]
Feb 9 07:51:33.074967 sshd[4647]: Disconnected from invalid user liuyunhong 43.140.221.64 port 36290 [preauth]
Feb 9 07:51:33.077454 systemd[1]: sshd@71-147.75.49.59:22-43.140.221.64:36290.service: Deactivated successfully.
Feb 9 07:51:33.416075 sshd[4641]: Received disconnect from 124.220.81.132 port 49086:11: Bye Bye [preauth]
Feb 9 07:51:33.416075 sshd[4641]: Disconnected from invalid user hamidsham 124.220.81.132 port 49086 [preauth]
Feb 9 07:51:33.418654 systemd[1]: sshd@69-147.75.49.59:22-124.220.81.132:49086.service: Deactivated successfully.
Feb 9 07:51:49.819947 systemd[1]: Started sshd@72-147.75.49.59:22-147.75.109.163:54698.service.
Feb 9 07:51:49.850526 sshd[4655]: Accepted publickey for core from 147.75.109.163 port 54698 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:51:49.851290 sshd[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:51:49.854089 systemd-logind[1441]: New session 10 of user core.
Feb 9 07:51:49.854661 systemd[1]: Started session-10.scope.
Feb 9 07:51:49.990206 sshd[4655]: pam_unix(sshd:session): session closed for user core
Feb 9 07:51:49.991620 systemd[1]: sshd@72-147.75.49.59:22-147.75.109.163:54698.service: Deactivated successfully.
Feb 9 07:51:49.992058 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 07:51:49.992410 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
Feb 9 07:51:49.992890 systemd-logind[1441]: Removed session 10.
Feb 9 07:51:54.999659 systemd[1]: Started sshd@73-147.75.49.59:22-147.75.109.163:34708.service.
Feb 9 07:51:55.028555 sshd[4681]: Accepted publickey for core from 147.75.109.163 port 34708 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:51:55.029370 sshd[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:51:55.032084 systemd-logind[1441]: New session 11 of user core.
Feb 9 07:51:55.032700 systemd[1]: Started session-11.scope.
Feb 9 07:51:55.122381 sshd[4681]: pam_unix(sshd:session): session closed for user core
Feb 9 07:51:55.123829 systemd[1]: sshd@73-147.75.49.59:22-147.75.109.163:34708.service: Deactivated successfully.
Feb 9 07:51:55.124239 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 07:51:55.124616 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit.
Feb 9 07:51:55.125111 systemd-logind[1441]: Removed session 11.
Feb 9 07:52:00.133371 systemd[1]: Started sshd@74-147.75.49.59:22-147.75.109.163:34714.service.
Feb 9 07:52:00.134012 systemd[1]: Started sshd@75-147.75.49.59:22-198.20.246.131:44310.service.
Feb 9 07:52:00.161584 sshd[4710]: Accepted publickey for core from 147.75.109.163 port 34714 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:00.162382 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:00.165127 systemd-logind[1441]: New session 12 of user core.
Feb 9 07:52:00.165690 systemd[1]: Started session-12.scope.
Feb 9 07:52:00.255671 sshd[4710]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:00.257345 systemd[1]: sshd@74-147.75.49.59:22-147.75.109.163:34714.service: Deactivated successfully.
Feb 9 07:52:00.257843 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 07:52:00.258252 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit.
Feb 9 07:52:00.258802 systemd-logind[1441]: Removed session 12.
Feb 9 07:52:00.342628 sshd[4711]: Invalid user hasani from 198.20.246.131 port 44310
Feb 9 07:52:00.348622 sshd[4711]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:00.349583 sshd[4711]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:00.349664 sshd[4711]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.246.131
Feb 9 07:52:00.350517 sshd[4711]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:01.968020 sshd[4711]: Failed password for invalid user hasani from 198.20.246.131 port 44310 ssh2
Feb 9 07:52:02.593443 systemd[1]: Started sshd@76-147.75.49.59:22-61.177.172.160:21086.service.
Feb 9 07:52:03.576321 sshd[4711]: Received disconnect from 198.20.246.131 port 44310:11: Bye Bye [preauth]
Feb 9 07:52:03.576321 sshd[4711]: Disconnected from invalid user hasani 198.20.246.131 port 44310 [preauth]
Feb 9 07:52:03.578788 systemd[1]: sshd@75-147.75.49.59:22-198.20.246.131:44310.service: Deactivated successfully.
Feb 9 07:52:03.786662 sshd[4738]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 07:52:05.265703 systemd[1]: Started sshd@77-147.75.49.59:22-147.75.109.163:50408.service.
Feb 9 07:52:05.295758 sshd[4742]: Accepted publickey for core from 147.75.109.163 port 50408 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:05.296623 sshd[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:05.299774 systemd-logind[1441]: New session 13 of user core.
Feb 9 07:52:05.300367 systemd[1]: Started session-13.scope.
Feb 9 07:52:05.389202 sshd[4742]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:05.390997 systemd[1]: sshd@77-147.75.49.59:22-147.75.109.163:50408.service: Deactivated successfully.
Feb 9 07:52:05.391343 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 07:52:05.391766 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit.
Feb 9 07:52:05.392331 systemd[1]: Started sshd@78-147.75.49.59:22-147.75.109.163:50416.service.
Feb 9 07:52:05.392835 systemd-logind[1441]: Removed session 13.
Feb 9 07:52:05.422248 sshd[4768]: Accepted publickey for core from 147.75.109.163 port 50416 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:05.423100 sshd[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:05.426168 systemd-logind[1441]: New session 14 of user core.
Feb 9 07:52:05.426766 systemd[1]: Started session-14.scope.
Feb 9 07:52:05.812516 sshd[4768]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:05.814437 systemd[1]: sshd@78-147.75.49.59:22-147.75.109.163:50416.service: Deactivated successfully.
Feb 9 07:52:05.814801 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 07:52:05.815094 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit.
Feb 9 07:52:05.815708 systemd[1]: Started sshd@79-147.75.49.59:22-147.75.109.163:50418.service.
Feb 9 07:52:05.816147 systemd-logind[1441]: Removed session 14.
Feb 9 07:52:05.843712 sshd[4792]: Accepted publickey for core from 147.75.109.163 port 50418 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:05.844533 sshd[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:05.846873 systemd-logind[1441]: New session 15 of user core.
Feb 9 07:52:05.847362 systemd[1]: Started session-15.scope.
Feb 9 07:52:05.970200 sshd[4792]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:05.971871 systemd[1]: sshd@79-147.75.49.59:22-147.75.109.163:50418.service: Deactivated successfully.
Feb 9 07:52:05.972389 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 07:52:05.972927 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
Feb 9 07:52:05.973477 systemd-logind[1441]: Removed session 15.
Feb 9 07:52:06.483210 sshd[4738]: Failed password for root from 61.177.172.160 port 21086 ssh2
Feb 9 07:52:06.976238 systemd[1]: Started sshd@80-147.75.49.59:22-24.199.84.116:33746.service.
Feb 9 07:52:07.414840 sshd[4818]: Invalid user m11 from 24.199.84.116 port 33746
Feb 9 07:52:07.421030 sshd[4818]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:07.422187 sshd[4818]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:07.422279 sshd[4818]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=24.199.84.116
Feb 9 07:52:07.423295 sshd[4818]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:07.692681 systemd[1]: Started sshd@81-147.75.49.59:22-139.59.127.178:48734.service.
Feb 9 07:52:08.663035 sshd[4821]: Invalid user jleo from 139.59.127.178 port 48734
Feb 9 07:52:08.669003 sshd[4821]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:08.670095 sshd[4821]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:08.670184 sshd[4821]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=139.59.127.178
Feb 9 07:52:08.671250 sshd[4821]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:08.733535 systemd[1]: Started sshd@82-147.75.49.59:22-124.156.202.69:57170.service.
Feb 9 07:52:09.336612 sshd[4818]: Failed password for invalid user m11 from 24.199.84.116 port 33746 ssh2
Feb 9 07:52:09.454614 sshd[4818]: Received disconnect from 24.199.84.116 port 33746:11: Bye Bye [preauth]
Feb 9 07:52:09.454614 sshd[4818]: Disconnected from invalid user m11 24.199.84.116 port 33746 [preauth]
Feb 9 07:52:09.457179 systemd[1]: sshd@80-147.75.49.59:22-24.199.84.116:33746.service: Deactivated successfully.
Feb 9 07:52:09.789528 sshd[4824]: Invalid user chenwei from 124.156.202.69 port 57170
Feb 9 07:52:09.795610 sshd[4824]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:09.796580 sshd[4824]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:09.796663 sshd[4824]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.156.202.69
Feb 9 07:52:09.797524 sshd[4824]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:10.103774 sshd[4738]: Failed password for root from 61.177.172.160 port 21086 ssh2
Feb 9 07:52:10.155950 systemd[1]: Started sshd@83-147.75.49.59:22-51.77.245.237:35492.service.
Feb 9 07:52:10.719686 sshd[4821]: Failed password for invalid user jleo from 139.59.127.178 port 48734 ssh2
Feb 9 07:52:10.975291 sshd[4828]: Invalid user chchen from 51.77.245.237 port 35492
Feb 9 07:52:10.976891 systemd[1]: Started sshd@84-147.75.49.59:22-147.75.109.163:50422.service.
Feb 9 07:52:10.978761 sshd[4828]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:10.979352 sshd[4828]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:10.979410 sshd[4828]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.77.245.237
Feb 9 07:52:10.980132 sshd[4828]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:11.026364 sshd[4833]: Accepted publickey for core from 147.75.109.163 port 50422 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:11.027139 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:11.029395 systemd-logind[1441]: New session 16 of user core.
Feb 9 07:52:11.029932 systemd[1]: Started session-16.scope.
Feb 9 07:52:11.117379 sshd[4833]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:11.118953 systemd[1]: sshd@84-147.75.49.59:22-147.75.109.163:50422.service: Deactivated successfully.
Feb 9 07:52:11.119447 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 07:52:11.119900 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
Feb 9 07:52:11.120472 systemd-logind[1441]: Removed session 16.
Feb 9 07:52:11.651151 sshd[4824]: Failed password for invalid user chenwei from 124.156.202.69 port 57170 ssh2
Feb 9 07:52:12.579959 sshd[4821]: Received disconnect from 139.59.127.178 port 48734:11: Bye Bye [preauth]
Feb 9 07:52:12.579959 sshd[4821]: Disconnected from invalid user jleo 139.59.127.178 port 48734 [preauth]
Feb 9 07:52:12.582573 systemd[1]: sshd@81-147.75.49.59:22-139.59.127.178:48734.service: Deactivated successfully.
Feb 9 07:52:12.636746 sshd[4828]: Failed password for invalid user chchen from 51.77.245.237 port 35492 ssh2
Feb 9 07:52:13.325928 sshd[4824]: Received disconnect from 124.156.202.69 port 57170:11: Bye Bye [preauth]
Feb 9 07:52:13.325928 sshd[4824]: Disconnected from invalid user chenwei 124.156.202.69 port 57170 [preauth]
Feb 9 07:52:13.327667 systemd[1]: sshd@82-147.75.49.59:22-124.156.202.69:57170.service: Deactivated successfully.
Feb 9 07:52:13.910928 sshd[4738]: Failed password for root from 61.177.172.160 port 21086 ssh2
Feb 9 07:52:14.295975 sshd[4828]: Received disconnect from 51.77.245.237 port 35492:11: Bye Bye [preauth]
Feb 9 07:52:14.295975 sshd[4828]: Disconnected from invalid user chchen 51.77.245.237 port 35492 [preauth]
Feb 9 07:52:14.296758 systemd[1]: sshd@83-147.75.49.59:22-51.77.245.237:35492.service: Deactivated successfully.
Feb 9 07:52:14.541787 sshd[4738]: Received disconnect from 61.177.172.160 port 21086:11: [preauth]
Feb 9 07:52:14.541787 sshd[4738]: Disconnected from authenticating user root 61.177.172.160 port 21086 [preauth]
Feb 9 07:52:14.542330 sshd[4738]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 07:52:14.544338 systemd[1]: sshd@76-147.75.49.59:22-61.177.172.160:21086.service: Deactivated successfully.
Feb 9 07:52:14.728606 systemd[1]: Started sshd@85-147.75.49.59:22-61.177.172.160:63879.service.
Feb 9 07:52:15.874862 sshd[4865]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 07:52:16.127269 systemd[1]: Started sshd@86-147.75.49.59:22-147.75.109.163:58720.service.
Feb 9 07:52:16.156256 sshd[4868]: Accepted publickey for core from 147.75.109.163 port 58720 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:16.157289 sshd[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:16.160257 systemd-logind[1441]: New session 17 of user core.
Feb 9 07:52:16.160838 systemd[1]: Started session-17.scope.
Feb 9 07:52:16.248579 sshd[4868]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:16.250090 systemd[1]: sshd@86-147.75.49.59:22-147.75.109.163:58720.service: Deactivated successfully.
Feb 9 07:52:16.250520 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 07:52:16.250914 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
Feb 9 07:52:16.251415 systemd-logind[1441]: Removed session 17.
Feb 9 07:52:18.201074 systemd[1]: Started sshd@87-147.75.49.59:22-165.227.213.175:58098.service.
Feb 9 07:52:18.551616 sshd[4865]: Failed password for root from 61.177.172.160 port 63879 ssh2
Feb 9 07:52:18.655637 sshd[4894]: Invalid user bithika from 165.227.213.175 port 58098
Feb 9 07:52:18.661688 sshd[4894]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:18.662820 sshd[4894]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:18.662913 sshd[4894]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.213.175
Feb 9 07:52:18.663951 sshd[4894]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:20.143385 sshd[4865]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 07:52:20.753026 sshd[4894]: Failed password for invalid user bithika from 165.227.213.175 port 58098 ssh2
Feb 9 07:52:21.258450 systemd[1]: Started sshd@88-147.75.49.59:22-147.75.109.163:58724.service.
Feb 9 07:52:21.287383 sshd[4897]: Accepted publickey for core from 147.75.109.163 port 58724 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:21.288296 sshd[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:21.291230 systemd-logind[1441]: New session 18 of user core.
Feb 9 07:52:21.291846 systemd[1]: Started session-18.scope.
Feb 9 07:52:21.381169 sshd[4897]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:21.382602 systemd[1]: sshd@88-147.75.49.59:22-147.75.109.163:58724.service: Deactivated successfully.
Feb 9 07:52:21.383002 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 07:52:21.383346 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
Feb 9 07:52:21.383990 systemd-logind[1441]: Removed session 18.
Feb 9 07:52:22.172435 sshd[4865]: Failed password for root from 61.177.172.160 port 63879 ssh2
Feb 9 07:52:22.480070 sshd[4894]: Received disconnect from 165.227.213.175 port 58098:11: Bye Bye [preauth]
Feb 9 07:52:22.480070 sshd[4894]: Disconnected from invalid user bithika 165.227.213.175 port 58098 [preauth]
Feb 9 07:52:22.482378 systemd[1]: sshd@87-147.75.49.59:22-165.227.213.175:58098.service: Deactivated successfully.
Feb 9 07:52:23.992913 sshd[4865]: Failed password for root from 61.177.172.160 port 63879 ssh2
Feb 9 07:52:24.567754 sshd[4865]: Received disconnect from 61.177.172.160 port 63879:11: [preauth]
Feb 9 07:52:24.567754 sshd[4865]: Disconnected from authenticating user root 61.177.172.160 port 63879 [preauth]
Feb 9 07:52:24.568297 sshd[4865]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 07:52:24.570339 systemd[1]: sshd@85-147.75.49.59:22-61.177.172.160:63879.service: Deactivated successfully.
Feb 9 07:52:24.724947 systemd[1]: Started sshd@89-147.75.49.59:22-61.177.172.160:45945.service.
Feb 9 07:52:25.813996 sshd[4924]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 07:52:26.388114 systemd[1]: Started sshd@90-147.75.49.59:22-147.75.109.163:36430.service.
Feb 9 07:52:26.417746 sshd[4927]: Accepted publickey for core from 147.75.109.163 port 36430 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:26.418632 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:26.421836 systemd-logind[1441]: New session 19 of user core.
Feb 9 07:52:26.422407 systemd[1]: Started session-19.scope.
Feb 9 07:52:26.509701 sshd[4927]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:26.511203 systemd[1]: sshd@90-147.75.49.59:22-147.75.109.163:36430.service: Deactivated successfully.
Feb 9 07:52:26.511656 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 07:52:26.512047 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
Feb 9 07:52:26.512452 systemd-logind[1441]: Removed session 19.
Feb 9 07:52:27.192034 systemd[1]: Started sshd@91-147.75.49.59:22-43.142.87.223:57228.service.
Feb 9 07:52:27.863157 sshd[4924]: Failed password for root from 61.177.172.160 port 45945 ssh2
Feb 9 07:52:28.872247 sshd[4955]: Invalid user shenlu from 43.142.87.223 port 57228
Feb 9 07:52:28.878585 sshd[4955]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:28.879540 sshd[4955]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:28.879627 sshd[4955]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.142.87.223
Feb 9 07:52:28.880571 sshd[4955]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:30.341697 sshd[4955]: Failed password for invalid user shenlu from 43.142.87.223 port 57228 ssh2
Feb 9 07:52:31.081394 sshd[4955]: Received disconnect from 43.142.87.223 port 57228:11: Bye Bye [preauth]
Feb 9 07:52:31.081394 sshd[4955]: Disconnected from invalid user shenlu 43.142.87.223 port 57228 [preauth]
Feb 9 07:52:31.084148 systemd[1]: sshd@91-147.75.49.59:22-43.142.87.223:57228.service: Deactivated successfully.
Feb 9 07:52:31.520805 systemd[1]: Started sshd@92-147.75.49.59:22-147.75.109.163:36436.service.
Feb 9 07:52:31.574729 sshd[4959]: Accepted publickey for core from 147.75.109.163 port 36436 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:31.575405 sshd[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:31.577961 systemd-logind[1441]: New session 20 of user core.
Feb 9 07:52:31.578425 systemd[1]: Started session-20.scope.
Feb 9 07:52:31.663658 sshd[4959]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:31.665098 systemd[1]: sshd@92-147.75.49.59:22-147.75.109.163:36436.service: Deactivated successfully.
Feb 9 07:52:31.665522 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 07:52:31.665930 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
Feb 9 07:52:31.666394 systemd-logind[1441]: Removed session 20.
Feb 9 07:52:32.131867 sshd[4924]: Failed password for root from 61.177.172.160 port 45945 ssh2
Feb 9 07:52:32.144404 systemd[1]: Started sshd@93-147.75.49.59:22-54.37.228.73:51114.service.
Feb 9 07:52:32.984415 sshd[4984]: Invalid user dada from 54.37.228.73 port 51114
Feb 9 07:52:32.990506 sshd[4984]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:32.991473 sshd[4984]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:32.991580 sshd[4984]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=54.37.228.73
Feb 9 07:52:32.992397 sshd[4984]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:34.670015 sshd[4984]: Failed password for invalid user dada from 54.37.228.73 port 51114 ssh2
Feb 9 07:52:35.480120 sshd[4984]: Received disconnect from 54.37.228.73 port 51114:11: Bye Bye [preauth]
Feb 9 07:52:35.480120 sshd[4984]: Disconnected from invalid user dada 54.37.228.73 port 51114 [preauth]
Feb 9 07:52:35.482580 systemd[1]: sshd@93-147.75.49.59:22-54.37.228.73:51114.service: Deactivated successfully.
Feb 9 07:52:35.928062 sshd[4924]: Failed password for root from 61.177.172.160 port 45945 ssh2
Feb 9 07:52:36.524140 sshd[4924]: Received disconnect from 61.177.172.160 port 45945:11: [preauth]
Feb 9 07:52:36.524140 sshd[4924]: Disconnected from authenticating user root 61.177.172.160 port 45945 [preauth]
Feb 9 07:52:36.524703 sshd[4924]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 07:52:36.526743 systemd[1]: sshd@89-147.75.49.59:22-61.177.172.160:45945.service: Deactivated successfully.
Feb 9 07:52:36.675194 systemd[1]: Started sshd@94-147.75.49.59:22-147.75.109.163:59650.service.
Feb 9 07:52:36.707001 sshd[4989]: Accepted publickey for core from 147.75.109.163 port 59650 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:36.707756 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:36.710511 systemd-logind[1441]: New session 21 of user core.
Feb 9 07:52:36.711060 systemd[1]: Started session-21.scope.
Feb 9 07:52:36.798928 sshd[4989]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:36.800255 systemd[1]: sshd@94-147.75.49.59:22-147.75.109.163:59650.service: Deactivated successfully.
Feb 9 07:52:36.800681 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 07:52:36.801088 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit.
Feb 9 07:52:36.801578 systemd-logind[1441]: Removed session 21.
Feb 9 07:52:40.414354 systemd[1]: Started sshd@95-147.75.49.59:22-103.157.114.194:56168.service.
Feb 9 07:52:41.808380 systemd[1]: Started sshd@96-147.75.49.59:22-147.75.109.163:59658.service.
Feb 9 07:52:41.837706 sshd[5016]: Accepted publickey for core from 147.75.109.163 port 59658 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:41.838560 sshd[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:41.841339 systemd-logind[1441]: New session 22 of user core.
Feb 9 07:52:41.841938 systemd[1]: Started session-22.scope.
Feb 9 07:52:41.932240 sshd[5016]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:41.933729 systemd[1]: sshd@96-147.75.49.59:22-147.75.109.163:59658.service: Deactivated successfully.
Feb 9 07:52:41.934162 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 07:52:41.934460 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit.
Feb 9 07:52:41.935107 systemd-logind[1441]: Removed session 22.
Feb 9 07:52:41.997178 systemd[1]: Started sshd@97-147.75.49.59:22-124.220.81.132:32834.service.
Feb 9 07:52:44.700116 systemd[1]: Started sshd@98-147.75.49.59:22-61.177.172.140:18235.service.
Feb 9 07:52:45.107907 systemd[1]: Started sshd@99-147.75.49.59:22-46.102.130.37:46670.service.
Feb 9 07:52:46.289422 sshd[5048]: Invalid user exportjf from 46.102.130.37 port 46670
Feb 9 07:52:46.295453 sshd[5048]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:46.296429 sshd[5048]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:46.296546 sshd[5048]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=46.102.130.37
Feb 9 07:52:46.297524 sshd[5048]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:46.943895 systemd[1]: Started sshd@100-147.75.49.59:22-147.75.109.163:45394.service.
Feb 9 07:52:46.975894 sshd[5051]: Accepted publickey for core from 147.75.109.163 port 45394 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:46.976634 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:46.979243 systemd-logind[1441]: New session 23 of user core.
Feb 9 07:52:46.979767 systemd[1]: Started session-23.scope.
Feb 9 07:52:47.064719 sshd[5051]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:47.066249 systemd[1]: sshd@100-147.75.49.59:22-147.75.109.163:45394.service: Deactivated successfully.
Feb 9 07:52:47.066718 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 07:52:47.067177 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit.
Feb 9 07:52:47.067825 systemd-logind[1441]: Removed session 23.
Feb 9 07:52:47.159650 sshd[5014]: Invalid user test from 103.157.114.194 port 56168
Feb 9 07:52:47.165641 sshd[5014]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:47.166615 sshd[5014]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:47.166704 sshd[5014]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.157.114.194
Feb 9 07:52:47.167762 sshd[5014]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:47.895163 sshd[5048]: Failed password for invalid user exportjf from 46.102.130.37 port 46670 ssh2
Feb 9 07:52:48.568902 sshd[5014]: Failed password for invalid user test from 103.157.114.194 port 56168 ssh2
Feb 9 07:52:48.849738 sshd[5048]: Received disconnect from 46.102.130.37 port 46670:11: Bye Bye [preauth]
Feb 9 07:52:48.849738 sshd[5048]: Disconnected from invalid user exportjf 46.102.130.37 port 46670 [preauth]
Feb 9 07:52:48.852196 systemd[1]: sshd@99-147.75.49.59:22-46.102.130.37:46670.service: Deactivated successfully.
Feb 9 07:52:49.316667 sshd[5045]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:52:50.852617 sshd[5077]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:50.857742 sshd[5014]: Postponed keyboard-interactive for invalid user test from 103.157.114.194 port 56168 ssh2 [preauth]
Feb 9 07:52:51.325759 sshd[5045]: Failed password for root from 61.177.172.140 port 18235 ssh2
Feb 9 07:52:52.074634 systemd[1]: Started sshd@101-147.75.49.59:22-147.75.109.163:45402.service.
Feb 9 07:52:52.103227 sshd[5080]: Accepted publickey for core from 147.75.109.163 port 45402 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:52.104087 sshd[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:52.107235 systemd-logind[1441]: New session 24 of user core.
Feb 9 07:52:52.107852 systemd[1]: Started session-24.scope.
Feb 9 07:52:52.196729 sshd[5080]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:52.198312 systemd[1]: sshd@101-147.75.49.59:22-147.75.109.163:45402.service: Deactivated successfully.
Feb 9 07:52:52.198802 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 07:52:52.199274 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit.
Feb 9 07:52:52.199955 systemd-logind[1441]: Removed session 24.
Feb 9 07:52:52.566289 sshd[5077]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:52.567312 sshd[5077]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:53.491192 sshd[5045]: Failed password for root from 61.177.172.140 port 18235 ssh2
Feb 9 07:52:54.716833 sshd[5014]: PAM: Permission denied for illegal user test from 103.157.114.194
Feb 9 07:52:54.717675 sshd[5014]: Failed keyboard-interactive/pam for invalid user test from 103.157.114.194 port 56168 ssh2
Feb 9 07:52:55.652119 sshd[5045]: Failed password for root from 61.177.172.140 port 18235 ssh2
Feb 9 07:52:56.065369 sshd[5014]: Connection closed by invalid user test 103.157.114.194 port 56168 [preauth]
Feb 9 07:52:56.067941 systemd[1]: sshd@95-147.75.49.59:22-103.157.114.194:56168.service: Deactivated successfully.
Feb 9 07:52:56.151754 systemd[1]: Started sshd@102-147.75.49.59:22-61.133.220.198:34552.service.
Feb 9 07:52:56.300848 systemd[1]: Started sshd@103-147.75.49.59:22-198.20.246.131:35440.service.
Feb 9 07:52:56.500969 sshd[5109]: Invalid user sampad from 198.20.246.131 port 35440
Feb 9 07:52:56.507085 sshd[5109]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:56.508073 sshd[5109]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:56.508162 sshd[5109]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.246.131
Feb 9 07:52:56.509181 sshd[5109]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:56.795528 sshd[5045]: Received disconnect from 61.177.172.140 port 18235:11: [preauth]
Feb 9 07:52:56.795528 sshd[5045]: Disconnected from authenticating user root 61.177.172.140 port 18235 [preauth]
Feb 9 07:52:56.795941 sshd[5045]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:52:56.798155 systemd[1]: sshd@98-147.75.49.59:22-61.177.172.140:18235.service: Deactivated successfully.
Feb 9 07:52:56.969137 systemd[1]: Started sshd@104-147.75.49.59:22-61.177.172.140:49028.service.
Feb 9 07:52:57.207989 systemd[1]: Started sshd@105-147.75.49.59:22-147.75.109.163:38848.service.
Feb 9 07:52:57.240188 sshd[5119]: Accepted publickey for core from 147.75.109.163 port 38848 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:52:57.240909 sshd[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:52:57.243529 systemd-logind[1441]: New session 25 of user core.
Feb 9 07:52:57.244100 systemd[1]: Started session-25.scope.
Feb 9 07:52:57.369032 sshd[5119]: pam_unix(sshd:session): session closed for user core
Feb 9 07:52:57.370603 systemd[1]: sshd@105-147.75.49.59:22-147.75.109.163:38848.service: Deactivated successfully.
Feb 9 07:52:57.371048 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 07:52:57.371396 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit.
Feb 9 07:52:57.372007 systemd-logind[1441]: Removed session 25.
Feb 9 07:52:58.146891 sshd[5109]: Failed password for invalid user sampad from 198.20.246.131 port 35440 ssh2
Feb 9 07:52:58.860259 sshd[5109]: Received disconnect from 198.20.246.131 port 35440:11: Bye Bye [preauth]
Feb 9 07:52:58.860259 sshd[5109]: Disconnected from invalid user sampad 198.20.246.131 port 35440 [preauth]
Feb 9 07:52:58.862910 systemd[1]: sshd@103-147.75.49.59:22-198.20.246.131:35440.service: Deactivated successfully.
Feb 9 07:52:58.988196 sshd[5107]: Invalid user test from 61.133.220.198 port 34552
Feb 9 07:52:58.994548 sshd[5107]: pam_faillock(sshd:auth): User unknown
Feb 9 07:52:58.995539 sshd[5107]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:52:58.995622 sshd[5107]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.133.220.198
Feb 9 07:52:58.996475 sshd[5107]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:00.366672 sshd[5115]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:53:00.574096 sshd[5107]: Failed password for invalid user test from 61.133.220.198 port 34552 ssh2
Feb 9 07:53:01.416543 sshd[5144]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:01.421623 sshd[5107]: Postponed keyboard-interactive for invalid user test from 61.133.220.198 port 34552 ssh2 [preauth]
Feb 9 07:53:02.219738 sshd[5115]: Failed password for root from 61.177.172.140 port 49028 ssh2
Feb 9 07:53:02.285242 sshd[5144]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:02.285581 sshd[5144]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:02.380641 systemd[1]: Started sshd@106-147.75.49.59:22-147.75.109.163:38862.service.
Feb 9 07:53:02.412886 sshd[5146]: Accepted publickey for core from 147.75.109.163 port 38862 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:02.413648 sshd[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:02.416217 systemd-logind[1441]: New session 26 of user core.
Feb 9 07:53:02.416791 systemd[1]: Started session-26.scope.
Feb 9 07:53:02.510036 sshd[5146]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:02.511577 systemd[1]: sshd@106-147.75.49.59:22-147.75.109.163:38862.service: Deactivated successfully.
Feb 9 07:53:02.511984 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 07:53:02.512331 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit.
Feb 9 07:53:02.512921 systemd-logind[1441]: Removed session 26.
Feb 9 07:53:03.976788 systemd[1]: Started sshd@107-147.75.49.59:22-51.77.245.237:54228.service.
Feb 9 07:53:04.609780 sshd[5107]: PAM: Permission denied for illegal user test from 61.133.220.198
Feb 9 07:53:04.610737 sshd[5107]: Failed keyboard-interactive/pam for invalid user test from 61.133.220.198 port 34552 ssh2
Feb 9 07:53:04.710727 sshd[5115]: Failed password for root from 61.177.172.140 port 49028 ssh2
Feb 9 07:53:04.809315 sshd[5171]: Invalid user kernsp from 51.77.245.237 port 54228
Feb 9 07:53:04.815256 sshd[5171]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:04.816414 sshd[5171]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:04.816552 sshd[5171]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.77.245.237
Feb 9 07:53:04.817394 sshd[5171]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:05.245477 sshd[5107]: Connection closed by invalid user test 61.133.220.198 port 34552 [preauth]
Feb 9 07:53:05.248254 systemd[1]: sshd@102-147.75.49.59:22-61.133.220.198:34552.service: Deactivated successfully.
Feb 9 07:53:06.555036 sshd[5171]: Failed password for invalid user kernsp from 51.77.245.237 port 54228 ssh2
Feb 9 07:53:07.519200 systemd[1]: Started sshd@108-147.75.49.59:22-147.75.109.163:37340.service.
Feb 9 07:53:07.548172 sshd[5175]: Accepted publickey for core from 147.75.109.163 port 37340 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:07.549048 sshd[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:07.552030 systemd-logind[1441]: New session 27 of user core.
Feb 9 07:53:07.552614 systemd[1]: Started session-27.scope.
Feb 9 07:53:07.641280 sshd[5175]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:07.642773 systemd[1]: sshd@108-147.75.49.59:22-147.75.109.163:37340.service: Deactivated successfully.
Feb 9 07:53:07.643223 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 07:53:07.643687 systemd-logind[1441]: Session 27 logged out. Waiting for processes to exit.
Feb 9 07:53:07.644295 systemd-logind[1441]: Removed session 27.
Feb 9 07:53:08.031186 sshd[5171]: Received disconnect from 51.77.245.237 port 54228:11: Bye Bye [preauth]
Feb 9 07:53:08.031186 sshd[5171]: Disconnected from invalid user kernsp 51.77.245.237 port 54228 [preauth]
Feb 9 07:53:08.033682 systemd[1]: sshd@107-147.75.49.59:22-51.77.245.237:54228.service: Deactivated successfully.
Feb 9 07:53:08.068453 systemd[1]: Started sshd@109-147.75.49.59:22-124.156.202.69:51228.service.
Feb 9 07:53:08.503862 sshd[5115]: Failed password for root from 61.177.172.140 port 49028 ssh2
Feb 9 07:53:09.025851 sshd[5115]: Received disconnect from 61.177.172.140 port 49028:11: [preauth]
Feb 9 07:53:09.025851 sshd[5115]: Disconnected from authenticating user root 61.177.172.140 port 49028 [preauth]
Feb 9 07:53:09.026374 sshd[5115]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:53:09.028390 systemd[1]: sshd@104-147.75.49.59:22-61.177.172.140:49028.service: Deactivated successfully.
Feb 9 07:53:09.071238 sshd[5202]: Invalid user jxw from 124.156.202.69 port 51228
Feb 9 07:53:09.077332 sshd[5202]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:09.078455 sshd[5202]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:09.078571 sshd[5202]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.156.202.69
Feb 9 07:53:09.079584 sshd[5202]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:09.209028 systemd[1]: Started sshd@110-147.75.49.59:22-43.140.221.64:40902.service.
Feb 9 07:53:09.240548 systemd[1]: Started sshd@111-147.75.49.59:22-61.177.172.140:14282.service.
Feb 9 07:53:09.899396 systemd[1]: Started sshd@112-147.75.49.59:22-165.227.213.175:39180.service.
Feb 9 07:53:10.362518 sshd[5211]: Invalid user sampad from 165.227.213.175 port 39180
Feb 9 07:53:10.368895 sshd[5211]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:10.370090 sshd[5211]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:10.370181 sshd[5211]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.213.175
Feb 9 07:53:10.371273 sshd[5211]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:10.447833 sshd[5208]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:53:11.168869 sshd[5202]: Failed password for invalid user jxw from 124.156.202.69 port 51228 ssh2
Feb 9 07:53:11.597471 sshd[5211]: Failed password for invalid user sampad from 165.227.213.175 port 39180 ssh2
Feb 9 07:53:11.674165 sshd[5208]: Failed password for root from 61.177.172.140 port 14282 ssh2
Feb 9 07:53:12.650473 systemd[1]: Started sshd@113-147.75.49.59:22-147.75.109.163:37348.service.
Feb 9 07:53:12.679987 sshd[5216]: Accepted publickey for core from 147.75.109.163 port 37348 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:12.680767 sshd[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:12.683455 systemd-logind[1441]: New session 28 of user core.
Feb 9 07:53:12.684054 systemd[1]: Started session-28.scope.
Feb 9 07:53:12.771853 sshd[5216]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:12.773309 systemd[1]: sshd@113-147.75.49.59:22-147.75.109.163:37348.service: Deactivated successfully.
Feb 9 07:53:12.773644 sshd[5211]: Received disconnect from 165.227.213.175 port 39180:11: Bye Bye [preauth]
Feb 9 07:53:12.773644 sshd[5211]: Disconnected from invalid user sampad 165.227.213.175 port 39180 [preauth]
Feb 9 07:53:12.773773 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 07:53:12.774174 systemd-logind[1441]: Session 28 logged out. Waiting for processes to exit.
Feb 9 07:53:12.774240 systemd[1]: sshd@112-147.75.49.59:22-165.227.213.175:39180.service: Deactivated successfully.
Feb 9 07:53:12.775074 systemd-logind[1441]: Removed session 28.
Feb 9 07:53:13.227474 sshd[5202]: Received disconnect from 124.156.202.69 port 51228:11: Bye Bye [preauth]
Feb 9 07:53:13.227474 sshd[5202]: Disconnected from invalid user jxw 124.156.202.69 port 51228 [preauth]
Feb 9 07:53:13.229963 systemd[1]: sshd@109-147.75.49.59:22-124.156.202.69:51228.service: Deactivated successfully.
Feb 9 07:53:14.852803 sshd[5208]: Failed password for root from 61.177.172.140 port 14282 ssh2
Feb 9 07:53:17.781993 systemd[1]: Started sshd@114-147.75.49.59:22-147.75.109.163:38814.service.
Feb 9 07:53:17.828881 sshd[5243]: Accepted publickey for core from 147.75.109.163 port 38814 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:17.832165 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:17.841080 systemd-logind[1441]: New session 29 of user core.
Feb 9 07:53:17.841706 systemd[1]: Started session-29.scope.
Feb 9 07:53:17.927395 sshd[5243]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:17.928949 systemd[1]: sshd@114-147.75.49.59:22-147.75.109.163:38814.service: Deactivated successfully.
Feb 9 07:53:17.929393 systemd[1]: session-29.scope: Deactivated successfully.
Feb 9 07:53:17.929802 systemd-logind[1441]: Session 29 logged out. Waiting for processes to exit.
Feb 9 07:53:17.930352 systemd-logind[1441]: Removed session 29.
Feb 9 07:53:18.682726 sshd[5208]: Failed password for root from 61.177.172.140 port 14282 ssh2
Feb 9 07:53:19.203290 sshd[5208]: Received disconnect from 61.177.172.140 port 14282:11: [preauth]
Feb 9 07:53:19.203290 sshd[5208]: Disconnected from authenticating user root 61.177.172.140 port 14282 [preauth]
Feb 9 07:53:19.203863 sshd[5208]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 07:53:19.205922 systemd[1]: sshd@111-147.75.49.59:22-61.177.172.140:14282.service: Deactivated successfully.
Feb 9 07:53:20.146065 systemd[1]: Started sshd@115-147.75.49.59:22-185.248.23.63:42902.service.
Feb 9 07:53:20.146795 systemd[1]: Started sshd@116-147.75.49.59:22-43.142.87.223:37224.service.
Feb 9 07:53:21.028270 sshd[5270]: Invalid user activemq from 43.142.87.223 port 37224
Feb 9 07:53:21.034407 sshd[5270]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:21.035419 sshd[5270]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:21.035533 sshd[5270]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.142.87.223
Feb 9 07:53:21.036420 sshd[5270]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:21.079512 sshd[5269]: Invalid user exportjf from 185.248.23.63 port 42902
Feb 9 07:53:21.085548 sshd[5269]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:21.086546 sshd[5269]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:21.086635 sshd[5269]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.248.23.63
Feb 9 07:53:21.087550 sshd[5269]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:22.936786 systemd[1]: Started sshd@117-147.75.49.59:22-147.75.109.163:38820.service.
Feb 9 07:53:22.965790 sshd[5274]: Accepted publickey for core from 147.75.109.163 port 38820 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:22.966806 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:22.969551 systemd-logind[1441]: New session 30 of user core.
Feb 9 07:53:22.970219 systemd[1]: Started session-30.scope.
Feb 9 07:53:23.058658 sshd[5274]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:23.060095 systemd[1]: sshd@117-147.75.49.59:22-147.75.109.163:38820.service: Deactivated successfully.
Feb 9 07:53:23.060530 systemd[1]: session-30.scope: Deactivated successfully.
Feb 9 07:53:23.060948 systemd-logind[1441]: Session 30 logged out. Waiting for processes to exit.
Feb 9 07:53:23.061426 systemd-logind[1441]: Removed session 30.
Feb 9 07:53:23.105677 sshd[5270]: Failed password for invalid user activemq from 43.142.87.223 port 37224 ssh2
Feb 9 07:53:23.156758 sshd[5269]: Failed password for invalid user exportjf from 185.248.23.63 port 42902 ssh2
Feb 9 07:53:23.588429 sshd[5269]: Received disconnect from 185.248.23.63 port 42902:11: Bye Bye [preauth]
Feb 9 07:53:23.588429 sshd[5269]: Disconnected from invalid user exportjf 185.248.23.63 port 42902 [preauth]
Feb 9 07:53:23.590965 systemd[1]: sshd@115-147.75.49.59:22-185.248.23.63:42902.service: Deactivated successfully.
Feb 9 07:53:24.526328 sshd[5270]: Received disconnect from 43.142.87.223 port 37224:11: Bye Bye [preauth]
Feb 9 07:53:24.526328 sshd[5270]: Disconnected from invalid user activemq 43.142.87.223 port 37224 [preauth]
Feb 9 07:53:24.528747 systemd[1]: sshd@116-147.75.49.59:22-43.142.87.223:37224.service: Deactivated successfully.
Feb 9 07:53:28.068194 systemd[1]: Started sshd@118-147.75.49.59:22-147.75.109.163:59110.service.
Feb 9 07:53:28.096883 sshd[5303]: Accepted publickey for core from 147.75.109.163 port 59110 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:28.097833 sshd[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:28.100829 systemd-logind[1441]: New session 31 of user core.
Feb 9 07:53:28.101489 systemd[1]: Started session-31.scope.
Feb 9 07:53:28.190374 sshd[5303]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:28.191799 systemd[1]: sshd@118-147.75.49.59:22-147.75.109.163:59110.service: Deactivated successfully.
Feb 9 07:53:28.192241 systemd[1]: session-31.scope: Deactivated successfully.
Feb 9 07:53:28.192613 systemd-logind[1441]: Session 31 logged out. Waiting for processes to exit.
Feb 9 07:53:28.193110 systemd-logind[1441]: Removed session 31.
Feb 9 07:53:33.199315 systemd[1]: Started sshd@119-147.75.49.59:22-147.75.109.163:59116.service.
Feb 9 07:53:33.228610 sshd[5328]: Accepted publickey for core from 147.75.109.163 port 59116 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:33.229443 sshd[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:33.232228 systemd-logind[1441]: New session 32 of user core.
Feb 9 07:53:33.232853 systemd[1]: Started session-32.scope.
Feb 9 07:53:33.322177 sshd[5328]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:33.323448 systemd[1]: sshd@119-147.75.49.59:22-147.75.109.163:59116.service: Deactivated successfully.
Feb 9 07:53:33.323881 systemd[1]: session-32.scope: Deactivated successfully.
Feb 9 07:53:33.324215 systemd-logind[1441]: Session 32 logged out. Waiting for processes to exit.
Feb 9 07:53:33.324673 systemd-logind[1441]: Removed session 32.
Feb 9 07:53:33.814051 systemd[1]: Started sshd@120-147.75.49.59:22-141.98.11.90:30900.service.
Feb 9 07:53:34.482663 systemd[1]: Started sshd@121-147.75.49.59:22-43.157.98.116:39478.service.
Feb 9 07:53:34.889030 sshd[5353]: Invalid user admin from 141.98.11.90 port 30900
Feb 9 07:53:35.104772 sshd[5353]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:35.105902 sshd[5353]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:35.105989 sshd[5353]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.90
Feb 9 07:53:35.107017 sshd[5353]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:35.354108 sshd[5356]: Invalid user deli from 43.157.98.116 port 39478
Feb 9 07:53:35.360163 sshd[5356]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:35.361303 sshd[5356]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:35.361392 sshd[5356]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.98.116
Feb 9 07:53:35.362317 sshd[5356]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:37.100699 sshd[5353]: Failed password for invalid user admin from 141.98.11.90 port 30900 ssh2
Feb 9 07:53:37.355847 sshd[5356]: Failed password for invalid user deli from 43.157.98.116 port 39478 ssh2
Feb 9 07:53:37.962551 sshd[5353]: Connection closed by invalid user admin 141.98.11.90 port 30900 [preauth]
Feb 9 07:53:37.965044 systemd[1]: sshd@120-147.75.49.59:22-141.98.11.90:30900.service: Deactivated successfully.
Feb 9 07:53:38.328499 systemd[1]: Started sshd@122-147.75.49.59:22-147.75.109.163:54978.service.
Feb 9 07:53:38.359811 sshd[5360]: Accepted publickey for core from 147.75.109.163 port 54978 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:38.360602 sshd[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:38.363083 systemd-logind[1441]: New session 33 of user core.
Feb 9 07:53:38.363594 systemd[1]: Started session-33.scope.
Feb 9 07:53:38.489659 sshd[5360]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:38.491083 systemd[1]: sshd@122-147.75.49.59:22-147.75.109.163:54978.service: Deactivated successfully.
Feb 9 07:53:38.491540 systemd[1]: session-33.scope: Deactivated successfully.
Feb 9 07:53:38.491952 systemd-logind[1441]: Session 33 logged out. Waiting for processes to exit.
Feb 9 07:53:38.492405 systemd-logind[1441]: Removed session 33.
Feb 9 07:53:39.203803 sshd[5356]: Received disconnect from 43.157.98.116 port 39478:11: Bye Bye [preauth]
Feb 9 07:53:39.203803 sshd[5356]: Disconnected from invalid user deli 43.157.98.116 port 39478 [preauth]
Feb 9 07:53:39.206294 systemd[1]: sshd@121-147.75.49.59:22-43.157.98.116:39478.service: Deactivated successfully.
Feb 9 07:53:43.499582 systemd[1]: Started sshd@123-147.75.49.59:22-147.75.109.163:54982.service.
Feb 9 07:53:43.528501 sshd[5386]: Accepted publickey for core from 147.75.109.163 port 54982 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:43.529216 sshd[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:43.531378 systemd-logind[1441]: New session 34 of user core.
Feb 9 07:53:43.533334 systemd[1]: Started session-34.scope.
Feb 9 07:53:43.534019 systemd[1]: Started sshd@124-147.75.49.59:22-54.37.228.73:44480.service.
Feb 9 07:53:43.621367 sshd[5386]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:43.622772 systemd[1]: sshd@123-147.75.49.59:22-147.75.109.163:54982.service: Deactivated successfully.
Feb 9 07:53:43.623220 systemd[1]: session-34.scope: Deactivated successfully.
Feb 9 07:53:43.623518 systemd-logind[1441]: Session 34 logged out. Waiting for processes to exit.
Feb 9 07:53:43.623968 systemd-logind[1441]: Removed session 34.
Feb 9 07:53:44.339699 sshd[5389]: Invalid user akbarshamsi from 54.37.228.73 port 44480
Feb 9 07:53:44.345847 sshd[5389]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:44.346939 sshd[5389]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:44.347028 sshd[5389]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=54.37.228.73
Feb 9 07:53:44.347930 sshd[5389]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:46.240753 sshd[5389]: Failed password for invalid user akbarshamsi from 54.37.228.73 port 44480 ssh2
Feb 9 07:53:48.146151 sshd[5389]: Received disconnect from 54.37.228.73 port 44480:11: Bye Bye [preauth]
Feb 9 07:53:48.146151 sshd[5389]: Disconnected from invalid user akbarshamsi 54.37.228.73 port 44480 [preauth]
Feb 9 07:53:48.148606 systemd[1]: sshd@124-147.75.49.59:22-54.37.228.73:44480.service: Deactivated successfully.
Feb 9 07:53:48.631883 systemd[1]: Started sshd@125-147.75.49.59:22-147.75.109.163:59292.service.
Feb 9 07:53:48.663852 sshd[5415]: Accepted publickey for core from 147.75.109.163 port 59292 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:48.664553 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:48.666972 systemd-logind[1441]: New session 35 of user core.
Feb 9 07:53:48.667454 systemd[1]: Started session-35.scope.
Feb 9 07:53:48.754811 sshd[5415]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:48.756152 systemd[1]: sshd@125-147.75.49.59:22-147.75.109.163:59292.service: Deactivated successfully.
Feb 9 07:53:48.756586 systemd[1]: session-35.scope: Deactivated successfully.
Feb 9 07:53:48.756953 systemd-logind[1441]: Session 35 logged out. Waiting for processes to exit.
Feb 9 07:53:48.757388 systemd-logind[1441]: Removed session 35.
Feb 9 07:53:51.896075 systemd[1]: Started sshd@126-147.75.49.59:22-198.20.246.131:54800.service.
Feb 9 07:53:52.097291 sshd[5440]: Invalid user kernsp from 198.20.246.131 port 54800
Feb 9 07:53:52.103310 sshd[5440]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:52.104333 sshd[5440]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:52.104423 sshd[5440]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.246.131
Feb 9 07:53:52.105425 sshd[5440]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:53.763315 sshd[5440]: Failed password for invalid user kernsp from 198.20.246.131 port 54800 ssh2
Feb 9 07:53:53.765800 systemd[1]: Started sshd@127-147.75.49.59:22-147.75.109.163:59294.service.
Feb 9 07:53:53.798592 sshd[5443]: Accepted publickey for core from 147.75.109.163 port 59294 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:53.799338 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:53.801915 systemd-logind[1441]: New session 36 of user core.
Feb 9 07:53:53.802513 systemd[1]: Started session-36.scope.
Feb 9 07:53:53.889473 sshd[5443]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:53.890871 systemd[1]: sshd@127-147.75.49.59:22-147.75.109.163:59294.service: Deactivated successfully.
Feb 9 07:53:53.891291 systemd[1]: session-36.scope: Deactivated successfully.
Feb 9 07:53:53.891617 systemd-logind[1441]: Session 36 logged out. Waiting for processes to exit.
Feb 9 07:53:53.892068 systemd-logind[1441]: Removed session 36.
Feb 9 07:53:55.199190 sshd[5440]: Received disconnect from 198.20.246.131 port 54800:11: Bye Bye [preauth]
Feb 9 07:53:55.199190 sshd[5440]: Disconnected from invalid user kernsp 198.20.246.131 port 54800 [preauth]
Feb 9 07:53:55.201723 systemd[1]: sshd@126-147.75.49.59:22-198.20.246.131:54800.service: Deactivated successfully.
Feb 9 07:53:56.305874 systemd[1]: Started sshd@128-147.75.49.59:22-51.77.245.237:44726.service.
Feb 9 07:53:56.992368 systemd[1]: Started sshd@129-147.75.49.59:22-124.220.81.132:44782.service.
Feb 9 07:53:57.132991 sshd[5470]: Invalid user babajoon from 51.77.245.237 port 44726
Feb 9 07:53:57.139265 sshd[5470]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:57.140411 sshd[5470]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:53:57.140531 sshd[5470]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.77.245.237
Feb 9 07:53:57.141458 sshd[5470]: pam_faillock(sshd:auth): User unknown
Feb 9 07:53:58.901839 systemd[1]: Started sshd@130-147.75.49.59:22-147.75.109.163:60294.service.
Feb 9 07:53:58.934072 sshd[5477]: Accepted publickey for core from 147.75.109.163 port 60294 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:58.934871 sshd[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:58.937268 systemd-logind[1441]: New session 37 of user core.
Feb 9 07:53:58.937811 systemd[1]: Started session-37.scope.
Feb 9 07:53:59.025475 sshd[5477]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:59.026950 systemd[1]: sshd@130-147.75.49.59:22-147.75.109.163:60294.service: Deactivated successfully.
Feb 9 07:53:59.027417 systemd[1]: session-37.scope: Deactivated successfully.
Feb 9 07:53:59.027864 systemd-logind[1441]: Session 37 logged out. Waiting for processes to exit.
Feb 9 07:53:59.028371 systemd-logind[1441]: Removed session 37.
Feb 9 07:53:59.154896 sshd[5470]: Failed password for invalid user babajoon from 51.77.245.237 port 44726 ssh2
Feb 9 07:54:00.578830 sshd[5470]: Received disconnect from 51.77.245.237 port 44726:11: Bye Bye [preauth]
Feb 9 07:54:00.578830 sshd[5470]: Disconnected from invalid user babajoon 51.77.245.237 port 44726 [preauth]
Feb 9 07:54:00.581337 systemd[1]: sshd@128-147.75.49.59:22-51.77.245.237:44726.service: Deactivated successfully.
Feb 9 07:54:01.155749 systemd[1]: Started sshd@131-147.75.49.59:22-165.227.213.175:35932.service.
Feb 9 07:54:01.302027 systemd[1]: Started sshd@132-147.75.49.59:22-77.73.131.239:22690.service.
Feb 9 07:54:01.594637 sshd[5503]: Invalid user agtag from 165.227.213.175 port 35932
Feb 9 07:54:01.600904 sshd[5503]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:01.601906 sshd[5503]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:54:01.601995 sshd[5503]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.213.175
Feb 9 07:54:01.603093 sshd[5503]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:02.239049 sshd[5506]: Invalid user sorme from 77.73.131.239 port 22690
Feb 9 07:54:02.245242 sshd[5506]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:02.246356 sshd[5506]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:54:02.246444 sshd[5506]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=77.73.131.239
Feb 9 07:54:02.247476 sshd[5506]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:03.832684 sshd[5503]: Failed password for invalid user agtag from 165.227.213.175 port 35932 ssh2
Feb 9 07:54:04.035062 systemd[1]: Started sshd@133-147.75.49.59:22-147.75.109.163:60308.service.
Feb 9 07:54:04.099307 sshd[5509]: Accepted publickey for core from 147.75.109.163 port 60308 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:04.100690 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:04.105163 systemd-logind[1441]: New session 38 of user core.
Feb 9 07:54:04.106338 systemd[1]: Started session-38.scope.
Feb 9 07:54:04.195568 sshd[5509]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:04.197134 systemd[1]: sshd@133-147.75.49.59:22-147.75.109.163:60308.service: Deactivated successfully.
Feb 9 07:54:04.197606 systemd[1]: session-38.scope: Deactivated successfully.
Feb 9 07:54:04.198034 systemd-logind[1441]: Session 38 logged out. Waiting for processes to exit.
Feb 9 07:54:04.198574 systemd-logind[1441]: Removed session 38.
Feb 9 07:54:04.612732 sshd[5506]: Failed password for invalid user sorme from 77.73.131.239 port 22690 ssh2
Feb 9 07:54:05.462883 sshd[5503]: Received disconnect from 165.227.213.175 port 35932:11: Bye Bye [preauth]
Feb 9 07:54:05.462883 sshd[5503]: Disconnected from invalid user agtag 165.227.213.175 port 35932 [preauth]
Feb 9 07:54:05.465346 systemd[1]: sshd@131-147.75.49.59:22-165.227.213.175:35932.service: Deactivated successfully.
Feb 9 07:54:06.393083 sshd[5506]: Received disconnect from 77.73.131.239 port 22690:11: Bye Bye [preauth]
Feb 9 07:54:06.393083 sshd[5506]: Disconnected from invalid user sorme 77.73.131.239 port 22690 [preauth]
Feb 9 07:54:06.395610 systemd[1]: sshd@132-147.75.49.59:22-77.73.131.239:22690.service: Deactivated successfully.
Feb 9 07:54:08.622819 systemd[1]: Started sshd@134-147.75.49.59:22-124.156.202.69:39062.service.
Feb 9 07:54:09.207137 systemd[1]: Started sshd@135-147.75.49.59:22-147.75.109.163:54808.service.
Feb 9 07:54:09.238890 sshd[5539]: Accepted publickey for core from 147.75.109.163 port 54808 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:09.239621 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:09.241998 systemd-logind[1441]: New session 39 of user core.
Feb 9 07:54:09.242577 systemd[1]: Started session-39.scope.
Feb 9 07:54:09.324119 sshd[5539]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:09.325666 systemd[1]: sshd@135-147.75.49.59:22-147.75.109.163:54808.service: Deactivated successfully.
Feb 9 07:54:09.326018 systemd[1]: session-39.scope: Deactivated successfully.
Feb 9 07:54:09.326361 systemd-logind[1441]: Session 39 logged out. Waiting for processes to exit.
Feb 9 07:54:09.326941 systemd[1]: Started sshd@136-147.75.49.59:22-147.75.109.163:54820.service.
Feb 9 07:54:09.327341 systemd-logind[1441]: Removed session 39.
Feb 9 07:54:09.354791 sshd[5564]: Accepted publickey for core from 147.75.109.163 port 54820 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:09.355485 sshd[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:09.357963 systemd-logind[1441]: New session 40 of user core.
Feb 9 07:54:09.358403 systemd[1]: Started session-40.scope.
Feb 9 07:54:09.630502 sshd[5536]: Invalid user cwboiler from 124.156.202.69 port 39062
Feb 9 07:54:09.637354 sshd[5536]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:09.638549 sshd[5536]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:54:09.638643 sshd[5536]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.156.202.69
Feb 9 07:54:09.639546 sshd[5536]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:10.140115 systemd[1]: Started sshd@137-147.75.49.59:22-43.153.53.166:40142.service.
Feb 9 07:54:10.142153 systemd[1]: Started sshd@138-147.75.49.59:22-43.142.87.223:45448.service.
Feb 9 07:54:10.249520 sshd[5585]: Invalid user bkm from 43.153.53.166 port 40142
Feb 9 07:54:10.255657 sshd[5585]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:10.256698 sshd[5585]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:54:10.256790 sshd[5585]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.153.53.166
Feb 9 07:54:10.257889 sshd[5585]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:10.476317 sshd[5564]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:10.483397 systemd[1]: sshd@136-147.75.49.59:22-147.75.109.163:54820.service: Deactivated successfully.
Feb 9 07:54:10.485192 systemd[1]: session-40.scope: Deactivated successfully.
Feb 9 07:54:10.487060 systemd-logind[1441]: Session 40 logged out. Waiting for processes to exit.
Feb 9 07:54:10.489916 systemd[1]: Started sshd@139-147.75.49.59:22-147.75.109.163:54830.service.
Feb 9 07:54:10.492599 systemd-logind[1441]: Removed session 40.
Feb 9 07:54:10.579346 sshd[5591]: Accepted publickey for core from 147.75.109.163 port 54830 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:10.580436 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:10.583740 systemd-logind[1441]: New session 41 of user core.
Feb 9 07:54:10.584497 systemd[1]: Started session-41.scope.
Feb 9 07:54:11.374754 sshd[5591]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:11.377181 systemd[1]: sshd@139-147.75.49.59:22-147.75.109.163:54830.service: Deactivated successfully.
Feb 9 07:54:11.377833 systemd[1]: session-41.scope: Deactivated successfully.
Feb 9 07:54:11.378390 systemd-logind[1441]: Session 41 logged out. Waiting for processes to exit.
Feb 9 07:54:11.379531 systemd[1]: Started sshd@140-147.75.49.59:22-147.75.109.163:54838.service.
Feb 9 07:54:11.380254 systemd-logind[1441]: Removed session 41.
Feb 9 07:54:11.426540 sshd[5623]: Accepted publickey for core from 147.75.109.163 port 54838 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:11.427974 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:11.431745 systemd-logind[1441]: New session 42 of user core.
Feb 9 07:54:11.432596 systemd[1]: Started session-42.scope.
Feb 9 07:54:11.632632 sshd[5536]: Failed password for invalid user cwboiler from 124.156.202.69 port 39062 ssh2
Feb 9 07:54:11.657238 sshd[5623]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:11.659356 systemd[1]: sshd@140-147.75.49.59:22-147.75.109.163:54838.service: Deactivated successfully.
Feb 9 07:54:11.659804 systemd[1]: session-42.scope: Deactivated successfully.
Feb 9 07:54:11.660187 systemd-logind[1441]: Session 42 logged out. Waiting for processes to exit.
Feb 9 07:54:11.660980 systemd[1]: Started sshd@141-147.75.49.59:22-147.75.109.163:54850.service.
Feb 9 07:54:11.661484 systemd-logind[1441]: Removed session 42.
Feb 9 07:54:11.688949 sshd[5647]: Accepted publickey for core from 147.75.109.163 port 54850 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:11.689860 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:11.692101 systemd-logind[1441]: New session 43 of user core.
Feb 9 07:54:11.692613 systemd[1]: Started session-43.scope.
Feb 9 07:54:11.802578 sshd[5647]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:11.804073 systemd[1]: sshd@141-147.75.49.59:22-147.75.109.163:54850.service: Deactivated successfully.
Feb 9 07:54:11.804547 systemd[1]: session-43.scope: Deactivated successfully.
Feb 9 07:54:11.804982 systemd-logind[1441]: Session 43 logged out. Waiting for processes to exit.
Feb 9 07:54:11.805443 systemd-logind[1441]: Removed session 43.
Feb 9 07:54:12.007680 sshd[5536]: Received disconnect from 124.156.202.69 port 39062:11: Bye Bye [preauth]
Feb 9 07:54:12.007680 sshd[5536]: Disconnected from invalid user cwboiler 124.156.202.69 port 39062 [preauth]
Feb 9 07:54:12.010133 systemd[1]: sshd@134-147.75.49.59:22-124.156.202.69:39062.service: Deactivated successfully.
Feb 9 07:54:12.054195 sshd[5585]: Failed password for invalid user bkm from 43.153.53.166 port 40142 ssh2
Feb 9 07:54:12.387624 sshd[5585]: Received disconnect from 43.153.53.166 port 40142:11: Bye Bye [preauth]
Feb 9 07:54:12.387624 sshd[5585]: Disconnected from invalid user bkm 43.153.53.166 port 40142 [preauth]
Feb 9 07:54:12.390225 systemd[1]: sshd@137-147.75.49.59:22-43.153.53.166:40142.service: Deactivated successfully.
Feb 9 07:54:13.307786 systemd[1]: Started sshd@142-147.75.49.59:22-91.93.63.184:50784.service.
Feb 9 07:54:14.426578 sshd[5674]: Invalid user huhaihong from 91.93.63.184 port 50784
Feb 9 07:54:14.432952 sshd[5674]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:14.434160 sshd[5674]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:54:14.434253 sshd[5674]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.93.63.184
Feb 9 07:54:14.435202 sshd[5674]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:16.779753 sshd[5674]: Failed password for invalid user huhaihong from 91.93.63.184 port 50784 ssh2
Feb 9 07:54:16.811894 systemd[1]: Started sshd@143-147.75.49.59:22-147.75.109.163:51270.service.
Feb 9 07:54:16.841581 sshd[5680]: Accepted publickey for core from 147.75.109.163 port 51270 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:16.842258 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:16.844436 systemd-logind[1441]: New session 44 of user core.
Feb 9 07:54:16.845040 systemd[1]: Started session-44.scope.
Feb 9 07:54:16.933616 sshd[5680]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:16.935011 systemd[1]: sshd@143-147.75.49.59:22-147.75.109.163:51270.service: Deactivated successfully.
Feb 9 07:54:16.935472 systemd[1]: session-44.scope: Deactivated successfully.
Feb 9 07:54:16.935920 systemd-logind[1441]: Session 44 logged out. Waiting for processes to exit.
Feb 9 07:54:16.936383 systemd-logind[1441]: Removed session 44.
Feb 9 07:54:17.508236 sshd[5674]: Received disconnect from 91.93.63.184 port 50784:11: Bye Bye [preauth]
Feb 9 07:54:17.508236 sshd[5674]: Disconnected from invalid user huhaihong 91.93.63.184 port 50784 [preauth]
Feb 9 07:54:17.510807 systemd[1]: sshd@142-147.75.49.59:22-91.93.63.184:50784.service: Deactivated successfully.
Feb 9 07:54:21.945089 systemd[1]: Started sshd@144-147.75.49.59:22-147.75.109.163:51278.service.
Feb 9 07:54:21.977214 sshd[5709]: Accepted publickey for core from 147.75.109.163 port 51278 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:21.980789 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:21.991772 systemd-logind[1441]: New session 45 of user core.
Feb 9 07:54:21.994434 systemd[1]: Started session-45.scope.
Feb 9 07:54:22.092589 sshd[5709]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:22.094016 systemd[1]: sshd@144-147.75.49.59:22-147.75.109.163:51278.service: Deactivated successfully.
Feb 9 07:54:22.094453 systemd[1]: session-45.scope: Deactivated successfully.
Feb 9 07:54:22.094882 systemd-logind[1441]: Session 45 logged out. Waiting for processes to exit.
Feb 9 07:54:22.095362 systemd-logind[1441]: Removed session 45.
Feb 9 07:54:23.302356 systemd[1]: Started sshd@145-147.75.49.59:22-185.248.23.63:43667.service.
Feb 9 07:54:24.200017 sshd[5732]: Invalid user mpds from 185.248.23.63 port 43667
Feb 9 07:54:24.206017 sshd[5732]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:24.207020 sshd[5732]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:54:24.207109 sshd[5732]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.248.23.63
Feb 9 07:54:24.208084 sshd[5732]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:25.926210 sshd[5732]: Failed password for invalid user mpds from 185.248.23.63 port 43667 ssh2
Feb 9 07:54:27.085456 sshd[5732]: Received disconnect from 185.248.23.63 port 43667:11: Bye Bye [preauth]
Feb 9 07:54:27.085456 sshd[5732]: Disconnected from invalid user mpds 185.248.23.63 port 43667 [preauth]
Feb 9 07:54:27.087945 systemd[1]: sshd@145-147.75.49.59:22-185.248.23.63:43667.service: Deactivated successfully.
Feb 9 07:54:27.103986 systemd[1]: Started sshd@146-147.75.49.59:22-147.75.109.163:51110.service.
Feb 9 07:54:27.135812 sshd[5738]: Accepted publickey for core from 147.75.109.163 port 51110 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:27.136515 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:27.139006 systemd-logind[1441]: New session 46 of user core.
Feb 9 07:54:27.139475 systemd[1]: Started session-46.scope.
Feb 9 07:54:27.225970 sshd[5738]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:27.227351 systemd[1]: sshd@146-147.75.49.59:22-147.75.109.163:51110.service: Deactivated successfully.
Feb 9 07:54:27.227819 systemd[1]: session-46.scope: Deactivated successfully.
Feb 9 07:54:27.228237 systemd-logind[1441]: Session 46 logged out. Waiting for processes to exit.
Feb 9 07:54:27.228868 systemd-logind[1441]: Removed session 46.
Feb 9 07:54:30.447828 systemd[1]: Started sshd@147-147.75.49.59:22-43.157.98.116:58578.service.
Feb 9 07:54:31.335345 sshd[5763]: Invalid user juha from 43.157.98.116 port 58578
Feb 9 07:54:31.341602 sshd[5763]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:31.342613 sshd[5763]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 07:54:31.342703 sshd[5763]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.98.116
Feb 9 07:54:31.343789 sshd[5763]: pam_faillock(sshd:auth): User unknown
Feb 9 07:54:32.236113 systemd[1]: Started sshd@148-147.75.49.59:22-147.75.109.163:51112.service.
Feb 9 07:54:32.264178 sshd[5766]: Accepted publickey for core from 147.75.109.163 port 51112 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:32.264981 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:32.267836 systemd-logind[1441]: New session 47 of user core.
Feb 9 07:54:32.268462 systemd[1]: Started session-47.scope.
Feb 9 07:54:32.355731 sshd[5766]: pam_unix(sshd:session): session closed for user core
Feb 9 07:54:32.357570 systemd[1]: sshd@148-147.75.49.59:22-147.75.109.163:51112.service: Deactivated successfully.
Feb 9 07:54:32.357920 systemd[1]: session-47.scope: Deactivated successfully.
Feb 9 07:54:32.358227 systemd-logind[1441]: Session 47 logged out. Waiting for processes to exit.
Feb 9 07:54:32.358852 systemd[1]: Started sshd@149-147.75.49.59:22-147.75.109.163:51118.service.
Feb 9 07:54:32.359192 systemd-logind[1441]: Removed session 47.
Feb 9 07:54:32.386679 sshd[5789]: Accepted publickey for core from 147.75.109.163 port 51118 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:54:32.387376 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:54:32.389806 systemd-logind[1441]: New session 48 of user core.
Feb 9 07:54:32.390312 systemd[1]: Started session-48.scope.
Feb 9 07:54:33.356887 sshd[5763]: Failed password for invalid user juha from 43.157.98.116 port 58578 ssh2
Feb 9 07:54:33.753550 env[1453]: time="2024-02-09T07:54:33.753282730Z" level=info msg="StopContainer for \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\" with timeout 30 (s)"
Feb 9 07:54:33.754437 env[1453]: time="2024-02-09T07:54:33.754034982Z" level=info msg="Stop container \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\" with signal terminated"
Feb 9 07:54:33.790149 systemd[1]: cri-containerd-eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191.scope: Deactivated successfully.
Feb 9 07:54:33.790790 systemd[1]: cri-containerd-eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191.scope: Consumed 4.155s CPU time.
Feb 9 07:54:33.807820 env[1453]: time="2024-02-09T07:54:33.807747603Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 07:54:33.815231 env[1453]: time="2024-02-09T07:54:33.815188465Z" level=info msg="StopContainer for \"c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc\" with timeout 2 (s)"
Feb 9 07:54:33.815546 env[1453]: time="2024-02-09T07:54:33.815513076Z" level=info msg="Stop container \"c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc\" with signal terminated"
Feb 9 07:54:33.822379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191-rootfs.mount: Deactivated successfully.
Feb 9 07:54:33.829404 env[1453]: time="2024-02-09T07:54:33.829361862Z" level=info msg="shim disconnected" id=eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191
Feb 9 07:54:33.829538 env[1453]: time="2024-02-09T07:54:33.829406017Z" level=warning msg="cleaning up after shim disconnected" id=eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191 namespace=k8s.io
Feb 9 07:54:33.829538 env[1453]: time="2024-02-09T07:54:33.829417641Z" level=info msg="cleaning up dead shim"
Feb 9 07:54:33.833740 systemd-networkd[1301]: lxc_health: Link DOWN
Feb 9 07:54:33.833745 systemd-networkd[1301]: lxc_health: Lost carrier
Feb 9 07:54:33.835867 env[1453]: time="2024-02-09T07:54:33.835809466Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5857 runtime=io.containerd.runc.v2\n"
Feb 9 07:54:33.836980 env[1453]: time="2024-02-09T07:54:33.836927308Z" level=info msg="StopContainer for \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\" returns successfully"
Feb 9 07:54:33.837521 env[1453]: time="2024-02-09T07:54:33.837493351Z" level=info msg="StopPodSandbox for \"0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600\""
Feb 9 07:54:33.837618 env[1453]: time="2024-02-09T07:54:33.837560055Z" level=info msg="Container to stop \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 07:54:33.839391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600-shm.mount: Deactivated successfully.
Feb 9 07:54:33.855060 systemd[1]: cri-containerd-0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600.scope: Deactivated successfully.
Feb 9 07:54:33.883719 env[1453]: time="2024-02-09T07:54:33.883623841Z" level=info msg="shim disconnected" id=0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600
Feb 9 07:54:33.883719 env[1453]: time="2024-02-09T07:54:33.883686435Z" level=warning msg="cleaning up after shim disconnected" id=0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600 namespace=k8s.io
Feb 9 07:54:33.883719 env[1453]: time="2024-02-09T07:54:33.883705176Z" level=info msg="cleaning up dead shim"
Feb 9 07:54:33.883793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600-rootfs.mount: Deactivated successfully.
Feb 9 07:54:33.890838 systemd[1]: cri-containerd-c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc.scope: Deactivated successfully.
Feb 9 07:54:33.891058 systemd[1]: cri-containerd-c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc.scope: Consumed 17.356s CPU time.
Feb 9 07:54:33.901759 env[1453]: time="2024-02-09T07:54:33.901722734Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5890 runtime=io.containerd.runc.v2\n" Feb 9 07:54:33.902008 env[1453]: time="2024-02-09T07:54:33.901983424Z" level=info msg="TearDown network for sandbox \"0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600\" successfully" Feb 9 07:54:33.902008 env[1453]: time="2024-02-09T07:54:33.902004861Z" level=info msg="StopPodSandbox for \"0b39a639351d9a7b35a43a2c850616bd0ba5df4a70504e47af4e16db68032600\" returns successfully" Feb 9 07:54:33.907451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc-rootfs.mount: Deactivated successfully. Feb 9 07:54:33.926257 env[1453]: time="2024-02-09T07:54:33.926174343Z" level=info msg="shim disconnected" id=c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc Feb 9 07:54:33.926257 env[1453]: time="2024-02-09T07:54:33.926227477Z" level=warning msg="cleaning up after shim disconnected" id=c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc namespace=k8s.io Feb 9 07:54:33.926257 env[1453]: time="2024-02-09T07:54:33.926243793Z" level=info msg="cleaning up dead shim" Feb 9 07:54:33.950281 env[1453]: time="2024-02-09T07:54:33.950194697Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5914 runtime=io.containerd.runc.v2\n" Feb 9 07:54:33.952327 env[1453]: time="2024-02-09T07:54:33.952223182Z" level=info msg="StopContainer for \"c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc\" returns successfully" Feb 9 07:54:33.953321 env[1453]: time="2024-02-09T07:54:33.953186836Z" level=info msg="StopPodSandbox for \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\"" Feb 9 07:54:33.953592 env[1453]: time="2024-02-09T07:54:33.953346966Z" 
level=info msg="Container to stop \"f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 07:54:33.953592 env[1453]: time="2024-02-09T07:54:33.953403751Z" level=info msg="Container to stop \"4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 07:54:33.953592 env[1453]: time="2024-02-09T07:54:33.953448830Z" level=info msg="Container to stop \"d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 07:54:33.953592 env[1453]: time="2024-02-09T07:54:33.953518500Z" level=info msg="Container to stop \"372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 07:54:33.953592 env[1453]: time="2024-02-09T07:54:33.953562870Z" level=info msg="Container to stop \"c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 07:54:33.968129 systemd[1]: cri-containerd-1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837.scope: Deactivated successfully. 
Feb 9 07:54:33.990493 kubelet[2531]: I0209 07:54:33.990424 2531 scope.go:117] "RemoveContainer" containerID="eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191" Feb 9 07:54:33.993706 env[1453]: time="2024-02-09T07:54:33.993571014Z" level=info msg="RemoveContainer for \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\"" Feb 9 07:54:33.998860 env[1453]: time="2024-02-09T07:54:33.998739973Z" level=info msg="RemoveContainer for \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\" returns successfully" Feb 9 07:54:33.999260 kubelet[2531]: I0209 07:54:33.999209 2531 scope.go:117] "RemoveContainer" containerID="eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191" Feb 9 07:54:33.999904 env[1453]: time="2024-02-09T07:54:33.999735068Z" level=error msg="ContainerStatus for \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\": not found" Feb 9 07:54:34.000255 kubelet[2531]: E0209 07:54:34.000193 2531 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\": not found" containerID="eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191" Feb 9 07:54:34.000429 kubelet[2531]: I0209 07:54:34.000376 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191"} err="failed to get container status \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\": rpc error: code = NotFound desc = an error occurred when try to find container \"eeb1f9f4f79b925d052f6d57032b29837d44de41d54d7ba83b5a79449bbd0191\": not found" Feb 9 07:54:34.022004 env[1453]: 
time="2024-02-09T07:54:34.021774099Z" level=info msg="shim disconnected" id=1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837 Feb 9 07:54:34.022004 env[1453]: time="2024-02-09T07:54:34.021905802Z" level=warning msg="cleaning up after shim disconnected" id=1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837 namespace=k8s.io Feb 9 07:54:34.022004 env[1453]: time="2024-02-09T07:54:34.021951431Z" level=info msg="cleaning up dead shim" Feb 9 07:54:34.029693 kubelet[2531]: I0209 07:54:34.029606 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33210bc5-a004-4860-b871-9d6a1a8e52e6-cilium-config-path\") pod \"33210bc5-a004-4860-b871-9d6a1a8e52e6\" (UID: \"33210bc5-a004-4860-b871-9d6a1a8e52e6\") " Feb 9 07:54:34.030056 kubelet[2531]: I0209 07:54:34.029720 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-645r2\" (UniqueName: \"kubernetes.io/projected/33210bc5-a004-4860-b871-9d6a1a8e52e6-kube-api-access-645r2\") pod \"33210bc5-a004-4860-b871-9d6a1a8e52e6\" (UID: \"33210bc5-a004-4860-b871-9d6a1a8e52e6\") " Feb 9 07:54:34.034964 kubelet[2531]: I0209 07:54:34.034853 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33210bc5-a004-4860-b871-9d6a1a8e52e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "33210bc5-a004-4860-b871-9d6a1a8e52e6" (UID: "33210bc5-a004-4860-b871-9d6a1a8e52e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 07:54:34.036430 kubelet[2531]: I0209 07:54:34.036354 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33210bc5-a004-4860-b871-9d6a1a8e52e6-kube-api-access-645r2" (OuterVolumeSpecName: "kube-api-access-645r2") pod "33210bc5-a004-4860-b871-9d6a1a8e52e6" (UID: "33210bc5-a004-4860-b871-9d6a1a8e52e6"). 
InnerVolumeSpecName "kube-api-access-645r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 07:54:34.040289 env[1453]: time="2024-02-09T07:54:34.040174535Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5946 runtime=io.containerd.runc.v2\n" Feb 9 07:54:34.040969 env[1453]: time="2024-02-09T07:54:34.040854128Z" level=info msg="TearDown network for sandbox \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" successfully" Feb 9 07:54:34.040969 env[1453]: time="2024-02-09T07:54:34.040911203Z" level=info msg="StopPodSandbox for \"1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837\" returns successfully" Feb 9 07:54:34.130678 kubelet[2531]: I0209 07:54:34.130593 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-host-proc-sys-kernel\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.131064 kubelet[2531]: I0209 07:54:34.130717 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gf8g\" (UniqueName: \"kubernetes.io/projected/c0f72a64-a72d-459f-82cf-ae0dec9b054d-kube-api-access-7gf8g\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.131064 kubelet[2531]: I0209 07:54:34.130739 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.131064 kubelet[2531]: I0209 07:54:34.130831 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-hostproc" (OuterVolumeSpecName: "hostproc") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.131064 kubelet[2531]: I0209 07:54:34.130781 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-hostproc\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.131064 kubelet[2531]: I0209 07:54:34.130989 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-lib-modules\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.132056 kubelet[2531]: I0209 07:54:34.131078 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.132056 kubelet[2531]: I0209 07:54:34.131109 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-etc-cni-netd\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.132056 kubelet[2531]: I0209 07:54:34.131167 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.132056 kubelet[2531]: I0209 07:54:34.131232 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-cgroup\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.132056 kubelet[2531]: I0209 07:54:34.131294 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cni-path\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.132935 kubelet[2531]: I0209 07:54:34.131282 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.132935 kubelet[2531]: I0209 07:54:34.131383 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-config-path\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.132935 kubelet[2531]: I0209 07:54:34.131373 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cni-path" (OuterVolumeSpecName: "cni-path") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.132935 kubelet[2531]: I0209 07:54:34.131469 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-run\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.132935 kubelet[2531]: I0209 07:54:34.131578 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-bpf-maps\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.133536 kubelet[2531]: I0209 07:54:34.131571 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.133536 kubelet[2531]: I0209 07:54:34.131651 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0f72a64-a72d-459f-82cf-ae0dec9b054d-clustermesh-secrets\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.133536 kubelet[2531]: I0209 07:54:34.131669 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.133536 kubelet[2531]: I0209 07:54:34.131717 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0f72a64-a72d-459f-82cf-ae0dec9b054d-hubble-tls\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.133536 kubelet[2531]: I0209 07:54:34.131774 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-xtables-lock\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.133536 kubelet[2531]: I0209 07:54:34.131827 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-host-proc-sys-net\") pod \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\" (UID: \"c0f72a64-a72d-459f-82cf-ae0dec9b054d\") " Feb 9 07:54:34.134191 kubelet[2531]: I0209 07:54:34.131919 2531 reconciler_common.go:300] "Volume detached for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-cgroup\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.134191 kubelet[2531]: I0209 07:54:34.131957 2531 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cni-path\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.134191 kubelet[2531]: I0209 07:54:34.131919 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.134191 kubelet[2531]: I0209 07:54:34.131988 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-run\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.134191 kubelet[2531]: I0209 07:54:34.132028 2531 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-bpf-maps\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.134191 kubelet[2531]: I0209 07:54:34.132014 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:34.134191 kubelet[2531]: I0209 07:54:34.132065 2531 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-645r2\" (UniqueName: \"kubernetes.io/projected/33210bc5-a004-4860-b871-9d6a1a8e52e6-kube-api-access-645r2\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.134955 kubelet[2531]: I0209 07:54:34.132112 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33210bc5-a004-4860-b871-9d6a1a8e52e6-cilium-config-path\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.134955 kubelet[2531]: I0209 07:54:34.132145 2531 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.134955 kubelet[2531]: I0209 07:54:34.132176 2531 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-hostproc\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.134955 kubelet[2531]: I0209 07:54:34.132206 2531 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-etc-cni-netd\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.134955 kubelet[2531]: I0209 07:54:34.132240 2531 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-lib-modules\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.136577 kubelet[2531]: I0209 07:54:34.136508 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-config-path" 
(OuterVolumeSpecName: "cilium-config-path") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 07:54:34.137752 kubelet[2531]: I0209 07:54:34.137640 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f72a64-a72d-459f-82cf-ae0dec9b054d-kube-api-access-7gf8g" (OuterVolumeSpecName: "kube-api-access-7gf8g") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "kube-api-access-7gf8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 07:54:34.138017 kubelet[2531]: I0209 07:54:34.137963 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0f72a64-a72d-459f-82cf-ae0dec9b054d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 07:54:34.138416 kubelet[2531]: I0209 07:54:34.138341 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f72a64-a72d-459f-82cf-ae0dec9b054d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c0f72a64-a72d-459f-82cf-ae0dec9b054d" (UID: "c0f72a64-a72d-459f-82cf-ae0dec9b054d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 07:54:34.233138 kubelet[2531]: I0209 07:54:34.233038 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0f72a64-a72d-459f-82cf-ae0dec9b054d-cilium-config-path\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.233138 kubelet[2531]: I0209 07:54:34.233119 2531 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0f72a64-a72d-459f-82cf-ae0dec9b054d-clustermesh-secrets\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.233138 kubelet[2531]: I0209 07:54:34.233159 2531 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0f72a64-a72d-459f-82cf-ae0dec9b054d-hubble-tls\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.233705 kubelet[2531]: I0209 07:54:34.233194 2531 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-xtables-lock\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.233705 kubelet[2531]: I0209 07:54:34.233226 2531 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0f72a64-a72d-459f-82cf-ae0dec9b054d-host-proc-sys-net\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.233705 kubelet[2531]: I0209 07:54:34.233260 2531 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7gf8g\" (UniqueName: \"kubernetes.io/projected/c0f72a64-a72d-459f-82cf-ae0dec9b054d-kube-api-access-7gf8g\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:34.300140 systemd[1]: Removed slice kubepods-besteffort-pod33210bc5_a004_4860_b871_9d6a1a8e52e6.slice. 
Feb 9 07:54:34.300407 systemd[1]: kubepods-besteffort-pod33210bc5_a004_4860_b871_9d6a1a8e52e6.slice: Consumed 4.187s CPU time. Feb 9 07:54:34.734680 sshd[5763]: Received disconnect from 43.157.98.116 port 58578:11: Bye Bye [preauth] Feb 9 07:54:34.734680 sshd[5763]: Disconnected from invalid user juha 43.157.98.116 port 58578 [preauth] Feb 9 07:54:34.737544 systemd[1]: sshd@147-147.75.49.59:22-43.157.98.116:58578.service: Deactivated successfully. Feb 9 07:54:34.788845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837-rootfs.mount: Deactivated successfully. Feb 9 07:54:34.789105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f3c39daad1b99bcc687a62cd8b14996f440e2257190aff508bd95cadc307837-shm.mount: Deactivated successfully. Feb 9 07:54:34.789311 systemd[1]: var-lib-kubelet-pods-c0f72a64\x2da72d\x2d459f\x2d82cf\x2dae0dec9b054d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7gf8g.mount: Deactivated successfully. Feb 9 07:54:34.789520 systemd[1]: var-lib-kubelet-pods-c0f72a64\x2da72d\x2d459f\x2d82cf\x2dae0dec9b054d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 07:54:34.789696 systemd[1]: var-lib-kubelet-pods-c0f72a64\x2da72d\x2d459f\x2d82cf\x2dae0dec9b054d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 07:54:34.789809 systemd[1]: var-lib-kubelet-pods-33210bc5\x2da004\x2d4860\x2db871\x2d9d6a1a8e52e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d645r2.mount: Deactivated successfully. Feb 9 07:54:34.929929 kubelet[2531]: I0209 07:54:34.929831 2531 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="33210bc5-a004-4860-b871-9d6a1a8e52e6" path="/var/lib/kubelet/pods/33210bc5-a004-4860-b871-9d6a1a8e52e6/volumes" Feb 9 07:54:34.934881 systemd[1]: Removed slice kubepods-burstable-podc0f72a64_a72d_459f_82cf_ae0dec9b054d.slice. 
Feb 9 07:54:34.934936 systemd[1]: kubepods-burstable-podc0f72a64_a72d_459f_82cf_ae0dec9b054d.slice: Consumed 17.427s CPU time. Feb 9 07:54:35.000897 kubelet[2531]: I0209 07:54:35.000694 2531 scope.go:117] "RemoveContainer" containerID="c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc" Feb 9 07:54:35.003496 env[1453]: time="2024-02-09T07:54:35.003394157Z" level=info msg="RemoveContainer for \"c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc\"" Feb 9 07:54:35.007633 env[1453]: time="2024-02-09T07:54:35.007526835Z" level=info msg="RemoveContainer for \"c1a0b52a1709b87042cee6c56e2b8ba83d5b3c3c78af67464ff4b84bf6c11fbc\" returns successfully" Feb 9 07:54:35.008001 kubelet[2531]: I0209 07:54:35.007941 2531 scope.go:117] "RemoveContainer" containerID="372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff" Feb 9 07:54:35.010911 env[1453]: time="2024-02-09T07:54:35.010820208Z" level=info msg="RemoveContainer for \"372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff\"" Feb 9 07:54:35.015083 env[1453]: time="2024-02-09T07:54:35.014976691Z" level=info msg="RemoveContainer for \"372d5f13975d0c8fefea4a5019d00af370a574671ad7af8ece26f5895d8fd0ff\" returns successfully" Feb 9 07:54:35.015416 kubelet[2531]: I0209 07:54:35.015377 2531 scope.go:117] "RemoveContainer" containerID="d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205" Feb 9 07:54:35.018055 env[1453]: time="2024-02-09T07:54:35.017966906Z" level=info msg="RemoveContainer for \"d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205\"" Feb 9 07:54:35.022458 env[1453]: time="2024-02-09T07:54:35.022378050Z" level=info msg="RemoveContainer for \"d738c25fbfc20165d361b737d7ecc7e834a1e625a338d8ba75bbbab045a21205\" returns successfully" Feb 9 07:54:35.022827 kubelet[2531]: I0209 07:54:35.022776 2531 scope.go:117] "RemoveContainer" containerID="4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050" Feb 9 07:54:35.025492 env[1453]: 
time="2024-02-09T07:54:35.025403752Z" level=info msg="RemoveContainer for \"4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050\"" Feb 9 07:54:35.029592 env[1453]: time="2024-02-09T07:54:35.029467460Z" level=info msg="RemoveContainer for \"4f076625e93131468731cafba99c3043ae4b9be265b892dc5619ca95416ff050\" returns successfully" Feb 9 07:54:35.029910 kubelet[2531]: I0209 07:54:35.029828 2531 scope.go:117] "RemoveContainer" containerID="f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03" Feb 9 07:54:35.032259 env[1453]: time="2024-02-09T07:54:35.032160372Z" level=info msg="RemoveContainer for \"f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03\"" Feb 9 07:54:35.035941 env[1453]: time="2024-02-09T07:54:35.035836551Z" level=info msg="RemoveContainer for \"f92d942bbec895424927273d01c535b702892dff2b2d7aaac9a1951fec1b6f03\" returns successfully" Feb 9 07:54:35.699430 sshd[5789]: pam_unix(sshd:session): session closed for user core Feb 9 07:54:35.706789 systemd[1]: sshd@149-147.75.49.59:22-147.75.109.163:51118.service: Deactivated successfully. Feb 9 07:54:35.707243 systemd[1]: session-48.scope: Deactivated successfully. Feb 9 07:54:35.707788 systemd-logind[1441]: Session 48 logged out. Waiting for processes to exit. Feb 9 07:54:35.708388 systemd[1]: Started sshd@150-147.75.49.59:22-147.75.109.163:36018.service. Feb 9 07:54:35.708986 systemd-logind[1441]: Removed session 48. Feb 9 07:54:35.737353 sshd[5966]: Accepted publickey for core from 147.75.109.163 port 36018 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 07:54:35.738142 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 07:54:35.741005 systemd-logind[1441]: New session 49 of user core. Feb 9 07:54:35.741527 systemd[1]: Started session-49.scope. 
Feb 9 07:54:36.099760 sshd[5966]: pam_unix(sshd:session): session closed for user core Feb 9 07:54:36.102294 systemd[1]: sshd@150-147.75.49.59:22-147.75.109.163:36018.service: Deactivated successfully. Feb 9 07:54:36.102806 systemd[1]: session-49.scope: Deactivated successfully. Feb 9 07:54:36.103195 systemd-logind[1441]: Session 49 logged out. Waiting for processes to exit. Feb 9 07:54:36.104223 systemd[1]: Started sshd@151-147.75.49.59:22-147.75.109.163:36028.service. Feb 9 07:54:36.104742 systemd-logind[1441]: Removed session 49. Feb 9 07:54:36.106227 kubelet[2531]: I0209 07:54:36.106212 2531 topology_manager.go:215] "Topology Admit Handler" podUID="1076afb8-c940-42cf-bac0-fcb53e4781dc" podNamespace="kube-system" podName="cilium-kwqms" Feb 9 07:54:36.106464 kubelet[2531]: E0209 07:54:36.106251 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="33210bc5-a004-4860-b871-9d6a1a8e52e6" containerName="cilium-operator" Feb 9 07:54:36.106464 kubelet[2531]: E0209 07:54:36.106256 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0f72a64-a72d-459f-82cf-ae0dec9b054d" containerName="apply-sysctl-overwrites" Feb 9 07:54:36.106464 kubelet[2531]: E0209 07:54:36.106260 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0f72a64-a72d-459f-82cf-ae0dec9b054d" containerName="mount-bpf-fs" Feb 9 07:54:36.106464 kubelet[2531]: E0209 07:54:36.106264 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0f72a64-a72d-459f-82cf-ae0dec9b054d" containerName="clean-cilium-state" Feb 9 07:54:36.106464 kubelet[2531]: E0209 07:54:36.106268 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0f72a64-a72d-459f-82cf-ae0dec9b054d" containerName="mount-cgroup" Feb 9 07:54:36.106464 kubelet[2531]: E0209 07:54:36.106272 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0f72a64-a72d-459f-82cf-ae0dec9b054d" containerName="cilium-agent" Feb 9 07:54:36.106464 kubelet[2531]: I0209 
07:54:36.106286 2531 memory_manager.go:346] "RemoveStaleState removing state" podUID="33210bc5-a004-4860-b871-9d6a1a8e52e6" containerName="cilium-operator" Feb 9 07:54:36.106464 kubelet[2531]: I0209 07:54:36.106297 2531 memory_manager.go:346] "RemoveStaleState removing state" podUID="c0f72a64-a72d-459f-82cf-ae0dec9b054d" containerName="cilium-agent" Feb 9 07:54:36.110215 systemd[1]: Created slice kubepods-burstable-pod1076afb8_c940_42cf_bac0_fcb53e4781dc.slice. Feb 9 07:54:36.134776 sshd[5989]: Accepted publickey for core from 147.75.109.163 port 36028 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 07:54:36.135587 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 07:54:36.138029 systemd-logind[1441]: New session 50 of user core. Feb 9 07:54:36.138506 systemd[1]: Started session-50.scope. Feb 9 07:54:36.147033 kubelet[2531]: I0209 07:54:36.146988 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-bpf-maps\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147033 kubelet[2531]: I0209 07:54:36.147009 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-hostproc\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147097 kubelet[2531]: I0209 07:54:36.147056 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-host-proc-sys-kernel\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147097 
kubelet[2531]: I0209 07:54:36.147075 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-config-path\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147097 kubelet[2531]: I0209 07:54:36.147087 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-ipsec-secrets\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147156 kubelet[2531]: I0209 07:54:36.147099 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-lib-modules\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147156 kubelet[2531]: I0209 07:54:36.147110 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-run\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147156 kubelet[2531]: I0209 07:54:36.147139 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-etc-cni-netd\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147211 kubelet[2531]: I0209 07:54:36.147157 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1076afb8-c940-42cf-bac0-fcb53e4781dc-clustermesh-secrets\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147211 kubelet[2531]: I0209 07:54:36.147176 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1076afb8-c940-42cf-bac0-fcb53e4781dc-hubble-tls\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147247 kubelet[2531]: I0209 07:54:36.147215 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cni-path\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147247 kubelet[2531]: I0209 07:54:36.147233 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-xtables-lock\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147283 kubelet[2531]: I0209 07:54:36.147253 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-host-proc-sys-net\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147283 kubelet[2531]: I0209 07:54:36.147271 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-cgroup\") pod \"cilium-kwqms\" 
(UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.147283 kubelet[2531]: I0209 07:54:36.147282 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxbnd\" (UniqueName: \"kubernetes.io/projected/1076afb8-c940-42cf-bac0-fcb53e4781dc-kube-api-access-gxbnd\") pod \"cilium-kwqms\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " pod="kube-system/cilium-kwqms" Feb 9 07:54:36.275214 sshd[5989]: pam_unix(sshd:session): session closed for user core Feb 9 07:54:36.277105 systemd[1]: sshd@151-147.75.49.59:22-147.75.109.163:36028.service: Deactivated successfully. Feb 9 07:54:36.277608 systemd[1]: session-50.scope: Deactivated successfully. Feb 9 07:54:36.278083 systemd-logind[1441]: Session 50 logged out. Waiting for processes to exit. Feb 9 07:54:36.278707 systemd[1]: Started sshd@152-147.75.49.59:22-147.75.109.163:36032.service. Feb 9 07:54:36.279059 systemd-logind[1441]: Removed session 50. Feb 9 07:54:36.281886 env[1453]: time="2024-02-09T07:54:36.281862894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwqms,Uid:1076afb8-c940-42cf-bac0-fcb53e4781dc,Namespace:kube-system,Attempt:0,}" Feb 9 07:54:36.287091 env[1453]: time="2024-02-09T07:54:36.287029197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:54:36.287091 env[1453]: time="2024-02-09T07:54:36.287051149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:54:36.287091 env[1453]: time="2024-02-09T07:54:36.287058081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:54:36.287210 env[1453]: time="2024-02-09T07:54:36.287120138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124 pid=6027 runtime=io.containerd.runc.v2 Feb 9 07:54:36.306469 systemd[1]: Started cri-containerd-47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124.scope. Feb 9 07:54:36.307593 sshd[6018]: Accepted publickey for core from 147.75.109.163 port 36032 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 07:54:36.308302 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 07:54:36.310252 systemd-logind[1441]: New session 51 of user core. Feb 9 07:54:36.310787 systemd[1]: Started session-51.scope. Feb 9 07:54:36.329813 env[1453]: time="2024-02-09T07:54:36.329786030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwqms,Uid:1076afb8-c940-42cf-bac0-fcb53e4781dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124\"" Feb 9 07:54:36.331147 env[1453]: time="2024-02-09T07:54:36.331106226Z" level=info msg="CreateContainer within sandbox \"47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 07:54:36.336458 env[1453]: time="2024-02-09T07:54:36.336410895Z" level=info msg="CreateContainer within sandbox \"47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70\"" Feb 9 07:54:36.336622 env[1453]: time="2024-02-09T07:54:36.336607251Z" level=info msg="StartContainer for \"2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70\"" Feb 9 07:54:36.344977 systemd[1]: Started 
cri-containerd-2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70.scope. Feb 9 07:54:36.351784 systemd[1]: cri-containerd-2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70.scope: Deactivated successfully. Feb 9 07:54:36.351988 systemd[1]: Stopped cri-containerd-2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70.scope. Feb 9 07:54:36.369289 env[1453]: time="2024-02-09T07:54:36.369244697Z" level=info msg="shim disconnected" id=2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70 Feb 9 07:54:36.369289 env[1453]: time="2024-02-09T07:54:36.369290090Z" level=warning msg="cleaning up after shim disconnected" id=2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70 namespace=k8s.io Feb 9 07:54:36.369523 env[1453]: time="2024-02-09T07:54:36.369301264Z" level=info msg="cleaning up dead shim" Feb 9 07:54:36.386795 env[1453]: time="2024-02-09T07:54:36.386726537Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6098 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T07:54:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 07:54:36.387011 env[1453]: time="2024-02-09T07:54:36.386906357Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Feb 9 07:54:36.387140 env[1453]: time="2024-02-09T07:54:36.387078218Z" level=error msg="Failed to pipe stdout of container \"2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70\"" error="reading from a closed fifo" Feb 9 07:54:36.387140 env[1453]: time="2024-02-09T07:54:36.387113955Z" level=error msg="Failed to pipe stderr of container \"2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70\"" error="reading from a closed fifo" Feb 9 
07:54:36.387700 env[1453]: time="2024-02-09T07:54:36.387668108Z" level=error msg="StartContainer for \"2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 07:54:36.387919 kubelet[2531]: E0209 07:54:36.387872 2531 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70" Feb 9 07:54:36.387998 kubelet[2531]: E0209 07:54:36.387968 2531 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 07:54:36.387998 kubelet[2531]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 07:54:36.387998 kubelet[2531]: rm /hostbin/cilium-mount Feb 9 07:54:36.388087 kubelet[2531]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gxbnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kwqms_kube-system(1076afb8-c940-42cf-bac0-fcb53e4781dc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 07:54:36.388087 kubelet[2531]: E0209 07:54:36.388002 2531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kwqms" podUID="1076afb8-c940-42cf-bac0-fcb53e4781dc" Feb 9 07:54:36.657917 kubelet[2531]: E0209 07:54:36.657710 2531 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 07:54:36.930266 kubelet[2531]: I0209 07:54:36.930093 2531 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c0f72a64-a72d-459f-82cf-ae0dec9b054d" path="/var/lib/kubelet/pods/c0f72a64-a72d-459f-82cf-ae0dec9b054d/volumes" Feb 9 07:54:37.012077 env[1453]: time="2024-02-09T07:54:37.011945808Z" level=info msg="StopPodSandbox for \"47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124\"" Feb 9 07:54:37.012389 env[1453]: time="2024-02-09T07:54:37.012091230Z" level=info msg="Container to stop \"2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 07:54:37.032850 systemd[1]: cri-containerd-47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124.scope: Deactivated successfully. 
Feb 9 07:54:37.072375 env[1453]: time="2024-02-09T07:54:37.072268492Z" level=info msg="shim disconnected" id=47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124 Feb 9 07:54:37.072375 env[1453]: time="2024-02-09T07:54:37.072353885Z" level=warning msg="cleaning up after shim disconnected" id=47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124 namespace=k8s.io Feb 9 07:54:37.072671 env[1453]: time="2024-02-09T07:54:37.072379345Z" level=info msg="cleaning up dead shim" Feb 9 07:54:37.082997 env[1453]: time="2024-02-09T07:54:37.082893031Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6133 runtime=io.containerd.runc.v2\n" Feb 9 07:54:37.083551 env[1453]: time="2024-02-09T07:54:37.083446967Z" level=info msg="TearDown network for sandbox \"47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124\" successfully" Feb 9 07:54:37.083551 env[1453]: time="2024-02-09T07:54:37.083512704Z" level=info msg="StopPodSandbox for \"47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124\" returns successfully" Feb 9 07:54:37.153362 kubelet[2531]: I0209 07:54:37.153251 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-hostproc\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.153362 kubelet[2531]: I0209 07:54:37.153372 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-ipsec-secrets\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153379 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-hostproc" (OuterVolumeSpecName: "hostproc") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153440 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-host-proc-sys-kernel\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153526 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxbnd\" (UniqueName: \"kubernetes.io/projected/1076afb8-c940-42cf-bac0-fcb53e4781dc-kube-api-access-gxbnd\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153586 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-xtables-lock\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153574 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153640 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-bpf-maps\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153703 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-config-path\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153694 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153768 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1076afb8-c940-42cf-bac0-fcb53e4781dc-hubble-tls\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153769 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153828 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-cgroup\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153889 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-run\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153885 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153944 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cni-path\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.154817 kubelet[2531]: I0209 07:54:37.153944 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154001 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-lib-modules\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154003 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cni-path" (OuterVolumeSpecName: "cni-path") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154061 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1076afb8-c940-42cf-bac0-fcb53e4781dc-clustermesh-secrets\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154116 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-etc-cni-netd\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154106 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154174 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-host-proc-sys-net\") pod \"1076afb8-c940-42cf-bac0-fcb53e4781dc\" (UID: \"1076afb8-c940-42cf-bac0-fcb53e4781dc\") " Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154204 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154261 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-cgroup\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154306 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-run\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154311 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154343 2531 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-lib-modules\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154387 2531 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-cni-path\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154416 2531 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-hostproc\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154447 2531 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154478 2531 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-bpf-maps\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.157642 kubelet[2531]: I0209 07:54:37.154526 2531 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-xtables-lock\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.159868 kubelet[2531]: I0209 07:54:37.159831 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1076afb8-c940-42cf-bac0-fcb53e4781dc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod 
"1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 07:54:37.159868 kubelet[2531]: I0209 07:54:37.159842 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 07:54:37.159934 kubelet[2531]: I0209 07:54:37.159875 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1076afb8-c940-42cf-bac0-fcb53e4781dc-kube-api-access-gxbnd" (OuterVolumeSpecName: "kube-api-access-gxbnd") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "kube-api-access-gxbnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 07:54:37.159976 kubelet[2531]: I0209 07:54:37.159965 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1076afb8-c940-42cf-bac0-fcb53e4781dc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 07:54:37.160164 kubelet[2531]: I0209 07:54:37.160148 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1076afb8-c940-42cf-bac0-fcb53e4781dc" (UID: "1076afb8-c940-42cf-bac0-fcb53e4781dc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 07:54:37.186283 systemd[1]: Started sshd@153-147.75.49.59:22-139.59.127.178:47956.service. Feb 9 07:54:37.254893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124-rootfs.mount: Deactivated successfully. Feb 9 07:54:37.255189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47232ee6e3aea4199f0439ce1e5c962f2ae7fee5e520e54900a169c9d7eec124-shm.mount: Deactivated successfully. Feb 9 07:54:37.255393 systemd[1]: var-lib-kubelet-pods-1076afb8\x2dc940\x2d42cf\x2dbac0\x2dfcb53e4781dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxbnd.mount: Deactivated successfully. Feb 9 07:54:37.255591 systemd[1]: var-lib-kubelet-pods-1076afb8\x2dc940\x2d42cf\x2dbac0\x2dfcb53e4781dc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 07:54:37.255762 systemd[1]: var-lib-kubelet-pods-1076afb8\x2dc940\x2d42cf\x2dbac0\x2dfcb53e4781dc-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 9 07:54:37.256257 kubelet[2531]: I0209 07:54:37.255881 2531 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1076afb8-c940-42cf-bac0-fcb53e4781dc-clustermesh-secrets\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.256257 kubelet[2531]: I0209 07:54:37.255976 2531 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-etc-cni-netd\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.256257 kubelet[2531]: I0209 07:54:37.256041 2531 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1076afb8-c940-42cf-bac0-fcb53e4781dc-host-proc-sys-net\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.256257 kubelet[2531]: I0209 07:54:37.256104 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.256257 kubelet[2531]: I0209 07:54:37.256167 2531 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gxbnd\" (UniqueName: \"kubernetes.io/projected/1076afb8-c940-42cf-bac0-fcb53e4781dc-kube-api-access-gxbnd\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.256257 kubelet[2531]: I0209 07:54:37.256225 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1076afb8-c940-42cf-bac0-fcb53e4781dc-cilium-config-path\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:37.255934 systemd[1]: var-lib-kubelet-pods-1076afb8\x2dc940\x2d42cf\x2dbac0\x2dfcb53e4781dc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 07:54:37.257152 kubelet[2531]: I0209 07:54:37.256278 2531 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1076afb8-c940-42cf-bac0-fcb53e4781dc-hubble-tls\") on node \"ci-3510.3.2-a-29c32a4854\" DevicePath \"\"" Feb 9 07:54:38.017763 kubelet[2531]: I0209 07:54:38.017693 2531 scope.go:117] "RemoveContainer" containerID="2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70" Feb 9 07:54:38.020319 env[1453]: time="2024-02-09T07:54:38.020240192Z" level=info msg="RemoveContainer for \"2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70\"" Feb 9 07:54:38.023921 env[1453]: time="2024-02-09T07:54:38.023906787Z" level=info msg="RemoveContainer for \"2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70\" returns successfully" Feb 9 07:54:38.024931 systemd[1]: Removed slice kubepods-burstable-pod1076afb8_c940_42cf_bac0_fcb53e4781dc.slice. Feb 9 07:54:38.039877 kubelet[2531]: I0209 07:54:38.039855 2531 topology_manager.go:215] "Topology Admit Handler" podUID="cbd09cfc-5fa1-40df-b47e-ad97b0098b2f" podNamespace="kube-system" podName="cilium-5hkfp" Feb 9 07:54:38.040014 kubelet[2531]: E0209 07:54:38.039891 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1076afb8-c940-42cf-bac0-fcb53e4781dc" containerName="mount-cgroup" Feb 9 07:54:38.040014 kubelet[2531]: I0209 07:54:38.039908 2531 memory_manager.go:346] "RemoveStaleState removing state" podUID="1076afb8-c940-42cf-bac0-fcb53e4781dc" containerName="mount-cgroup" Feb 9 07:54:38.050645 systemd[1]: Created slice kubepods-burstable-podcbd09cfc_5fa1_40df_b47e_ad97b0098b2f.slice. 
Feb 9 07:54:38.163338 kubelet[2531]: I0209 07:54:38.163261 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-cilium-ipsec-secrets\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.164462 kubelet[2531]: I0209 07:54:38.163399 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-lib-modules\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.164462 kubelet[2531]: I0209 07:54:38.163632 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-host-proc-sys-kernel\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.164462 kubelet[2531]: I0209 07:54:38.163869 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-cilium-config-path\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.164462 kubelet[2531]: I0209 07:54:38.164019 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-hubble-tls\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.164462 kubelet[2531]: I0209 07:54:38.164155 2531 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt22p\" (UniqueName: \"kubernetes.io/projected/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-kube-api-access-jt22p\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.164462 kubelet[2531]: I0209 07:54:38.164283 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-cilium-cgroup\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.164462 kubelet[2531]: I0209 07:54:38.164417 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-xtables-lock\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.165523 kubelet[2531]: I0209 07:54:38.164566 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-cilium-run\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.165523 kubelet[2531]: I0209 07:54:38.164634 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-etc-cni-netd\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.165523 kubelet[2531]: I0209 07:54:38.164776 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-host-proc-sys-net\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.165523 kubelet[2531]: I0209 07:54:38.164961 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-cni-path\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.165523 kubelet[2531]: I0209 07:54:38.165105 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-bpf-maps\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.165523 kubelet[2531]: I0209 07:54:38.165218 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-hostproc\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.165523 kubelet[2531]: I0209 07:54:38.165286 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cbd09cfc-5fa1-40df-b47e-ad97b0098b2f-clustermesh-secrets\") pod \"cilium-5hkfp\" (UID: \"cbd09cfc-5fa1-40df-b47e-ad97b0098b2f\") " pod="kube-system/cilium-5hkfp" Feb 9 07:54:38.211594 sshd[6150]: Invalid user czech from 139.59.127.178 port 47956 Feb 9 07:54:38.214298 sshd[6150]: pam_faillock(sshd:auth): User unknown Feb 9 07:54:38.214737 sshd[6150]: pam_unix(sshd:auth): check pass; user unknown Feb 9 07:54:38.214775 sshd[6150]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh 
ruser= rhost=139.59.127.178 Feb 9 07:54:38.215130 sshd[6150]: pam_faillock(sshd:auth): User unknown Feb 9 07:54:38.353140 env[1453]: time="2024-02-09T07:54:38.353082147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hkfp,Uid:cbd09cfc-5fa1-40df-b47e-ad97b0098b2f,Namespace:kube-system,Attempt:0,}" Feb 9 07:54:38.360390 env[1453]: time="2024-02-09T07:54:38.360330011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:54:38.360390 env[1453]: time="2024-02-09T07:54:38.360354428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:54:38.360390 env[1453]: time="2024-02-09T07:54:38.360363861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:54:38.360538 env[1453]: time="2024-02-09T07:54:38.360441131Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b pid=6163 runtime=io.containerd.runc.v2 Feb 9 07:54:38.380525 systemd[1]: Started cri-containerd-dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b.scope. 
Feb 9 07:54:38.397217 env[1453]: time="2024-02-09T07:54:38.397146188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hkfp,Uid:cbd09cfc-5fa1-40df-b47e-ad97b0098b2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\"" Feb 9 07:54:38.399454 env[1453]: time="2024-02-09T07:54:38.399420623Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 07:54:38.406996 env[1453]: time="2024-02-09T07:54:38.406928535Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"07bc7bed0e2b5f21576d923e27c8d54a631d993c592fad08cbbf77216e67388c\"" Feb 9 07:54:38.407392 env[1453]: time="2024-02-09T07:54:38.407358090Z" level=info msg="StartContainer for \"07bc7bed0e2b5f21576d923e27c8d54a631d993c592fad08cbbf77216e67388c\"" Feb 9 07:54:38.446532 systemd[1]: Started cri-containerd-07bc7bed0e2b5f21576d923e27c8d54a631d993c592fad08cbbf77216e67388c.scope. Feb 9 07:54:38.503756 env[1453]: time="2024-02-09T07:54:38.503622720Z" level=info msg="StartContainer for \"07bc7bed0e2b5f21576d923e27c8d54a631d993c592fad08cbbf77216e67388c\" returns successfully" Feb 9 07:54:38.525674 systemd[1]: cri-containerd-07bc7bed0e2b5f21576d923e27c8d54a631d993c592fad08cbbf77216e67388c.scope: Deactivated successfully. 
Feb 9 07:54:38.573929 env[1453]: time="2024-02-09T07:54:38.573787381Z" level=info msg="shim disconnected" id=07bc7bed0e2b5f21576d923e27c8d54a631d993c592fad08cbbf77216e67388c Feb 9 07:54:38.573929 env[1453]: time="2024-02-09T07:54:38.573894184Z" level=warning msg="cleaning up after shim disconnected" id=07bc7bed0e2b5f21576d923e27c8d54a631d993c592fad08cbbf77216e67388c namespace=k8s.io Feb 9 07:54:38.573929 env[1453]: time="2024-02-09T07:54:38.573924820Z" level=info msg="cleaning up dead shim" Feb 9 07:54:38.604092 env[1453]: time="2024-02-09T07:54:38.603910848Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6248 runtime=io.containerd.runc.v2\n" Feb 9 07:54:38.930722 kubelet[2531]: I0209 07:54:38.930535 2531 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1076afb8-c940-42cf-bac0-fcb53e4781dc" path="/var/lib/kubelet/pods/1076afb8-c940-42cf-bac0-fcb53e4781dc/volumes" Feb 9 07:54:39.033106 env[1453]: time="2024-02-09T07:54:39.033077390Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 07:54:39.037075 env[1453]: time="2024-02-09T07:54:39.037027903Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c6bfd8593e9f6ae61ce2502f95982228bfb3152bfbce768128813a4ea1d6ae8b\"" Feb 9 07:54:39.037416 env[1453]: time="2024-02-09T07:54:39.037398396Z" level=info msg="StartContainer for \"c6bfd8593e9f6ae61ce2502f95982228bfb3152bfbce768128813a4ea1d6ae8b\"" Feb 9 07:54:39.055760 kubelet[2531]: I0209 07:54:39.055738 2531 setters.go:552] "Node became not ready" node="ci-3510.3.2-a-29c32a4854" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T07:54:39Z","lastTransitionTime":"2024-02-09T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 07:54:39.057663 systemd[1]: Started cri-containerd-c6bfd8593e9f6ae61ce2502f95982228bfb3152bfbce768128813a4ea1d6ae8b.scope. Feb 9 07:54:39.072312 env[1453]: time="2024-02-09T07:54:39.072284306Z" level=info msg="StartContainer for \"c6bfd8593e9f6ae61ce2502f95982228bfb3152bfbce768128813a4ea1d6ae8b\" returns successfully" Feb 9 07:54:39.076006 systemd[1]: cri-containerd-c6bfd8593e9f6ae61ce2502f95982228bfb3152bfbce768128813a4ea1d6ae8b.scope: Deactivated successfully. Feb 9 07:54:39.103050 env[1453]: time="2024-02-09T07:54:39.102996309Z" level=info msg="shim disconnected" id=c6bfd8593e9f6ae61ce2502f95982228bfb3152bfbce768128813a4ea1d6ae8b Feb 9 07:54:39.103050 env[1453]: time="2024-02-09T07:54:39.103049748Z" level=warning msg="cleaning up after shim disconnected" id=c6bfd8593e9f6ae61ce2502f95982228bfb3152bfbce768128813a4ea1d6ae8b namespace=k8s.io Feb 9 07:54:39.103263 env[1453]: time="2024-02-09T07:54:39.103062114Z" level=info msg="cleaning up dead shim" Feb 9 07:54:39.124919 env[1453]: time="2024-02-09T07:54:39.124760273Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6307 runtime=io.containerd.runc.v2\n" Feb 9 07:54:39.476220 kubelet[2531]: W0209 07:54:39.476101 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1076afb8_c940_42cf_bac0_fcb53e4781dc.slice/cri-containerd-2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70.scope WatchSource:0}: container "2fbc8ea757cc7403ff719333265a95ecf78b871c0a14dd7379928ef7d3198c70" in namespace "k8s.io": not found Feb 9 07:54:40.038911 
env[1453]: time="2024-02-09T07:54:40.038795916Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 07:54:40.050365 env[1453]: time="2024-02-09T07:54:40.050345750Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3\"" Feb 9 07:54:40.050770 env[1453]: time="2024-02-09T07:54:40.050693340Z" level=info msg="StartContainer for \"6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3\"" Feb 9 07:54:40.051184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684337062.mount: Deactivated successfully. Feb 9 07:54:40.072420 systemd[1]: Started cri-containerd-6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3.scope. Feb 9 07:54:40.101376 env[1453]: time="2024-02-09T07:54:40.101311528Z" level=info msg="StartContainer for \"6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3\" returns successfully" Feb 9 07:54:40.103665 systemd[1]: cri-containerd-6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3.scope: Deactivated successfully. 
Feb 9 07:54:40.157154 env[1453]: time="2024-02-09T07:54:40.156979485Z" level=info msg="shim disconnected" id=6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3 Feb 9 07:54:40.157586 env[1453]: time="2024-02-09T07:54:40.157143744Z" level=warning msg="cleaning up after shim disconnected" id=6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3 namespace=k8s.io Feb 9 07:54:40.157586 env[1453]: time="2024-02-09T07:54:40.157182923Z" level=info msg="cleaning up dead shim" Feb 9 07:54:40.182565 env[1453]: time="2024-02-09T07:54:40.182514864Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6363 runtime=io.containerd.runc.v2\n" Feb 9 07:54:40.188559 sshd[6150]: Failed password for invalid user czech from 139.59.127.178 port 47956 ssh2 Feb 9 07:54:40.273818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3-rootfs.mount: Deactivated successfully. Feb 9 07:54:41.047343 env[1453]: time="2024-02-09T07:54:41.047203318Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 07:54:41.059456 env[1453]: time="2024-02-09T07:54:41.059404031Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1\"" Feb 9 07:54:41.059825 env[1453]: time="2024-02-09T07:54:41.059768497Z" level=info msg="StartContainer for \"d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1\"" Feb 9 07:54:41.068401 systemd[1]: Started cri-containerd-d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1.scope. 
Feb 9 07:54:41.101084 env[1453]: time="2024-02-09T07:54:41.100957254Z" level=info msg="StartContainer for \"d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1\" returns successfully" Feb 9 07:54:41.101710 systemd[1]: cri-containerd-d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1.scope: Deactivated successfully. Feb 9 07:54:41.157902 env[1453]: time="2024-02-09T07:54:41.157795584Z" level=info msg="shim disconnected" id=d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1 Feb 9 07:54:41.158330 env[1453]: time="2024-02-09T07:54:41.157905301Z" level=warning msg="cleaning up after shim disconnected" id=d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1 namespace=k8s.io Feb 9 07:54:41.158330 env[1453]: time="2024-02-09T07:54:41.157937771Z" level=info msg="cleaning up dead shim" Feb 9 07:54:41.187011 env[1453]: time="2024-02-09T07:54:41.186874184Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6418 runtime=io.containerd.runc.v2\n" Feb 9 07:54:41.277940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1-rootfs.mount: Deactivated successfully. Feb 9 07:54:41.513168 sshd[6150]: Received disconnect from 139.59.127.178 port 47956:11: Bye Bye [preauth] Feb 9 07:54:41.513168 sshd[6150]: Disconnected from invalid user czech 139.59.127.178 port 47956 [preauth] Feb 9 07:54:41.515650 systemd[1]: sshd@153-147.75.49.59:22-139.59.127.178:47956.service: Deactivated successfully. 
Feb 9 07:54:41.659425 kubelet[2531]: E0209 07:54:41.659322 2531 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 07:54:42.002644 sshd[5042]: Timeout before authentication for 124.220.81.132 port 32834 Feb 9 07:54:42.004074 systemd[1]: sshd@97-147.75.49.59:22-124.220.81.132:32834.service: Deactivated successfully. Feb 9 07:54:42.056750 env[1453]: time="2024-02-09T07:54:42.056625584Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 07:54:42.075161 env[1453]: time="2024-02-09T07:54:42.075069393Z" level=info msg="CreateContainer within sandbox \"dbe041e9c9eeeeaf994a0f034f96bca576d60df72909409ebc3736ad4918753b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f3236abe5117848f3283f868d91c738d4a3e931837b70613420924e7e3747bf6\"" Feb 9 07:54:42.075973 env[1453]: time="2024-02-09T07:54:42.075946733Z" level=info msg="StartContainer for \"f3236abe5117848f3283f868d91c738d4a3e931837b70613420924e7e3747bf6\"" Feb 9 07:54:42.097552 systemd[1]: Started cri-containerd-f3236abe5117848f3283f868d91c738d4a3e931837b70613420924e7e3747bf6.scope. 
Feb 9 07:54:42.122969 env[1453]: time="2024-02-09T07:54:42.122914140Z" level=info msg="StartContainer for \"f3236abe5117848f3283f868d91c738d4a3e931837b70613420924e7e3747bf6\" returns successfully" Feb 9 07:54:42.282491 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 07:54:42.590980 kubelet[2531]: W0209 07:54:42.590870 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd09cfc_5fa1_40df_b47e_ad97b0098b2f.slice/cri-containerd-07bc7bed0e2b5f21576d923e27c8d54a631d993c592fad08cbbf77216e67388c.scope WatchSource:0}: task 07bc7bed0e2b5f21576d923e27c8d54a631d993c592fad08cbbf77216e67388c not found: not found Feb 9 07:54:43.095034 kubelet[2531]: I0209 07:54:43.094956 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5hkfp" podStartSLOduration=5.094866326 podCreationTimestamp="2024-02-09 07:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:54:43.094692909 +0000 UTC m=+2372.224991539" watchObservedRunningTime="2024-02-09 07:54:43.094866326 +0000 UTC m=+2372.225164956" Feb 9 07:54:45.218249 systemd-networkd[1301]: lxc_health: Link UP Feb 9 07:54:45.239306 systemd-networkd[1301]: lxc_health: Gained carrier Feb 9 07:54:45.239494 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 07:54:45.699951 kubelet[2531]: W0209 07:54:45.699895 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd09cfc_5fa1_40df_b47e_ad97b0098b2f.slice/cri-containerd-c6bfd8593e9f6ae61ce2502f95982228bfb3152bfbce768128813a4ea1d6ae8b.scope WatchSource:0}: task c6bfd8593e9f6ae61ce2502f95982228bfb3152bfbce768128813a4ea1d6ae8b not found: not found Feb 9 07:54:47.139626 systemd-networkd[1301]: lxc_health: Gained IPv6LL Feb 9 07:54:48.804102 
kubelet[2531]: W0209 07:54:48.804045 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd09cfc_5fa1_40df_b47e_ad97b0098b2f.slice/cri-containerd-6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3.scope WatchSource:0}: task 6ee68558b9fa321e2f7753cbe55665a38addb9719bab011676aac82fd493f5f3 not found: not found Feb 9 07:54:49.580705 systemd[1]: Started sshd@154-147.75.49.59:22-198.20.246.131:45928.service. Feb 9 07:54:49.665110 systemd[1]: Started sshd@155-147.75.49.59:22-43.140.221.64:47208.service. Feb 9 07:54:49.788066 sshd[7224]: Invalid user aylin from 198.20.246.131 port 45928 Feb 9 07:54:49.793705 sshd[7224]: pam_faillock(sshd:auth): User unknown Feb 9 07:54:49.794775 sshd[7224]: pam_unix(sshd:auth): check pass; user unknown Feb 9 07:54:49.794859 sshd[7224]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.246.131 Feb 9 07:54:49.795802 sshd[7224]: pam_faillock(sshd:auth): User unknown Feb 9 07:54:50.234135 systemd[1]: Started sshd@156-147.75.49.59:22-51.77.245.237:35230.service. Feb 9 07:54:50.984194 sshd[6018]: pam_unix(sshd:session): session closed for user core Feb 9 07:54:50.986760 systemd[1]: sshd@152-147.75.49.59:22-147.75.109.163:36032.service: Deactivated successfully. Feb 9 07:54:50.987692 systemd[1]: session-51.scope: Deactivated successfully. Feb 9 07:54:50.988424 systemd-logind[1441]: Session 51 logged out. Waiting for processes to exit. Feb 9 07:54:50.989324 systemd-logind[1441]: Removed session 51. 
Feb 9 07:54:51.061471 sshd[7229]: Invalid user nazi from 51.77.245.237 port 35230 Feb 9 07:54:51.067625 sshd[7229]: pam_faillock(sshd:auth): User unknown Feb 9 07:54:51.068789 sshd[7229]: pam_unix(sshd:auth): check pass; user unknown Feb 9 07:54:51.068881 sshd[7229]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.77.245.237 Feb 9 07:54:51.069906 sshd[7229]: pam_faillock(sshd:auth): User unknown Feb 9 07:54:51.265014 systemd[1]: Started sshd@157-147.75.49.59:22-54.37.228.73:58154.service. Feb 9 07:54:51.912661 kubelet[2531]: W0209 07:54:51.912551 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd09cfc_5fa1_40df_b47e_ad97b0098b2f.slice/cri-containerd-d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1.scope WatchSource:0}: task d6e9ed879d69a30ce461704c2cbf2a48d2a645245323a5233bf3bb900de70ce1 not found: not found Feb 9 07:54:51.944789 sshd[7224]: Failed password for invalid user aylin from 198.20.246.131 port 45928 ssh2 Feb 9 07:54:52.082262 sshd[7262]: Invalid user dadaham from 54.37.228.73 port 58154 Feb 9 07:54:52.088462 sshd[7262]: pam_faillock(sshd:auth): User unknown Feb 9 07:54:52.089447 sshd[7262]: pam_unix(sshd:auth): check pass; user unknown Feb 9 07:54:52.089564 sshd[7262]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=54.37.228.73 Feb 9 07:54:52.090569 sshd[7262]: pam_faillock(sshd:auth): User unknown Feb 9 07:54:52.600328 systemd[1]: Started sshd@158-147.75.49.59:22-165.227.213.175:44086.service.