Feb 9 22:49:51.549659 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 22:49:51.549672 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 22:49:51.549679 kernel: BIOS-provided physical RAM map:
Feb 9 22:49:51.549683 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 9 22:49:51.549686 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 9 22:49:51.549690 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 9 22:49:51.549694 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 9 22:49:51.549698 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 9 22:49:51.549702 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000082589fff] usable
Feb 9 22:49:51.549706 kernel: BIOS-e820: [mem 0x000000008258a000-0x000000008258afff] ACPI NVS
Feb 9 22:49:51.549710 kernel: BIOS-e820: [mem 0x000000008258b000-0x000000008258bfff] reserved
Feb 9 22:49:51.549714 kernel: BIOS-e820: [mem 0x000000008258c000-0x000000008afccfff] usable
Feb 9 22:49:51.549718 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 9 22:49:51.549722 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 9 22:49:51.549727 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 9 22:49:51.549732 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 9 22:49:51.549736 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 9 22:49:51.549741 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 9 22:49:51.549745 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 9 22:49:51.549749 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 9 22:49:51.549753 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 9 22:49:51.549757 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 22:49:51.549761 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 9 22:49:51.549765 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 9 22:49:51.549769 kernel: NX (Execute Disable) protection: active
Feb 9 22:49:51.549773 kernel: SMBIOS 3.2.1 present.
Feb 9 22:49:51.549778 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Feb 9 22:49:51.549783 kernel: tsc: Detected 3400.000 MHz processor
Feb 9 22:49:51.549787 kernel: tsc: Detected 3399.906 MHz TSC
Feb 9 22:49:51.549791 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 22:49:51.549796 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 22:49:51.549800 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 9 22:49:51.549804 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 22:49:51.549809 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 9 22:49:51.549813 kernel: Using GB pages for direct mapping
Feb 9 22:49:51.549817 kernel: ACPI: Early table checksum verification disabled
Feb 9 22:49:51.549822 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 9 22:49:51.549827 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 9 22:49:51.549831 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 9 22:49:51.549835 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 9 22:49:51.549842 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 9 22:49:51.549846 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 9 22:49:51.549852 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 9 22:49:51.549856 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 9 22:49:51.549861 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 9 22:49:51.549866 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 9 22:49:51.549870 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 9 22:49:51.549875 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 9 22:49:51.549880 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 9 22:49:51.549884 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 22:49:51.549890 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 9 22:49:51.549894 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 9 22:49:51.549899 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 22:49:51.549904 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 22:49:51.549908 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 9 22:49:51.549913 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 9 22:49:51.549917 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 22:49:51.549922 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 9 22:49:51.549927 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 9 22:49:51.549932 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 9 22:49:51.549937 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 9 22:49:51.549941 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 9 22:49:51.549946 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 9 22:49:51.549950 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 9 22:49:51.549955 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 9 22:49:51.549960 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 9 22:49:51.549964 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 9 22:49:51.549970 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 9 22:49:51.549974 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 9 22:49:51.549979 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 9 22:49:51.549984 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 9 22:49:51.549988 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 9 22:49:51.549993 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 9 22:49:51.549998 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 9 22:49:51.550002 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 9 22:49:51.550008 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 9 22:49:51.550012 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 9 22:49:51.550017 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 9 22:49:51.550021 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 9 22:49:51.550026 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 9 22:49:51.550031 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 9 22:49:51.550035 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 9 22:49:51.550040 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 9 22:49:51.550044 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 9 22:49:51.550050 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 9 22:49:51.550054 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 9 22:49:51.550059 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 9 22:49:51.550063 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 9 22:49:51.550068 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 9 22:49:51.550073 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 9 22:49:51.550077 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 9 22:49:51.550082 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 9 22:49:51.550086 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 9 22:49:51.550092 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 9 22:49:51.550096 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 9 22:49:51.550101 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 9 22:49:51.550106 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 9 22:49:51.550110 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 9 22:49:51.550115 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 9 22:49:51.550119 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 9 22:49:51.550124 kernel: No NUMA configuration found
Feb 9 22:49:51.550129 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 9 22:49:51.550134 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 9 22:49:51.550139 kernel: Zone ranges:
Feb 9 22:49:51.550144 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 22:49:51.550148 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 22:49:51.550153 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 9 22:49:51.550157 kernel: Movable zone start for each node
Feb 9 22:49:51.550162 kernel: Early memory node ranges
Feb 9 22:49:51.550167 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 9 22:49:51.550171 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 9 22:49:51.550176 kernel: node 0: [mem 0x0000000040400000-0x0000000082589fff]
Feb 9 22:49:51.550181 kernel: node 0: [mem 0x000000008258c000-0x000000008afccfff]
Feb 9 22:49:51.550186 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 9 22:49:51.550190 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 9 22:49:51.550195 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 9 22:49:51.550200 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 9 22:49:51.550204 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 22:49:51.550212 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 9 22:49:51.550218 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 9 22:49:51.550223 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 9 22:49:51.550228 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 9 22:49:51.550234 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 9 22:49:51.550239 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 9 22:49:51.550244 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 9 22:49:51.550249 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 9 22:49:51.550254 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 22:49:51.550259 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 22:49:51.550264 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 22:49:51.550270 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 22:49:51.550275 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 22:49:51.550279 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 22:49:51.550284 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 22:49:51.550289 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 22:49:51.550294 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 22:49:51.550299 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 22:49:51.550304 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 22:49:51.550309 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 22:49:51.550315 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 22:49:51.550320 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 22:49:51.550325 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 22:49:51.550329 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 22:49:51.550334 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 9 22:49:51.550339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 22:49:51.550344 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 22:49:51.550349 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 22:49:51.550354 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 22:49:51.550360 kernel: TSC deadline timer available
Feb 9 22:49:51.550365 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 9 22:49:51.550370 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 9 22:49:51.550375 kernel: Booting paravirtualized kernel on bare hardware
Feb 9 22:49:51.550380 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 22:49:51.550385 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 9 22:49:51.550390 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 22:49:51.550395 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 22:49:51.550400 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 9 22:49:51.550405 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 9 22:49:51.550413 kernel: Policy zone: Normal
Feb 9 22:49:51.550437 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 22:49:51.550443 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 22:49:51.550448 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 9 22:49:51.550453 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 9 22:49:51.550473 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 22:49:51.550478 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 9 22:49:51.550484 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 9 22:49:51.550489 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 22:49:51.550494 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 22:49:51.550499 kernel: rcu: Hierarchical RCU implementation.
Feb 9 22:49:51.550504 kernel: rcu: RCU event tracing is enabled.
Feb 9 22:49:51.550510 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 9 22:49:51.550515 kernel: Rude variant of Tasks RCU enabled.
Feb 9 22:49:51.550520 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 22:49:51.550525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 22:49:51.550530 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 9 22:49:51.550535 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 9 22:49:51.550540 kernel: random: crng init done
Feb 9 22:49:51.550545 kernel: Console: colour dummy device 80x25
Feb 9 22:49:51.550550 kernel: printk: console [tty0] enabled
Feb 9 22:49:51.550555 kernel: printk: console [ttyS1] enabled
Feb 9 22:49:51.550560 kernel: ACPI: Core revision 20210730
Feb 9 22:49:51.550565 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 9 22:49:51.550570 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 22:49:51.550576 kernel: DMAR: Host address width 39
Feb 9 22:49:51.550580 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 9 22:49:51.550586 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 9 22:49:51.550591 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 9 22:49:51.550596 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 9 22:49:51.550601 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 9 22:49:51.550605 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 9 22:49:51.550610 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 9 22:49:51.550615 kernel: x2apic enabled
Feb 9 22:49:51.550621 kernel: Switched APIC routing to cluster x2apic.
Feb 9 22:49:51.550626 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 9 22:49:51.550631 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 9 22:49:51.550636 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 9 22:49:51.550641 kernel: process: using mwait in idle threads
Feb 9 22:49:51.550646 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 22:49:51.550651 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 22:49:51.550655 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 22:49:51.550660 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 22:49:51.550666 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 22:49:51.550671 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 22:49:51.550676 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 22:49:51.550681 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 22:49:51.550685 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 22:49:51.550690 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 22:49:51.550695 kernel: TAA: Mitigation: TSX disabled
Feb 9 22:49:51.550700 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 9 22:49:51.550705 kernel: SRBDS: Mitigation: Microcode
Feb 9 22:49:51.550710 kernel: GDS: Vulnerable: No microcode
Feb 9 22:49:51.550714 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 22:49:51.550720 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 22:49:51.550725 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 22:49:51.550730 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 22:49:51.550735 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 22:49:51.550740 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 22:49:51.550744 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 22:49:51.550749 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 22:49:51.550754 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 9 22:49:51.550759 kernel: Freeing SMP alternatives memory: 32K
Feb 9 22:49:51.550764 kernel: pid_max: default: 32768 minimum: 301
Feb 9 22:49:51.550769 kernel: LSM: Security Framework initializing
Feb 9 22:49:51.550773 kernel: SELinux: Initializing.
Feb 9 22:49:51.550779 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 22:49:51.550784 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 22:49:51.550789 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 9 22:49:51.550794 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 22:49:51.550798 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 9 22:49:51.550803 kernel: ... version: 4
Feb 9 22:49:51.550808 kernel: ... bit width: 48
Feb 9 22:49:51.550813 kernel: ... generic registers: 4
Feb 9 22:49:51.550818 kernel: ... value mask: 0000ffffffffffff
Feb 9 22:49:51.550823 kernel: ... max period: 00007fffffffffff
Feb 9 22:49:51.550829 kernel: ... fixed-purpose events: 3
Feb 9 22:49:51.550834 kernel: ... event mask: 000000070000000f
Feb 9 22:49:51.550839 kernel: signal: max sigframe size: 2032
Feb 9 22:49:51.550844 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 22:49:51.550849 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 9 22:49:51.550853 kernel: smp: Bringing up secondary CPUs ...
Feb 9 22:49:51.550858 kernel: x86: Booting SMP configuration:
Feb 9 22:49:51.550863 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 9 22:49:51.550868 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 22:49:51.550874 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 9 22:49:51.550879 kernel: smp: Brought up 1 node, 16 CPUs
Feb 9 22:49:51.550884 kernel: smpboot: Max logical packages: 1
Feb 9 22:49:51.550889 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 9 22:49:51.550894 kernel: devtmpfs: initialized
Feb 9 22:49:51.550899 kernel: x86/mm: Memory block size: 128MB
Feb 9 22:49:51.550904 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8258a000-0x8258afff] (4096 bytes)
Feb 9 22:49:51.550909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 9 22:49:51.550915 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 22:49:51.550920 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 9 22:49:51.550925 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 22:49:51.550930 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 22:49:51.550935 kernel: audit: initializing netlink subsys (disabled)
Feb 9 22:49:51.550940 kernel: audit: type=2000 audit(1707518986.040:1): state=initialized audit_enabled=0 res=1
Feb 9 22:49:51.550944 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 22:49:51.550949 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 22:49:51.550954 kernel: cpuidle: using governor menu
Feb 9 22:49:51.550960 kernel: ACPI: bus type PCI registered
Feb 9 22:49:51.550965 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 22:49:51.550970 kernel: dca service started, version 1.12.1
Feb 9 22:49:51.550975 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 9 22:49:51.550980 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 9 22:49:51.550985 kernel: PCI: Using configuration type 1 for base access
Feb 9 22:49:51.550990 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 9 22:49:51.550995 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 22:49:51.551000 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 22:49:51.551005 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 22:49:51.551010 kernel: ACPI: Added _OSI(Module Device)
Feb 9 22:49:51.551015 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 22:49:51.551020 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 22:49:51.551025 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 22:49:51.551030 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 22:49:51.551035 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 22:49:51.551040 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 22:49:51.551045 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 9 22:49:51.551051 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 22:49:51.551056 kernel: ACPI: SSDT 0xFFFF8C5740212100 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 9 22:49:51.551061 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 9 22:49:51.551066 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 22:49:51.551071 kernel: ACPI: SSDT 0xFFFF8C5741AE7C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 9 22:49:51.551076 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 22:49:51.551080 kernel: ACPI: SSDT 0xFFFF8C5741A5C800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 9 22:49:51.551085 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 22:49:51.551090 kernel: ACPI: SSDT 0xFFFF8C5741A5B000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 9 22:49:51.551095 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 22:49:51.551100 kernel: ACPI: SSDT 0xFFFF8C5740149000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 9 22:49:51.551105 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 22:49:51.551110 kernel: ACPI: SSDT 0xFFFF8C5741AE5C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 9 22:49:51.551115 kernel: ACPI: Interpreter enabled
Feb 9 22:49:51.551120 kernel: ACPI: PM: (supports S0 S5)
Feb 9 22:49:51.551125 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 22:49:51.551130 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 9 22:49:51.551135 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 9 22:49:51.551140 kernel: HEST: Table parsing has been initialized.
Feb 9 22:49:51.551146 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 9 22:49:51.551151 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 22:49:51.551156 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 9 22:49:51.551161 kernel: ACPI: PM: Power Resource [USBC]
Feb 9 22:49:51.551166 kernel: ACPI: PM: Power Resource [V0PR]
Feb 9 22:49:51.551170 kernel: ACPI: PM: Power Resource [V1PR]
Feb 9 22:49:51.551175 kernel: ACPI: PM: Power Resource [V2PR]
Feb 9 22:49:51.551180 kernel: ACPI: PM: Power Resource [WRST]
Feb 9 22:49:51.551185 kernel: ACPI: PM: Power Resource [FN00]
Feb 9 22:49:51.551191 kernel: ACPI: PM: Power Resource [FN01]
Feb 9 22:49:51.551196 kernel: ACPI: PM: Power Resource [FN02]
Feb 9 22:49:51.551201 kernel: ACPI: PM: Power Resource [FN03]
Feb 9 22:49:51.551205 kernel: ACPI: PM: Power Resource [FN04]
Feb 9 22:49:51.551210 kernel: ACPI: PM: Power Resource [PIN]
Feb 9 22:49:51.551215 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 9 22:49:51.551278 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 22:49:51.551322 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 9 22:49:51.551365 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 9 22:49:51.551372 kernel: PCI host bridge to bus 0000:00
Feb 9 22:49:51.551434 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 22:49:51.551488 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 22:49:51.551524 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 22:49:51.551560 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 9 22:49:51.551596 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 9 22:49:51.551633 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 9 22:49:51.551683 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 9 22:49:51.551733 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 9 22:49:51.551777 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 9 22:49:51.551822 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 9 22:49:51.551865 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 9 22:49:51.551911 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 9 22:49:51.551953 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 9 22:49:51.551998 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 9 22:49:51.552039 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 9 22:49:51.552082 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 9 22:49:51.552128 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 9 22:49:51.552172 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 9 22:49:51.552212 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 9 22:49:51.552257 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 9 22:49:51.552297 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 22:49:51.552343 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 9 22:49:51.552385 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 22:49:51.552454 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 9 22:49:51.552512 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 9 22:49:51.552553 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 9 22:49:51.552597 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 9 22:49:51.552637 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 9 22:49:51.552678 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 9 22:49:51.552722 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 9 22:49:51.552764 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 9 22:49:51.552804 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 9 22:49:51.552847 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 9 22:49:51.552889 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 9 22:49:51.552928 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 9 22:49:51.552968 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 9 22:49:51.553008 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 9 22:49:51.553055 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 9 22:49:51.553097 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 9 22:49:51.553138 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 9 22:49:51.553183 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 9 22:49:51.553225 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 9 22:49:51.553270 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 9 22:49:51.553312 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 9 22:49:51.553361 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 9 22:49:51.553402 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 9 22:49:51.553484 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 9 22:49:51.553527 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 9 22:49:51.553572 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 9 22:49:51.553616 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 9 22:49:51.553660 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 9 22:49:51.553702 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 22:49:51.553749 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 9 22:49:51.553795 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 9 22:49:51.553837 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 9 22:49:51.553879 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 9 22:49:51.553927 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 9 22:49:51.553968 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 9 22:49:51.554015 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 9 22:49:51.554060 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 9 22:49:51.554104 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 9 22:49:51.554146 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 9 22:49:51.554189 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 22:49:51.554231 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 22:49:51.554278 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 9 22:49:51.554321 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 9 22:49:51.554365 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 9 22:49:51.554408 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 9 22:49:51.554500 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 22:49:51.554544 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 22:49:51.554585 kernel: pci 0000:00:01.0: PCI
bridge to [bus 01] Feb 9 22:49:51.554627 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 9 22:49:51.554668 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 22:49:51.554710 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 9 22:49:51.554760 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 9 22:49:51.554803 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 9 22:49:51.554845 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 9 22:49:51.554888 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 9 22:49:51.554930 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 9 22:49:51.554972 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 9 22:49:51.555013 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 22:49:51.555056 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 9 22:49:51.555104 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 9 22:49:51.555148 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 9 22:49:51.555225 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 9 22:49:51.555290 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 9 22:49:51.555333 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 9 22:49:51.555374 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 9 22:49:51.555433 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 22:49:51.555496 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 9 22:49:51.555538 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 9 22:49:51.555586 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 9 22:49:51.555629 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 9 22:49:51.555672 kernel: pci 0000:06:00.0: supports D1 D2 Feb 9 22:49:51.555715 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 22:49:51.555757 
kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 9 22:49:51.555798 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 9 22:49:51.555843 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 9 22:49:51.555890 kernel: pci_bus 0000:07: extended config space not accessible Feb 9 22:49:51.555939 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 9 22:49:51.555985 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 9 22:49:51.556031 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 9 22:49:51.556075 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 9 22:49:51.556120 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 22:49:51.556167 kernel: pci 0000:07:00.0: supports D1 D2 Feb 9 22:49:51.556212 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 22:49:51.556257 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 9 22:49:51.556299 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 9 22:49:51.556343 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 9 22:49:51.556351 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 9 22:49:51.556356 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 9 22:49:51.556363 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 9 22:49:51.556368 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 9 22:49:51.556374 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 9 22:49:51.556379 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 9 22:49:51.556384 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 9 22:49:51.556389 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 9 22:49:51.556395 kernel: iommu: Default domain type: Translated Feb 9 22:49:51.556400 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 22:49:51.556482 kernel: pci 0000:07:00.0: 
vgaarb: setting as boot VGA device Feb 9 22:49:51.556530 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 22:49:51.556575 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 9 22:49:51.556583 kernel: vgaarb: loaded Feb 9 22:49:51.556588 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 22:49:51.556593 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 22:49:51.556599 kernel: PTP clock support registered Feb 9 22:49:51.556604 kernel: PCI: Using ACPI for IRQ routing Feb 9 22:49:51.556610 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 22:49:51.556615 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 9 22:49:51.556621 kernel: e820: reserve RAM buffer [mem 0x8258a000-0x83ffffff] Feb 9 22:49:51.556626 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 9 22:49:51.556631 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 9 22:49:51.556636 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 9 22:49:51.556642 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 9 22:49:51.556647 kernel: clocksource: Switched to clocksource tsc-early Feb 9 22:49:51.556652 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 22:49:51.556657 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 22:49:51.556663 kernel: pnp: PnP ACPI init Feb 9 22:49:51.556707 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 9 22:49:51.556748 kernel: pnp 00:02: [dma 0 disabled] Feb 9 22:49:51.556789 kernel: pnp 00:03: [dma 0 disabled] Feb 9 22:49:51.556830 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 9 22:49:51.556869 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 9 22:49:51.556908 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 9 22:49:51.556950 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 9 22:49:51.556988 kernel: system 00:06: [mem 
0xfed18000-0xfed18fff] has been reserved Feb 9 22:49:51.557024 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 9 22:49:51.557063 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 9 22:49:51.557099 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 9 22:49:51.557135 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 9 22:49:51.557174 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 9 22:49:51.557213 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 9 22:49:51.557252 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 9 22:49:51.557289 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 9 22:49:51.557327 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 9 22:49:51.557364 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 9 22:49:51.557401 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 9 22:49:51.557462 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 9 22:49:51.557521 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 9 22:49:51.557561 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 9 22:49:51.557569 kernel: pnp: PnP ACPI: found 10 devices Feb 9 22:49:51.557575 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 22:49:51.557580 kernel: NET: Registered PF_INET protocol family Feb 9 22:49:51.557586 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 22:49:51.557591 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 22:49:51.557596 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 22:49:51.557603 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 22:49:51.557608 
kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 22:49:51.557614 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 9 22:49:51.557619 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 22:49:51.557624 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 22:49:51.557629 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 22:49:51.557635 kernel: NET: Registered PF_XDP protocol family Feb 9 22:49:51.557676 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 9 22:49:51.557720 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 9 22:49:51.557761 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 9 22:49:51.557805 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 22:49:51.557848 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 22:49:51.557891 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 22:49:51.557933 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 22:49:51.557975 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 22:49:51.558016 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 9 22:49:51.558060 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 22:49:51.558101 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 9 22:49:51.558141 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 9 22:49:51.558183 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 22:49:51.558224 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 9 22:49:51.558268 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 9 22:49:51.558310 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 22:49:51.558352 kernel: pci 0000:00:1b.5: bridge 
window [mem 0x95300000-0x953fffff] Feb 9 22:49:51.558393 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 9 22:49:51.558480 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 9 22:49:51.558523 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 9 22:49:51.558566 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 9 22:49:51.558608 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 9 22:49:51.558649 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 9 22:49:51.558693 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 9 22:49:51.558730 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 9 22:49:51.558766 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 22:49:51.558803 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 22:49:51.558838 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 22:49:51.558874 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 9 22:49:51.558909 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 9 22:49:51.558952 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 9 22:49:51.558993 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 22:49:51.559037 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 9 22:49:51.559076 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 9 22:49:51.559117 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 9 22:49:51.559155 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 9 22:49:51.559199 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 9 22:49:51.559239 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 9 22:49:51.559279 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 9 22:49:51.559319 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 9 
22:49:51.559327 kernel: PCI: CLS 64 bytes, default 64 Feb 9 22:49:51.559332 kernel: DMAR: No ATSR found Feb 9 22:49:51.559338 kernel: DMAR: No SATC found Feb 9 22:49:51.559343 kernel: DMAR: dmar0: Using Queued invalidation Feb 9 22:49:51.559383 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 9 22:49:51.559451 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 9 22:49:51.559512 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 9 22:49:51.559554 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 9 22:49:51.559596 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 9 22:49:51.559636 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 9 22:49:51.559678 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 9 22:49:51.559718 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 9 22:49:51.559759 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 9 22:49:51.559801 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 9 22:49:51.559842 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 9 22:49:51.559884 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 9 22:49:51.559924 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 9 22:49:51.559966 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 9 22:49:51.560006 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 9 22:49:51.560048 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 9 22:49:51.560088 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 9 22:49:51.560131 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 9 22:49:51.560172 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 9 22:49:51.560212 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 9 22:49:51.560255 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 9 22:49:51.560297 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 9 22:49:51.560341 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 9 22:49:51.560383 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 9 22:49:51.560455 kernel: pci 
0000:04:00.0: Adding to iommu group 16 Feb 9 22:49:51.560521 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 9 22:49:51.560566 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 9 22:49:51.560574 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 9 22:49:51.560579 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 22:49:51.560585 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 9 22:49:51.560590 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 9 22:49:51.560595 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 9 22:49:51.560601 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 9 22:49:51.560607 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 9 22:49:51.560650 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 9 22:49:51.560659 kernel: Initialise system trusted keyrings Feb 9 22:49:51.560664 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 9 22:49:51.560669 kernel: Key type asymmetric registered Feb 9 22:49:51.560675 kernel: Asymmetric key parser 'x509' registered Feb 9 22:49:51.560680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 22:49:51.560685 kernel: io scheduler mq-deadline registered Feb 9 22:49:51.560692 kernel: io scheduler kyber registered Feb 9 22:49:51.560697 kernel: io scheduler bfq registered Feb 9 22:49:51.560738 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 9 22:49:51.560780 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 9 22:49:51.560821 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 9 22:49:51.560863 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 9 22:49:51.560904 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 9 22:49:51.560945 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 9 22:49:51.560993 kernel: thermal 
LNXTHERM:00: registered as thermal_zone0 Feb 9 22:49:51.561001 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 9 22:49:51.561007 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 9 22:49:51.561012 kernel: pstore: Registered erst as persistent store backend Feb 9 22:49:51.561018 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 22:49:51.561023 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 22:49:51.561028 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 22:49:51.561034 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 22:49:51.561040 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 9 22:49:51.561084 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 9 22:49:51.561093 kernel: i8042: PNP: No PS/2 controller found. Feb 9 22:49:51.561129 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 9 22:49:51.561168 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 9 22:49:51.561206 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T22:49:50 UTC (1707518990) Feb 9 22:49:51.561243 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 9 22:49:51.561251 kernel: fail to initialize ptp_kvm Feb 9 22:49:51.561257 kernel: intel_pstate: Intel P-state driver initializing Feb 9 22:49:51.561263 kernel: intel_pstate: Disabling energy efficiency optimization Feb 9 22:49:51.561268 kernel: intel_pstate: HWP enabled Feb 9 22:49:51.561273 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 9 22:49:51.561278 kernel: vesafb: scrolling: redraw Feb 9 22:49:51.561284 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 9 22:49:51.561289 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000e39239a4, using 768k, total 768k Feb 9 22:49:51.561295 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 22:49:51.561300 kernel: fb0: VESA VGA frame buffer device Feb 9 
22:49:51.561306 kernel: NET: Registered PF_INET6 protocol family Feb 9 22:49:51.561311 kernel: Segment Routing with IPv6 Feb 9 22:49:51.561316 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 22:49:51.561322 kernel: NET: Registered PF_PACKET protocol family Feb 9 22:49:51.561327 kernel: Key type dns_resolver registered Feb 9 22:49:51.561332 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 9 22:49:51.561337 kernel: microcode: Microcode Update Driver: v2.2. Feb 9 22:49:51.561343 kernel: IPI shorthand broadcast: enabled Feb 9 22:49:51.561348 kernel: sched_clock: Marking stable (1677643524, 1339689542)->(4438061347, -1420728281) Feb 9 22:49:51.561354 kernel: registered taskstats version 1 Feb 9 22:49:51.561359 kernel: Loading compiled-in X.509 certificates Feb 9 22:49:51.561365 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 22:49:51.561370 kernel: Key type .fscrypt registered Feb 9 22:49:51.561375 kernel: Key type fscrypt-provisioning registered Feb 9 22:49:51.561380 kernel: pstore: Using crash dump compression: deflate Feb 9 22:49:51.561385 kernel: ima: Allocated hash algorithm: sha1 Feb 9 22:49:51.561391 kernel: ima: No architecture policies found Feb 9 22:49:51.561396 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 22:49:51.561402 kernel: Write protecting the kernel read-only data: 28672k Feb 9 22:49:51.561407 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 22:49:51.561436 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 22:49:51.561442 kernel: Run /init as init process Feb 9 22:49:51.561447 kernel: with arguments: Feb 9 22:49:51.561452 kernel: /init Feb 9 22:49:51.561458 kernel: with environment: Feb 9 22:49:51.561482 kernel: HOME=/ Feb 9 22:49:51.561487 kernel: TERM=linux Feb 9 22:49:51.561493 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 22:49:51.561500 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT 
+SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 22:49:51.561507 systemd[1]: Detected architecture x86-64. Feb 9 22:49:51.561512 systemd[1]: Running in initrd. Feb 9 22:49:51.561518 systemd[1]: No hostname configured, using default hostname. Feb 9 22:49:51.561523 systemd[1]: Hostname set to . Feb 9 22:49:51.561529 systemd[1]: Initializing machine ID from random generator. Feb 9 22:49:51.561535 systemd[1]: Queued start job for default target initrd.target. Feb 9 22:49:51.561541 systemd[1]: Started systemd-ask-password-console.path. Feb 9 22:49:51.561546 systemd[1]: Reached target cryptsetup.target. Feb 9 22:49:51.561552 systemd[1]: Reached target paths.target. Feb 9 22:49:51.561557 systemd[1]: Reached target slices.target. Feb 9 22:49:51.561563 systemd[1]: Reached target swap.target. Feb 9 22:49:51.561568 systemd[1]: Reached target timers.target. Feb 9 22:49:51.561573 systemd[1]: Listening on iscsid.socket. Feb 9 22:49:51.561580 systemd[1]: Listening on iscsiuio.socket. Feb 9 22:49:51.561586 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 22:49:51.561591 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 22:49:51.561597 systemd[1]: Listening on systemd-journald.socket. Feb 9 22:49:51.561602 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Feb 9 22:49:51.561608 systemd[1]: Listening on systemd-networkd.socket. Feb 9 22:49:51.561613 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Feb 9 22:49:51.561619 kernel: clocksource: Switched to clocksource tsc Feb 9 22:49:51.561625 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 22:49:51.561630 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Feb 9 22:49:51.561636 systemd[1]: Reached target sockets.target. Feb 9 22:49:51.561642 systemd[1]: Starting kmod-static-nodes.service... Feb 9 22:49:51.561647 systemd[1]: Finished network-cleanup.service. Feb 9 22:49:51.561653 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 22:49:51.561658 systemd[1]: Starting systemd-journald.service... Feb 9 22:49:51.561664 systemd[1]: Starting systemd-modules-load.service... Feb 9 22:49:51.561671 systemd-journald[267]: Journal started Feb 9 22:49:51.561697 systemd-journald[267]: Runtime Journal (/run/log/journal/cefa084eafd043f9aac9c60605244eff) is 8.0M, max 640.1M, 632.1M free. Feb 9 22:49:51.564002 systemd-modules-load[268]: Inserted module 'overlay' Feb 9 22:49:51.570000 audit: BPF prog-id=6 op=LOAD Feb 9 22:49:51.588451 kernel: audit: type=1334 audit(1707518991.570:2): prog-id=6 op=LOAD Feb 9 22:49:51.588466 systemd[1]: Starting systemd-resolved.service... Feb 9 22:49:51.637457 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 22:49:51.637472 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 22:49:51.669448 kernel: Bridge firewalling registered Feb 9 22:49:51.669464 systemd[1]: Started systemd-journald.service. Feb 9 22:49:51.683602 systemd-modules-load[268]: Inserted module 'br_netfilter' Feb 9 22:49:51.732368 kernel: audit: type=1130 audit(1707518991.691:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:51.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 22:49:51.689998 systemd-resolved[270]: Positive Trust Anchors: Feb 9 22:49:51.808364 kernel: SCSI subsystem initialized Feb 9 22:49:51.808377 kernel: audit: type=1130 audit(1707518991.744:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:51.808387 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 22:49:51.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:51.690004 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 22:49:51.909265 kernel: device-mapper: uevent: version 1.0.3 Feb 9 22:49:51.909300 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 22:49:51.909308 kernel: audit: type=1130 audit(1707518991.866:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:51.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 22:49:51.690023 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 22:49:51.983660 kernel: audit: type=1130 audit(1707518991.917:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:51.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:51.691546 systemd-resolved[270]: Defaulting to hostname 'linux'. Feb 9 22:49:51.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:51.691683 systemd[1]: Finished kmod-static-nodes.service. Feb 9 22:49:52.092060 kernel: audit: type=1130 audit(1707518991.992:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:52.092072 kernel: audit: type=1130 audit(1707518992.045:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 22:49:52.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:51.744544 systemd[1]: Started systemd-resolved.service. Feb 9 22:49:51.866596 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 22:49:51.909581 systemd-modules-load[268]: Inserted module 'dm_multipath' Feb 9 22:49:51.917715 systemd[1]: Finished systemd-modules-load.service. Feb 9 22:49:51.992754 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 22:49:52.045700 systemd[1]: Reached target nss-lookup.target. Feb 9 22:49:52.101000 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 22:49:52.120929 systemd[1]: Starting systemd-sysctl.service... Feb 9 22:49:52.121217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 22:49:52.124015 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 22:49:52.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:52.124758 systemd[1]: Finished systemd-sysctl.service. Feb 9 22:49:52.173547 kernel: audit: type=1130 audit(1707518992.123:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:52.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:52.186756 systemd[1]: Finished dracut-cmdline-ask.service. 
Feb 9 22:49:52.249557 kernel: audit: type=1130 audit(1707518992.186:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:52.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:52.236086 systemd[1]: Starting dracut-cmdline.service... Feb 9 22:49:52.264518 dracut-cmdline[293]: dracut-dracut-053 Feb 9 22:49:52.264518 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 22:49:52.264518 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 22:49:52.349503 kernel: Loading iSCSI transport class v2.0-870. Feb 9 22:49:52.349518 kernel: iscsi: registered transport (tcp) Feb 9 22:49:52.349526 kernel: iscsi: registered transport (qla4xxx) Feb 9 22:49:52.373992 kernel: QLogic iSCSI HBA Driver Feb 9 22:49:52.390859 systemd[1]: Finished dracut-cmdline.service. Feb 9 22:49:52.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:49:52.400111 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 22:49:52.458469 kernel: raid6: avx2x4 gen() 35945 MB/s
Feb 9 22:49:52.493470 kernel: raid6: avx2x4 xor() 17465 MB/s
Feb 9 22:49:52.528469 kernel: raid6: avx2x2 gen() 54916 MB/s
Feb 9 22:49:52.563470 kernel: raid6: avx2x2 xor() 32779 MB/s
Feb 9 22:49:52.598443 kernel: raid6: avx2x1 gen() 46186 MB/s
Feb 9 22:49:52.632463 kernel: raid6: avx2x1 xor() 28483 MB/s
Feb 9 22:49:52.666453 kernel: raid6: sse2x4 gen() 21782 MB/s
Feb 9 22:49:52.700475 kernel: raid6: sse2x4 xor() 11958 MB/s
Feb 9 22:49:52.734445 kernel: raid6: sse2x2 gen() 22108 MB/s
Feb 9 22:49:52.768473 kernel: raid6: sse2x2 xor() 13717 MB/s
Feb 9 22:49:52.802480 kernel: raid6: sse2x1 gen() 18660 MB/s
Feb 9 22:49:52.853973 kernel: raid6: sse2x1 xor() 9101 MB/s
Feb 9 22:49:52.853988 kernel: raid6: using algorithm avx2x2 gen() 54916 MB/s
Feb 9 22:49:52.853996 kernel: raid6: .... xor() 32779 MB/s, rmw enabled
Feb 9 22:49:52.872021 kernel: raid6: using avx2x2 recovery algorithm
Feb 9 22:49:52.917416 kernel: xor: automatically using best checksumming function avx
Feb 9 22:49:52.996450 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 22:49:53.001917 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 22:49:53.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:53.011000 audit: BPF prog-id=7 op=LOAD
Feb 9 22:49:53.011000 audit: BPF prog-id=8 op=LOAD
Feb 9 22:49:53.012419 systemd[1]: Starting systemd-udevd.service...
Feb 9 22:49:53.021003 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Feb 9 22:49:53.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:53.025752 systemd[1]: Started systemd-udevd.service.
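The `raid6:` lines above show the kernel benchmarking each gen()/xor() routine and then keeping the fastest gen() implementation ("using algorithm avx2x2 gen() 54916 MB/s"). The same pick-the-maximum selection can be sketched over the logged numbers (this is an illustration of the selection step only, not kernel code):

```shell
# Select the fastest gen() routine from the benchmark figures logged above.
printf '%s %s\n' \
  avx2x4 35945  avx2x2 54916  avx2x1 46186 \
  sse2x4 21782  sse2x2 22108  sse2x1 18660 |
sort -k2,2n | tail -n 1
# prints: avx2x2 54916
```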
Feb 9 22:49:53.065547 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Feb 9 22:49:53.042007 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 22:49:53.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:53.067374 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 22:49:53.082720 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 22:49:53.161533 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 22:49:53.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:53.188420 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 22:49:53.190422 kernel: libata version 3.00 loaded.
Feb 9 22:49:53.207417 kernel: ACPI: bus type USB registered
Feb 9 22:49:53.243217 kernel: usbcore: registered new interface driver usbfs
Feb 9 22:49:53.243256 kernel: usbcore: registered new interface driver hub
Feb 9 22:49:53.243270 kernel: usbcore: registered new device driver usb
Feb 9 22:49:53.261416 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 22:49:53.261440 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Feb 9 22:49:53.311952 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Feb 9 22:49:53.312416 kernel: AES CTR mode by8 optimization enabled
Feb 9 22:49:53.312433 kernel: ahci 0000:00:17.0: version 3.0
Feb 9 22:49:53.339430 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 9 22:49:53.339796 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Feb 9 22:49:53.340014 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Feb 9 22:49:53.340243 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Feb 9 22:49:53.353420 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Feb 9 22:49:53.353538 kernel: pps pps0: new PPS source ptp0
Feb 9 22:49:53.353600 kernel: igb 0000:03:00.0: added PHC on eth0
Feb 9 22:49:53.353663 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 9 22:49:53.353721 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b6
Feb 9 22:49:53.353779 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Feb 9 22:49:53.353839 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 9 22:49:53.363414 kernel: scsi host0: ahci
Feb 9 22:49:53.363549 kernel: scsi host1: ahci
Feb 9 22:49:53.363674 kernel: scsi host2: ahci
Feb 9 22:49:53.363832 kernel: scsi host3: ahci
Feb 9 22:49:53.363957 kernel: scsi host4: ahci
Feb 9 22:49:53.364057 kernel: scsi host5: ahci
Feb 9 22:49:53.364117 kernel: scsi host6: ahci
Feb 9 22:49:53.364199 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132
Feb 9 22:49:53.364209 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132
Feb 9 22:49:53.364217 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132
Feb 9 22:49:53.364225 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132
Feb 9 22:49:53.364232 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132
Feb 9 22:49:53.364241 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132
Feb 9 22:49:53.364250 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132
Feb 9 22:49:53.384316 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 9 22:49:53.404417 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Feb 9 22:49:53.436917 kernel: pps pps1: new PPS source ptp1
Feb 9 22:49:53.437014 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 9 22:49:53.437093 kernel: igb 0000:04:00.0: added PHC on eth1
Feb 9 22:49:53.468396 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Feb 9 22:49:53.468475 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 9 22:49:53.500078 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Feb 9 22:49:53.500188 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b7
Feb 9 22:49:53.528857 kernel: hub 1-0:1.0: USB hub found
Feb 9 22:49:53.540248 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Feb 9 22:49:53.540348 kernel: hub 1-0:1.0: 16 ports detected
Feb 9 22:49:53.551129 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 9 22:49:53.571522 kernel: hub 2-0:1.0: USB hub found
Feb 9 22:49:53.672486 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 9 22:49:53.672561 kernel: hub 2-0:1.0: 10 ports detected
Feb 9 22:49:53.672620 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 9 22:49:53.672629 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 9 22:49:53.673421 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 9 22:49:53.673438 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 9 22:49:53.673446 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 9 22:49:53.673452 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 9 22:49:53.673462 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 9 22:49:53.674415 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 9 22:49:53.677471 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 9 22:49:53.677486 kernel: ata2.00: Features: NCQ-prio
Feb 9 22:49:53.677494 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 9 22:49:53.677501 kernel: ata1.00: Features: NCQ-prio
Feb 9 22:49:53.681470 kernel: ata2.00: configured for UDMA/133
Feb 9 22:49:53.682453 kernel: ata1.00: configured for UDMA/133
Feb 9 22:49:53.682468 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 9 22:49:53.683446 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 9 22:49:53.685416 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Feb 9 22:49:53.700416 kernel: usb: port power management may be unreliable
Feb 9 22:49:53.700432 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 22:49:53.727416 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Feb 9 22:49:53.901463 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Feb 9 22:49:54.068490 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
Feb 9 22:49:54.178469 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 22:49:54.193141 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Feb 9 22:49:54.193226 kernel: ata1.00: Enabling discard_zeroes_data
Feb 9 22:49:54.193235 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 9 22:49:54.193309 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 9 22:49:54.193371 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
Feb 9 22:49:54.193434 kernel: sd 0:0:0:0: [sdb] Write Protect is off
Feb 9 22:49:54.193488 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Feb 9 22:49:54.193544 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 9 22:49:54.193598 kernel: ata1.00: Enabling discard_zeroes_data
Feb 9 22:49:54.205464 kernel: hub 1-14:1.0: USB hub found
Feb 9 22:49:54.205546 kernel: hub 1-14:1.0: 4 ports detected
Feb 9 22:49:54.238849 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 22:49:54.238864 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Feb 9 22:49:54.238933 kernel: GPT:9289727 != 937703087
Feb 9 22:49:54.252520 kernel: sd 1:0:0:0: [sda] Write Protect is off
Feb 9 22:49:54.269280 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 22:49:54.269295 kernel: GPT:9289727 != 937703087
Feb 9 22:49:54.269303 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 22:49:54.269309 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 9 22:49:54.286054 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Feb 9 22:49:54.286133 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 9 22:49:54.306459 kernel: ata1.00: Enabling discard_zeroes_data
Feb 9 22:49:54.306474 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Feb 9 22:49:54.306545 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 9 22:49:54.490790 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Feb 9 22:49:54.490819 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
Feb 9 22:49:54.491458 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 22:49:54.491475 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 22:49:54.491482 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Feb 9 22:49:54.587458 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 9 22:49:54.626699 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 22:49:54.695676 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (652)
Feb 9 22:49:54.695691 kernel: port_module: 9 callbacks suppressed
Feb 9 22:49:54.695699 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Feb 9 22:49:54.713416 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 22:49:54.713431 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 22:49:54.730052 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 22:49:54.762448 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 22:49:54.801574 kernel: usbcore: registered new interface driver usbhid
Feb 9 22:49:54.801584 kernel: usbhid: USB HID core driver
Feb 9 22:49:54.801592 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Feb 9 22:49:54.769186 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 22:49:54.816615 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 22:49:54.845445 kernel: ata1.00: Enabling discard_zeroes_data
Feb 9 22:49:54.845478 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 9 22:49:54.820079 systemd[1]: Starting disk-uuid.service...
Feb 9 22:49:55.013691 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Feb 9 22:49:55.013828 kernel: ata1.00: Enabling discard_zeroes_data
Feb 9 22:49:55.013843 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Feb 9 22:49:55.013855 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 9 22:49:55.013866 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
Feb 9 22:49:55.013947 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Feb 9 22:49:55.014053 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Feb 9 22:49:55.014199 disk-uuid[683]: Primary Header is updated.
Feb 9 22:49:55.014199 disk-uuid[683]: Secondary Entries is updated.
Feb 9 22:49:55.014199 disk-uuid[683]: Secondary Header is updated.
Feb 9 22:49:55.074498 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Feb 9 22:49:55.875531 kernel: ata1.00: Enabling discard_zeroes_data
Feb 9 22:49:55.895197 disk-uuid[684]: The operation has completed successfully.
Feb 9 22:49:55.903619 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 9 22:49:55.930908 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 22:49:56.028300 kernel: audit: type=1130 audit(1707518995.937:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.028315 kernel: audit: type=1131 audit(1707518995.937:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:55.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:55.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:55.930948 systemd[1]: Finished disk-uuid.service.
Feb 9 22:49:56.058510 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 9 22:49:55.941065 systemd[1]: Starting verity-setup.service...
Feb 9 22:49:56.106240 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 22:49:56.117506 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 22:49:56.129042 systemd[1]: Finished verity-setup.service.
Feb 9 22:49:56.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.200419 kernel: audit: type=1130 audit(1707518996.143:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.258425 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 22:49:56.258659 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 22:49:56.265731 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 22:49:56.266125 systemd[1]: Starting ignition-setup.service...
Feb 9 22:49:56.358434 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 22:49:56.358477 kernel: BTRFS info (device sdb6): using free space tree
Feb 9 22:49:56.358499 kernel: BTRFS info (device sdb6): has skinny extents
Feb 9 22:49:56.358506 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 9 22:49:56.304579 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 22:49:56.366874 systemd[1]: Finished ignition-setup.service.
Feb 9 22:49:56.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.383762 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 22:49:56.490514 kernel: audit: type=1130 audit(1707518996.383:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.490530 kernel: audit: type=1130 audit(1707518996.440:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.441050 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 22:49:56.521685 kernel: audit: type=1334 audit(1707518996.498:24): prog-id=9 op=LOAD
Feb 9 22:49:56.498000 audit: BPF prog-id=9 op=LOAD
Feb 9 22:49:56.499288 systemd[1]: Starting systemd-networkd.service...
Feb 9 22:49:56.536506 systemd-networkd[878]: lo: Link UP
Feb 9 22:49:56.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.536509 systemd-networkd[878]: lo: Gained carrier
Feb 9 22:49:56.611547 kernel: audit: type=1130 audit(1707518996.545:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.559251 ignition[867]: Ignition 2.14.0
Feb 9 22:49:56.536790 systemd-networkd[878]: Enumeration completed
Feb 9 22:49:56.559255 ignition[867]: Stage: fetch-offline
Feb 9 22:49:56.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.536858 systemd[1]: Started systemd-networkd.service.
Feb 9 22:49:56.773496 kernel: audit: type=1130 audit(1707518996.638:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.773509 kernel: audit: type=1130 audit(1707518996.698:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.773517 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 9 22:49:56.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.559280 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 22:49:56.808517 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready
Feb 9 22:49:56.537495 systemd-networkd[878]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 22:49:56.559293 ignition[867]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 22:49:56.545749 systemd[1]: Reached target network.target.
Feb 9 22:49:56.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.567387 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 22:49:56.858546 iscsid[906]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 22:49:56.858546 iscsid[906]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 22:49:56.858546 iscsid[906]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 22:49:56.858546 iscsid[906]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 22:49:56.858546 iscsid[906]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 22:49:56.858546 iscsid[906]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 22:49:56.858546 iscsid[906]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 22:49:57.027589 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Feb 9 22:49:56.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:57.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:49:56.590150 unknown[867]: fetched base config from "system"
Feb 9 22:49:56.567492 ignition[867]: parsed url from cmdline: ""
Feb 9 22:49:56.590154 unknown[867]: fetched user config from "system"
Feb 9 22:49:56.567494 ignition[867]: no config URL provided
Feb 9 22:49:56.605117 systemd[1]: Starting iscsiuio.service...
Feb 9 22:49:56.567497 ignition[867]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 22:49:56.619684 systemd[1]: Started iscsiuio.service.
Feb 9 22:49:56.567524 ignition[867]: parsing config with SHA512: 140cb43be0e2af67a5148eaf1c0f170bb4d9488f3d119d5f091a20da4ed4d012b2ad391135a7ee527c7f43cb9bbb2675a56b1b6ea97087c440f48629ed7afb85
Feb 9 22:49:56.638685 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 22:49:56.590548 ignition[867]: fetch-offline: fetch-offline passed
Feb 9 22:49:56.698639 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 22:49:56.590551 ignition[867]: POST message to Packet Timeline
Feb 9 22:49:56.699074 systemd[1]: Starting ignition-kargs.service...
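The iscsid warning above spells out its own remedy: create `/etc/iscsi/initiatorname.iscsi` containing a single properly formatted `InitiatorName=` line. A minimal sketch of that fix follows; the iqn date and domain are made-up examples, and a `/tmp` directory stands in for `/etc/iscsi` so the sketch runs unprivileged:

```shell
# Write a properly formatted iSCSI initiator name, as the iscsid warning asks.
# NOTE: the iqn below is a hypothetical example; /tmp/demo-iscsi stands in for /etc/iscsi.
dir=/tmp/demo-iscsi
mkdir -p "$dir"
printf 'InitiatorName=iqn.2024-02.net.example:demo-node\n' > "$dir/initiatorname.iscsi"
cat "$dir/initiatorname.iscsi"
```

On a real host the file would live at `/etc/iscsi/initiatorname.iscsi`, and iscsid would need a restart to pick it up.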
Feb 9 22:49:56.590555 ignition[867]: POST Status error: resource requires networking
Feb 9 22:49:56.774743 systemd-networkd[878]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 22:49:56.590586 ignition[867]: Ignition finished successfully
Feb 9 22:49:56.788015 systemd[1]: Starting iscsid.service...
Feb 9 22:49:56.777656 ignition[896]: Ignition 2.14.0
Feb 9 22:49:56.815515 systemd[1]: Started iscsid.service.
Feb 9 22:49:56.777660 ignition[896]: Stage: kargs
Feb 9 22:49:56.834923 systemd[1]: Starting dracut-initqueue.service...
Feb 9 22:49:56.777714 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 22:49:56.848538 systemd[1]: Finished dracut-initqueue.service.
Feb 9 22:49:56.777723 ignition[896]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 22:49:56.866517 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 22:49:56.779018 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 22:49:56.885639 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 22:49:56.780790 ignition[896]: kargs: kargs passed
Feb 9 22:49:56.919675 systemd[1]: Reached target remote-fs.target.
Feb 9 22:49:56.780793 ignition[896]: POST message to Packet Timeline
Feb 9 22:49:56.961172 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 22:49:56.780803 ignition[896]: GET https://metadata.packet.net/metadata: attempt #1
Feb 9 22:49:56.971751 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 22:49:56.788607 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34138->[::1]:53: read: connection refused
Feb 9 22:49:57.017395 systemd-networkd[878]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 22:49:56.989117 ignition[896]: GET https://metadata.packet.net/metadata: attempt #2
Feb 9 22:49:57.045821 systemd-networkd[878]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 22:49:56.989603 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58898->[::1]:53: read: connection refused
Feb 9 22:49:57.077747 systemd-networkd[878]: enp1s0f1np1: Link UP
Feb 9 22:49:57.078338 systemd-networkd[878]: enp1s0f1np1: Gained carrier
Feb 9 22:49:57.091933 systemd-networkd[878]: enp1s0f0np0: Link UP
Feb 9 22:49:57.092467 systemd-networkd[878]: eno2: Link UP
Feb 9 22:49:57.092948 systemd-networkd[878]: eno1: Link UP
Feb 9 22:49:57.390147 ignition[896]: GET https://metadata.packet.net/metadata: attempt #3
Feb 9 22:49:57.391220 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57404->[::1]:53: read: connection refused
Feb 9 22:49:57.802205 systemd-networkd[878]: enp1s0f0np0: Gained carrier
Feb 9 22:49:57.810554 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready
Feb 9 22:49:57.838829 systemd-networkd[878]: enp1s0f0np0: DHCPv4 address 147.75.49.127/31, gateway 147.75.49.126 acquired from 145.40.83.140
Feb 9 22:49:58.191695 ignition[896]: GET https://metadata.packet.net/metadata: attempt #4
Feb 9 22:49:58.193246 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54524->[::1]:53: read: connection refused
Feb 9 22:49:58.867008 systemd-networkd[878]: enp1s0f1np1: Gained IPv6LL
Feb 9 22:49:59.443010 systemd-networkd[878]: enp1s0f0np0: Gained IPv6LL
Feb 9 22:49:59.794782 ignition[896]: GET https://metadata.packet.net/metadata: attempt #5
Feb 9 22:49:59.796086 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53265->[::1]:53: read: connection refused
Feb 9 22:50:02.999402 ignition[896]: GET https://metadata.packet.net/metadata: attempt #6
Feb 9 22:50:03.038933 ignition[896]: GET result: OK
Feb 9 22:50:03.260786 ignition[896]: Ignition finished successfully
Feb 9 22:50:03.263528 systemd[1]: Finished ignition-kargs.service.
Feb 9 22:50:03.352871 kernel: kauditd_printk_skb: 3 callbacks suppressed
Feb 9 22:50:03.352909 kernel: audit: type=1130 audit(1707519003.275:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:03.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:03.285204 ignition[925]: Ignition 2.14.0
Feb 9 22:50:03.277741 systemd[1]: Starting ignition-disks.service...
Feb 9 22:50:03.285207 ignition[925]: Stage: disks
Feb 9 22:50:03.285266 ignition[925]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 22:50:03.285276 ignition[925]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 22:50:03.287359 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 22:50:03.288115 ignition[925]: disks: disks passed
Feb 9 22:50:03.288118 ignition[925]: POST message to Packet Timeline
Feb 9 22:50:03.288128 ignition[925]: GET https://metadata.packet.net/metadata: attempt #1
Feb 9 22:50:03.312543 ignition[925]: GET result: OK
Feb 9 22:50:03.592878 ignition[925]: Ignition finished successfully
Feb 9 22:50:03.595745 systemd[1]: Finished ignition-disks.service.
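The Ignition `GET … attempt #N` lines above retry the metadata fetch with delays that roughly double between attempts until DNS becomes usable (the doubling schedule is an assumption read off the log timestamps, not confirmed by the log itself). A minimal exponential-backoff sketch, with delays printed rather than slept:

```shell
# Print an exponential-backoff schedule like the retry pattern seen above.
# The initial 1s delay and doubling factor are illustrative assumptions.
delay=1
for attempt in 1 2 3 4 5 6; do
  echo "attempt #${attempt}, next retry in ${delay}s"
  delay=$((delay * 2))
done
# last line printed: attempt #6, next retry in 32s
```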
Feb 9 22:50:03.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:03.608949 systemd[1]: Reached target initrd-root-device.target.
Feb 9 22:50:03.695661 kernel: audit: type=1130 audit(1707519003.608:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:03.681602 systemd[1]: Reached target local-fs-pre.target.
Feb 9 22:50:03.681642 systemd[1]: Reached target local-fs.target.
Feb 9 22:50:03.703641 systemd[1]: Reached target sysinit.target.
Feb 9 22:50:03.717604 systemd[1]: Reached target basic.target.
Feb 9 22:50:03.718228 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 22:50:03.755726 systemd-fsck[940]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 9 22:50:03.767094 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 22:50:03.858992 kernel: audit: type=1130 audit(1707519003.775:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:03.859007 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 22:50:03.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:03.777481 systemd[1]: Mounting sysroot.mount...
Feb 9 22:50:03.866066 systemd[1]: Mounted sysroot.mount.
Feb 9 22:50:03.880663 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 22:50:03.902589 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 22:50:03.910259 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 22:50:03.927306 systemd[1]: Starting flatcar-static-network.service...
Feb 9 22:50:03.941681 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 22:50:03.941774 systemd[1]: Reached target ignition-diskful.target.
Feb 9 22:50:03.961573 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 22:50:03.985865 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 22:50:04.130536 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (951)
Feb 9 22:50:04.130554 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 22:50:04.130562 kernel: BTRFS info (device sdb6): using free space tree
Feb 9 22:50:04.130569 kernel: BTRFS info (device sdb6): has skinny extents
Feb 9 22:50:04.130580 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 9 22:50:03.998284 systemd[1]: Starting initrd-setup-root.service...
Feb 9 22:50:04.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.192409 coreos-metadata[948]: Feb 09 22:50:04.065 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 22:50:04.192409 coreos-metadata[948]: Feb 09 22:50:04.088 INFO Fetch successful
Feb 9 22:50:04.376656 kernel: audit: type=1130 audit(1707519004.138:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.376668 kernel: audit: type=1130 audit(1707519004.200:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.376678 kernel: audit: type=1131 audit(1707519004.200:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.376685 kernel: audit: type=1130 audit(1707519004.319:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.376741 coreos-metadata[947]: Feb 09 22:50:04.066 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 22:50:04.376741 coreos-metadata[947]: Feb 09 22:50:04.144 INFO Fetch successful
Feb 9 22:50:04.376741 coreos-metadata[947]: Feb 09 22:50:04.169 INFO wrote hostname ci-3510.3.2-a-e9037c933d to /sysroot/etc/hostname
Feb 9 22:50:04.426466 initrd-setup-root[958]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 22:50:04.058398 systemd[1]: Finished initrd-setup-root.service.
Feb 9 22:50:04.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.478700 initrd-setup-root[966]: cut: /sysroot/etc/group: No such file or directory
Feb 9 22:50:04.516646 kernel: audit: type=1130 audit(1707519004.450:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 22:50:04.139751 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 9 22:50:04.527663 initrd-setup-root[974]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 22:50:04.139793 systemd[1]: Finished flatcar-static-network.service.
Feb 9 22:50:04.548010 initrd-setup-root[982]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 22:50:04.200766 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 22:50:04.566679 ignition[1025]: INFO : Ignition 2.14.0
Feb 9 22:50:04.566679 ignition[1025]: INFO : Stage: mount
Feb 9 22:50:04.566679 ignition[1025]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 22:50:04.566679 ignition[1025]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 22:50:04.566679 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 22:50:04.566679 ignition[1025]: INFO : mount: mount passed
Feb 9 22:50:04.566679 ignition[1025]: INFO : POST message to Packet Timeline
Feb 9 22:50:04.566679 ignition[1025]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 22:50:04.566679 ignition[1025]: INFO : GET result: OK
Feb 9 22:50:04.319663 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 22:50:04.386005 systemd[1]: Starting ignition-mount.service...
Feb 9 22:50:04.413991 systemd[1]: Starting sysroot-boot.service...
Feb 9 22:50:04.434522 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 22:50:04.434581 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 22:50:04.435442 systemd[1]: Finished sysroot-boot.service. Feb 9 22:50:04.837553 ignition[1025]: INFO : Ignition finished successfully Feb 9 22:50:04.840136 systemd[1]: Finished ignition-mount.service. Feb 9 22:50:04.911472 kernel: audit: type=1130 audit(1707519004.853:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:04.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:04.855770 systemd[1]: Starting ignition-files.service... Feb 9 22:50:04.920218 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 22:50:04.982763 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1040) Feb 9 22:50:04.982779 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 9 22:50:04.982787 kernel: BTRFS info (device sdb6): using free space tree Feb 9 22:50:05.005973 kernel: BTRFS info (device sdb6): has skinny extents Feb 9 22:50:05.054447 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 9 22:50:05.056343 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 22:50:05.072510 ignition[1059]: INFO : Ignition 2.14.0 Feb 9 22:50:05.072510 ignition[1059]: INFO : Stage: files Feb 9 22:50:05.072510 ignition[1059]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 22:50:05.072510 ignition[1059]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 22:50:05.072510 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 22:50:05.072510 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping Feb 9 22:50:05.076902 unknown[1059]: wrote ssh authorized keys file for user: core Feb 9 22:50:05.149599 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 22:50:05.149599 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 22:50:05.149599 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 22:50:05.149599 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 22:50:05.149599 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 22:50:05.149599 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 22:50:05.149599 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 22:50:05.448839 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 22:50:05.523696 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 22:50:05.523696 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 22:50:05.556720 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 22:50:05.556720 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 22:50:05.556720 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 22:50:06.015459 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 22:50:06.128498 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 22:50:06.154649 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 22:50:06.154649 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 22:50:06.154649 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 22:50:06.535360 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 22:50:06.626552 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 22:50:06.626552 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] 
writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 22:50:06.670715 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 22:50:06.670715 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 22:50:06.886311 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 22:50:07.052852 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 22:50:07.078670 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 22:50:07.078670 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 22:50:07.078670 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 22:50:07.128568 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 22:50:07.528875 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 22:50:07.528875 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 22:50:07.571660 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 22:50:07.571660 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET 
https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 22:50:07.603526 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 22:50:07.763228 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 22:50:07.763228 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 22:50:07.805660 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 22:50:07.805660 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 22:50:07.805660 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 22:50:07.805660 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 22:50:08.228237 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 9 22:50:08.269568 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 22:50:08.296545 
ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 22:50:08.296545 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2478214811" Feb 9 22:50:08.296545 ignition[1059]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2478214811": device or resource busy Feb 9 22:50:08.296545 ignition[1059]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2478214811", trying btrfs: device or resource busy Feb 9 
22:50:08.601903 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1066) Feb 9 22:50:08.601918 kernel: audit: type=1130 audit(1707519008.543:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.601954 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2478214811" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2478214811" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem2478214811" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem2478214811" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(16): [started] processing unit "packet-phone-home.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(16): [finished] processing unit "packet-phone-home.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(17): [started] processing unit "containerd.service" Feb 9 
22:50:08.601954 ignition[1059]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(17): [finished] processing unit "containerd.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service" Feb 9 22:50:08.601954 ignition[1059]: INFO : files: op(1b): [started] processing unit "prepare-critools.service" Feb 9 22:50:09.329083 kernel: audit: type=1130 audit(1707519008.666:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.329104 kernel: audit: type=1130 audit(1707519008.734:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.329114 kernel: audit: type=1131 audit(1707519008.734:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 22:50:09.329122 kernel: audit: type=1130 audit(1707519008.895:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.329134 kernel: audit: type=1131 audit(1707519008.895:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.329143 kernel: audit: type=1130 audit(1707519009.089:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.329151 kernel: audit: type=1131 audit(1707519009.259:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 22:50:08.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.533668 systemd[1]: Finished ignition-files.service. Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(1d): [started] processing unit "prepare-helm.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(1f): [finished] setting preset to enabled for 
"prepare-critools.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(22): [started] setting preset to enabled for "packet-phone-home.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(22): [finished] setting preset to enabled for "packet-phone-home.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 22:50:09.342643 ignition[1059]: INFO : files: files passed Feb 9 22:50:09.875804 kernel: audit: type=1131 audit(1707519009.572:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.875893 kernel: audit: type=1131 audit(1707519009.665:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 22:50:09.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.549294 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 22:50:09.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.895250 ignition[1059]: INFO : POST message to Packet Timeline Feb 9 22:50:09.895250 ignition[1059]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 22:50:09.895250 ignition[1059]: INFO : GET result: OK Feb 9 22:50:09.895250 ignition[1059]: INFO : Ignition finished successfully Feb 9 22:50:09.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.610671 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Feb 9 22:50:09.971742 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 22:50:09.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.610985 systemd[1]: Starting ignition-quench.service... Feb 9 22:50:10.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.650798 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 22:50:08.666899 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 22:50:10.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.666978 systemd[1]: Finished ignition-quench.service. Feb 9 22:50:10.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.734674 systemd[1]: Reached target ignition-complete.target. Feb 9 22:50:08.857030 systemd[1]: Starting initrd-parse-etc.service... Feb 9 22:50:08.875368 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Feb 9 22:50:10.097819 ignition[1109]: INFO : Ignition 2.14.0 Feb 9 22:50:10.097819 ignition[1109]: INFO : Stage: umount Feb 9 22:50:10.097819 ignition[1109]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 22:50:10.097819 ignition[1109]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 22:50:10.097819 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 22:50:10.097819 ignition[1109]: INFO : umount: umount passed Feb 9 22:50:10.097819 ignition[1109]: INFO : POST message to Packet Timeline Feb 9 22:50:10.097819 ignition[1109]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 22:50:10.097819 ignition[1109]: INFO : GET result: OK Feb 9 22:50:10.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:10.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:10.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:10.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:10.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 22:50:10.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:10.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:10.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:08.875441 systemd[1]: Finished initrd-parse-etc.service. Feb 9 22:50:10.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:10.278935 ignition[1109]: INFO : Ignition finished successfully Feb 9 22:50:10.293000 audit: BPF prog-id=6 op=UNLOAD Feb 9 22:50:08.895938 systemd[1]: Reached target initrd-fs.target. Feb 9 22:50:10.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.027646 systemd[1]: Reached target initrd.target. Feb 9 22:50:10.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.027704 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 22:50:10.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 22:50:09.028071 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 22:50:09.062777 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 22:50:10.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.091894 systemd[1]: Starting initrd-cleanup.service... Feb 9 22:50:09.158322 systemd[1]: Stopped target nss-lookup.target. Feb 9 22:50:09.185649 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 22:50:09.212771 systemd[1]: Stopped target timers.target. Feb 9 22:50:10.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.238996 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 22:50:10.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.239265 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 22:50:10.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.260304 systemd[1]: Stopped target initrd.target. Feb 9 22:50:09.335657 systemd[1]: Stopped target basic.target. Feb 9 22:50:10.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.349658 systemd[1]: Stopped target ignition-complete.target. 
Feb 9 22:50:10.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.366784 systemd[1]: Stopped target ignition-diskful.target. Feb 9 22:50:10.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.391798 systemd[1]: Stopped target initrd-root-device.target. Feb 9 22:50:10.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:10.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:09.418808 systemd[1]: Stopped target remote-fs.target. Feb 9 22:50:09.439014 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 22:50:09.465038 systemd[1]: Stopped target sysinit.target. Feb 9 22:50:09.490042 systemd[1]: Stopped target local-fs.target. Feb 9 22:50:09.510115 systemd[1]: Stopped target local-fs-pre.target. Feb 9 22:50:09.530993 systemd[1]: Stopped target swap.target. Feb 9 22:50:09.551010 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 22:50:09.551378 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 22:50:09.573239 systemd[1]: Stopped target cryptsetup.target. Feb 9 22:50:09.650689 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 22:50:09.650768 systemd[1]: Stopped dracut-initqueue.service. Feb 9 22:50:10.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 22:50:09.665842 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 22:50:09.665916 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 22:50:10.651000 audit: BPF prog-id=5 op=UNLOAD Feb 9 22:50:10.651000 audit: BPF prog-id=4 op=UNLOAD Feb 9 22:50:10.651000 audit: BPF prog-id=3 op=UNLOAD Feb 9 22:50:10.652000 audit: BPF prog-id=8 op=UNLOAD Feb 9 22:50:10.652000 audit: BPF prog-id=7 op=UNLOAD Feb 9 22:50:09.735742 systemd[1]: Stopped target paths.target. Feb 9 22:50:09.749783 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 22:50:10.693807 iscsid[906]: iscsid shutting down. Feb 9 22:50:09.753639 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 22:50:09.768781 systemd[1]: Stopped target slices.target. Feb 9 22:50:09.796828 systemd[1]: Stopped target sockets.target. Feb 9 22:50:09.818831 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 22:50:09.818996 systemd[1]: Closed iscsid.socket. Feb 9 22:50:09.842193 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 22:50:09.842583 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 22:50:09.868124 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 22:50:09.868495 systemd[1]: Stopped ignition-files.service. Feb 9 22:50:09.884078 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 22:50:09.884450 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 22:50:09.905176 systemd[1]: Stopping ignition-mount.service... Feb 9 22:50:09.911632 systemd[1]: Stopping iscsiuio.service... Feb 9 22:50:09.928220 systemd[1]: Stopping sysroot-boot.service... Feb 9 22:50:09.954674 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 22:50:09.955174 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 22:50:09.980242 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 9 22:50:09.980618 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 22:50:10.010056 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 22:50:10.011988 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 22:50:10.012234 systemd[1]: Stopped iscsiuio.service. Feb 9 22:50:10.027113 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 22:50:10.027337 systemd[1]: Stopped sysroot-boot.service. Feb 9 22:50:10.043062 systemd[1]: Stopped target network.target. Feb 9 22:50:10.056850 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 22:50:10.056953 systemd[1]: Closed iscsiuio.socket. Feb 9 22:50:10.071993 systemd[1]: Stopping systemd-networkd.service... Feb 9 22:50:10.079587 systemd-networkd[878]: enp1s0f1np1: DHCPv6 lease lost Feb 9 22:50:10.087778 systemd-networkd[878]: enp1s0f0np0: DHCPv6 lease lost Feb 9 22:50:10.695481 systemd-journald[267]: Failed to send stream file descriptor to service manager: Connection refused Feb 9 22:50:10.695504 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Feb 9 22:50:10.694000 audit: BPF prog-id=9 op=UNLOAD Feb 9 22:50:10.088089 systemd[1]: Stopping systemd-resolved.service... Feb 9 22:50:10.106612 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 22:50:10.106852 systemd[1]: Stopped systemd-resolved.service. Feb 9 22:50:10.124036 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 22:50:10.124351 systemd[1]: Stopped systemd-networkd.service. Feb 9 22:50:10.139386 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 22:50:10.139619 systemd[1]: Finished initrd-cleanup.service. Feb 9 22:50:10.157718 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 22:50:10.157767 systemd[1]: Stopped ignition-mount.service. Feb 9 22:50:10.187545 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 22:50:10.187583 systemd[1]: Closed systemd-networkd.socket. 
Feb 9 22:50:10.205748 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 22:50:10.205888 systemd[1]: Stopped ignition-disks.service. Feb 9 22:50:10.221776 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 22:50:10.221907 systemd[1]: Stopped ignition-kargs.service. Feb 9 22:50:10.237877 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 22:50:10.238020 systemd[1]: Stopped ignition-setup.service. Feb 9 22:50:10.255849 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 22:50:10.256000 systemd[1]: Stopped initrd-setup-root.service. Feb 9 22:50:10.273570 systemd[1]: Stopping network-cleanup.service... Feb 9 22:50:10.285631 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 22:50:10.285799 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 22:50:10.301839 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 22:50:10.301983 systemd[1]: Stopped systemd-sysctl.service. Feb 9 22:50:10.317070 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 22:50:10.317211 systemd[1]: Stopped systemd-modules-load.service. Feb 9 22:50:10.332069 systemd[1]: Stopping systemd-udevd.service... Feb 9 22:50:10.353504 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 22:50:10.355028 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 22:50:10.355358 systemd[1]: Stopped systemd-udevd.service. Feb 9 22:50:10.368386 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 22:50:10.368524 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 22:50:10.380739 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 22:50:10.380834 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 22:50:10.396674 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 22:50:10.396798 systemd[1]: Stopped dracut-pre-udev.service. 
Feb 9 22:50:10.418775 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 22:50:10.418820 systemd[1]: Stopped dracut-cmdline.service. Feb 9 22:50:10.433663 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 22:50:10.433717 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 22:50:10.450635 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 22:50:10.464499 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 22:50:10.464528 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 22:50:10.478610 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 22:50:10.478632 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 22:50:10.493509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 22:50:10.493531 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 22:50:10.510079 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 22:50:10.510322 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 22:50:10.510360 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 22:50:10.599568 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 22:50:10.599812 systemd[1]: Stopped network-cleanup.service. Feb 9 22:50:10.617104 systemd[1]: Reached target initrd-switch-root.target. Feb 9 22:50:10.633643 systemd[1]: Starting initrd-switch-root.service... Feb 9 22:50:10.649825 systemd[1]: Switching root. Feb 9 22:50:10.696574 systemd-journald[267]: Journal stopped Feb 9 22:50:14.463670 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 22:50:14.463684 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 22:50:14.463693 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 22:50:14.463699 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 22:50:14.463704 kernel: SELinux: policy capability open_perms=1 Feb 9 22:50:14.463710 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 22:50:14.463716 kernel: SELinux: policy capability always_check_network=0 Feb 9 22:50:14.463721 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 22:50:14.463726 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 22:50:14.463733 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 22:50:14.463738 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 22:50:14.463744 systemd[1]: Successfully loaded SELinux policy in 319.526ms. Feb 9 22:50:14.463751 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.980ms. Feb 9 22:50:14.463758 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 22:50:14.463766 systemd[1]: Detected architecture x86-64. Feb 9 22:50:14.463772 systemd[1]: Detected first boot. Feb 9 22:50:14.463778 systemd[1]: Hostname set to . Feb 9 22:50:14.463784 systemd[1]: Initializing machine ID from random generator. Feb 9 22:50:14.463790 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 22:50:14.463796 systemd[1]: Populated /etc with preset unit settings. Feb 9 22:50:14.463802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 22:50:14.463810 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 22:50:14.463817 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 22:50:14.463823 systemd[1]: Queued start job for default target multi-user.target. Feb 9 22:50:14.463830 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 22:50:14.463836 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 22:50:14.463843 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 22:50:14.463850 systemd[1]: Created slice system-getty.slice. Feb 9 22:50:14.463856 systemd[1]: Created slice system-modprobe.slice. Feb 9 22:50:14.463862 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 22:50:14.463868 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 22:50:14.463875 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 22:50:14.463881 systemd[1]: Created slice user.slice. Feb 9 22:50:14.463887 systemd[1]: Started systemd-ask-password-console.path. Feb 9 22:50:14.463893 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 22:50:14.463899 systemd[1]: Set up automount boot.automount. Feb 9 22:50:14.463906 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 22:50:14.463912 systemd[1]: Reached target integritysetup.target. Feb 9 22:50:14.463918 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 22:50:14.463925 systemd[1]: Reached target remote-fs.target. Feb 9 22:50:14.463933 systemd[1]: Reached target slices.target. Feb 9 22:50:14.463939 systemd[1]: Reached target swap.target. Feb 9 22:50:14.463945 systemd[1]: Reached target torcx.target. Feb 9 22:50:14.463952 systemd[1]: Reached target veritysetup.target. 
Feb 9 22:50:14.463959 systemd[1]: Listening on systemd-coredump.socket. Feb 9 22:50:14.463966 systemd[1]: Listening on systemd-initctl.socket. Feb 9 22:50:14.463973 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 22:50:14.463979 kernel: kauditd_printk_skb: 49 callbacks suppressed Feb 9 22:50:14.463985 kernel: audit: type=1400 audit(1707519013.716:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 22:50:14.463992 kernel: audit: type=1335 audit(1707519013.716:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 22:50:14.463998 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 22:50:14.464004 systemd[1]: Listening on systemd-journald.socket. Feb 9 22:50:14.464012 systemd[1]: Listening on systemd-networkd.socket. Feb 9 22:50:14.464018 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 22:50:14.464025 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 22:50:14.464031 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 22:50:14.464039 systemd[1]: Mounting dev-hugepages.mount... Feb 9 22:50:14.464045 systemd[1]: Mounting dev-mqueue.mount... Feb 9 22:50:14.464052 systemd[1]: Mounting media.mount... Feb 9 22:50:14.464058 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 22:50:14.464065 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 22:50:14.464071 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 22:50:14.464078 systemd[1]: Mounting tmp.mount... Feb 9 22:50:14.464084 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 22:50:14.464091 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Feb 9 22:50:14.464099 systemd[1]: Starting kmod-static-nodes.service... Feb 9 22:50:14.464106 systemd[1]: Starting modprobe@configfs.service... Feb 9 22:50:14.464112 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 22:50:14.464119 systemd[1]: Starting modprobe@drm.service... Feb 9 22:50:14.464125 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 22:50:14.464132 systemd[1]: Starting modprobe@fuse.service... Feb 9 22:50:14.464138 kernel: fuse: init (API version 7.34) Feb 9 22:50:14.464144 systemd[1]: Starting modprobe@loop.service... Feb 9 22:50:14.464151 kernel: loop: module loaded Feb 9 22:50:14.464158 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 22:50:14.464165 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 22:50:14.464171 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 22:50:14.464177 systemd[1]: Starting systemd-journald.service... Feb 9 22:50:14.464184 systemd[1]: Starting systemd-modules-load.service... Feb 9 22:50:14.464190 kernel: audit: type=1305 audit(1707519014.460:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 22:50:14.464199 systemd-journald[1298]: Journal started Feb 9 22:50:14.464224 systemd-journald[1298]: Runtime Journal (/run/log/journal/4f01d4de726348e4a70db8629e2a8e14) is 8.0M, max 640.1M, 632.1M free. 
Feb 9 22:50:13.716000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 22:50:13.716000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 22:50:14.460000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 22:50:14.460000 audit[1298]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff25e550a0 a2=4000 a3=7fff25e5513c items=0 ppid=1 pid=1298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 22:50:14.577102 kernel: audit: type=1300 audit(1707519014.460:94): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff25e550a0 a2=4000 a3=7fff25e5513c items=0 ppid=1 pid=1298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 22:50:14.577122 systemd[1]: Starting systemd-network-generator.service... Feb 9 22:50:14.577134 kernel: audit: type=1327 audit(1707519014.460:94): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 22:50:14.460000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 22:50:14.645614 systemd[1]: Starting systemd-remount-fs.service... Feb 9 22:50:14.670418 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 22:50:14.714445 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 22:50:14.734604 systemd[1]: Started systemd-journald.service. 
Feb 9 22:50:14.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:14.742147 systemd[1]: Mounted dev-hugepages.mount. Feb 9 22:50:14.790611 kernel: audit: type=1130 audit(1707519014.741:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:14.796676 systemd[1]: Mounted dev-mqueue.mount. Feb 9 22:50:14.803665 systemd[1]: Mounted media.mount. Feb 9 22:50:14.810648 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 22:50:14.819669 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 22:50:14.828636 systemd[1]: Mounted tmp.mount. Feb 9 22:50:14.835763 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 22:50:14.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:14.844766 systemd[1]: Finished kmod-static-nodes.service. Feb 9 22:50:14.893575 kernel: audit: type=1130 audit(1707519014.844:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:14.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:14.900730 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 22:50:14.900807 systemd[1]: Finished modprobe@configfs.service. 
Feb 9 22:50:14.950602 kernel: audit: type=1130 audit(1707519014.900:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:14.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:14.958868 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 22:50:14.958977 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 22:50:14.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.010468 kernel: audit: type=1130 audit(1707519014.958:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.010517 kernel: audit: type=1131 audit(1707519014.958:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.069776 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 9 22:50:15.069864 systemd[1]: Finished modprobe@drm.service. Feb 9 22:50:15.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.078777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 22:50:15.078866 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 22:50:15.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.087776 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 22:50:15.087862 systemd[1]: Finished modprobe@fuse.service. Feb 9 22:50:15.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.096806 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 22:50:15.096900 systemd[1]: Finished modprobe@loop.service. 
Feb 9 22:50:15.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.105797 systemd[1]: Finished systemd-modules-load.service. Feb 9 22:50:15.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.114747 systemd[1]: Finished systemd-network-generator.service. Feb 9 22:50:15.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.123784 systemd[1]: Finished systemd-remount-fs.service. Feb 9 22:50:15.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.131843 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 22:50:15.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.141997 systemd[1]: Reached target network-pre.target. Feb 9 22:50:15.152216 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 22:50:15.162680 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 9 22:50:15.169647 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 22:50:15.170572 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 22:50:15.178100 systemd[1]: Starting systemd-journal-flush.service... Feb 9 22:50:15.181623 systemd-journald[1298]: Time spent on flushing to /var/log/journal/4f01d4de726348e4a70db8629e2a8e14 is 14.793ms for 1558 entries. Feb 9 22:50:15.181623 systemd-journald[1298]: System Journal (/var/log/journal/4f01d4de726348e4a70db8629e2a8e14) is 8.0M, max 195.6M, 187.6M free. Feb 9 22:50:15.228731 systemd-journald[1298]: Received client request to flush runtime journal. Feb 9 22:50:15.194535 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 22:50:15.195050 systemd[1]: Starting systemd-random-seed.service... Feb 9 22:50:15.212534 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 22:50:15.213113 systemd[1]: Starting systemd-sysctl.service... Feb 9 22:50:15.220073 systemd[1]: Starting systemd-sysusers.service... Feb 9 22:50:15.227102 systemd[1]: Starting systemd-udev-settle.service... Feb 9 22:50:15.234921 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 22:50:15.243542 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 22:50:15.251674 systemd[1]: Finished systemd-journal-flush.service. Feb 9 22:50:15.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.259682 systemd[1]: Finished systemd-random-seed.service. Feb 9 22:50:15.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 22:50:15.267783 systemd[1]: Finished systemd-sysctl.service. Feb 9 22:50:15.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.275671 systemd[1]: Finished systemd-sysusers.service. Feb 9 22:50:15.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.285191 systemd[1]: Reached target first-boot-complete.target. Feb 9 22:50:15.294456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 22:50:15.304103 udevadm[1327]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 22:50:15.313020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 22:50:15.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.478095 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 22:50:15.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.487307 systemd[1]: Starting systemd-udevd.service... Feb 9 22:50:15.498842 systemd-udevd[1336]: Using default interface naming scheme 'v252'. Feb 9 22:50:15.516269 systemd[1]: Started systemd-udevd.service. 
Feb 9 22:50:15.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.526812 systemd[1]: Found device dev-ttyS1.device. Feb 9 22:50:15.548981 systemd[1]: Starting systemd-networkd.service... Feb 9 22:50:15.573338 systemd[1]: Starting systemd-userdbd.service... Feb 9 22:50:15.574991 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 22:50:15.575036 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 22:50:15.575055 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 22:50:15.575418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 22:50:15.575445 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1352) Feb 9 22:50:15.646420 kernel: IPMI message handler: version 39.2 Feb 9 22:50:15.668461 kernel: ACPI: button: Power Button [PWRF] Feb 9 22:50:15.673568 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). 
Feb 9 22:50:15.562000 audit[1402]: AVC avc: denied { confidentiality } for pid=1402 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 22:50:15.562000 audit[1402]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fc3db186010 a1=4d8bc a2=7fc3dce37bc5 a3=5 items=42 ppid=1336 pid=1402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 22:50:15.562000 audit: CWD cwd="/" Feb 9 22:50:15.562000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=1 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=2 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=3 name=(null) inode=25363 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=4 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=5 name=(null) inode=25364 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=6 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 22:50:15.562000 audit: PATH item=7 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=8 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=9 name=(null) inode=25366 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=10 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=11 name=(null) inode=25367 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=12 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=13 name=(null) inode=25368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=14 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=15 name=(null) inode=25369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 22:50:15.562000 audit: PATH item=16 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=17 name=(null) inode=25370 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=18 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=19 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=20 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=21 name=(null) inode=25372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=22 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=23 name=(null) inode=25373 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=24 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 22:50:15.562000 audit: PATH item=25 name=(null) inode=25374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=26 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=27 name=(null) inode=25375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=28 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=29 name=(null) inode=25376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=30 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=31 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=32 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=33 name=(null) inode=25378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 22:50:15.562000 audit: PATH item=34 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=35 name=(null) inode=25379 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=36 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=37 name=(null) inode=25380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=38 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=39 name=(null) inode=25381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=40 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PATH item=41 name=(null) inode=25382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 22:50:15.562000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 22:50:15.733423 kernel: ipmi device interface Feb 9 22:50:15.733981 systemd[1]: Started systemd-userdbd.service. 
Feb 9 22:50:15.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:15.778160 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 22:50:15.778327 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 22:50:15.801420 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 22:50:15.801545 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 22:50:15.801742 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 22:50:15.897811 kernel: ipmi_si: IPMI System Interface driver Feb 9 22:50:15.897849 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 22:50:15.897938 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 22:50:15.943135 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 22:50:15.943164 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 22:50:15.985423 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 22:50:15.985490 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 22:50:16.059760 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 22:50:16.059884 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 22:50:16.059898 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 22:50:16.152308 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 9 22:50:16.152409 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 9 22:50:16.152489 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Feb 9 22:50:16.174415 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 9 22:50:16.237222 systemd-networkd[1409]: bond0: netdev ready Feb 9 22:50:16.239321 systemd-networkd[1409]: lo: Link UP Feb 9 22:50:16.239324 systemd-networkd[1409]: lo: Gained carrier Feb 9 22:50:16.239790 systemd-networkd[1409]: Enumeration completed Feb 9 22:50:16.239861 systemd[1]: Started systemd-networkd.service. Feb 9 22:50:16.240065 systemd-networkd[1409]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 9 22:50:16.245198 systemd-networkd[1409]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:79.network. Feb 9 22:50:16.252168 kernel: intel_rapl_common: Found RAPL domain package Feb 9 22:50:16.252203 kernel: intel_rapl_common: Found RAPL domain core Feb 9 22:50:16.252227 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 22:50:16.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:16.312416 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 22:50:16.333417 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 22:50:16.335770 systemd[1]: Finished systemd-udev-settle.service. Feb 9 22:50:16.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:16.344244 systemd[1]: Starting lvm2-activation-early.service... Feb 9 22:50:16.359818 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 22:50:16.392914 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 22:50:16.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:16.402556 systemd[1]: Reached target cryptsetup.target. Feb 9 22:50:16.412133 systemd[1]: Starting lvm2-activation.service... Feb 9 22:50:16.414460 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 22:50:16.439903 systemd[1]: Finished lvm2-activation.service. Feb 9 22:50:16.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:16.448637 systemd[1]: Reached target local-fs-pre.target. Feb 9 22:50:16.456501 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 22:50:16.456515 systemd[1]: Reached target local-fs.target. Feb 9 22:50:16.465468 systemd[1]: Reached target machines.target. Feb 9 22:50:16.474220 systemd[1]: Starting ldconfig.service... Feb 9 22:50:16.481814 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 22:50:16.481835 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 22:50:16.482568 systemd[1]: Starting systemd-boot-update.service... Feb 9 22:50:16.489950 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 22:50:16.500168 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 22:50:16.500267 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. 
Feb 9 22:50:16.500339 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 22:50:16.501050 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 22:50:16.501300 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1445 (bootctl) Feb 9 22:50:16.501966 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 22:50:16.514076 systemd-tmpfiles[1449]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 22:50:16.520979 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 22:50:16.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:16.539218 systemd-tmpfiles[1449]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 22:50:16.546372 systemd-tmpfiles[1449]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 22:50:16.732458 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 22:50:16.759457 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 9 22:50:16.760996 systemd-networkd[1409]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. 
Feb 9 22:50:16.786467 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 22:50:16.913456 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 22:50:16.913502 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 9 22:50:16.957466 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 9 22:50:16.957490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 22:50:16.978353 systemd-networkd[1409]: bond0: Link UP Feb 9 22:50:16.978579 systemd-networkd[1409]: enp1s0f1np1: Link UP Feb 9 22:50:16.978707 systemd-networkd[1409]: enp1s0f1np1: Gained carrier Feb 9 22:50:16.979662 systemd-networkd[1409]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Feb 9 22:50:17.020019 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 22:50:17.020047 kernel: bond0: active interface up! Feb 9 22:50:17.042415 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 9 22:50:17.118576 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 22:50:17.118954 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 22:50:17.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:17.120280 systemd-networkd[1409]: enp1s0f0np0: Link UP Feb 9 22:50:17.120457 systemd-networkd[1409]: bond0: Gained carrier Feb 9 22:50:17.120543 systemd-networkd[1409]: enp1s0f0np0: Gained carrier Feb 9 22:50:17.128612 systemd-fsck[1454]: fsck.fat 4.2 (2021-01-31) Feb 9 22:50:17.128612 systemd-fsck[1454]: /dev/sdb1: 789 files, 115339/258078 clusters Feb 9 22:50:17.129345 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Feb 9 22:50:17.131877 systemd-networkd[1409]: enp1s0f1np1: Link DOWN Feb 9 22:50:17.131880 systemd-networkd[1409]: enp1s0f1np1: Lost carrier Feb 9 22:50:17.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:17.140559 systemd[1]: Mounting boot.mount... Feb 9 22:50:17.169417 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 22:50:17.170564 systemd[1]: Mounted boot.mount. Feb 9 22:50:17.192416 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 22:50:17.197737 systemd[1]: Finished systemd-boot-update.service. Feb 9 22:50:17.213416 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 22:50:17.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:17.230737 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 22:50:17.234414 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 22:50:17.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 22:50:17.252505 systemd[1]: Starting audit-rules.service... 
Feb 9 22:50:17.257280 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 22:50:17.271000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 22:50:17.271000 audit[1479]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdb2200010 a2=420 a3=0 items=0 ppid=1464 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 22:50:17.271000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 22:50:17.272467 augenrules[1479]: No rules Feb 9 22:50:17.274172 systemd[1]: Starting clean-ca-certificates.service... Feb 9 22:50:17.278462 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 22:50:17.278496 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 22:50:17.294469 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 22:50:17.297980 systemd-networkd[1409]: enp1s0f1np1: Link UP Feb 9 22:50:17.297983 systemd-networkd[1409]: enp1s0f1np1: Gained carrier Feb 9 22:50:17.312414 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Feb 9 22:50:17.330649 ldconfig[1444]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 22:50:17.337204 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 22:50:17.346277 systemd[1]: Starting systemd-resolved.service... Feb 9 22:50:17.355273 systemd[1]: Starting systemd-timesyncd.service... Feb 9 22:50:17.363086 systemd[1]: Starting systemd-update-utmp.service... Feb 9 22:50:17.369828 systemd[1]: Finished ldconfig.service. Feb 9 22:50:17.377668 systemd[1]: Finished audit-rules.service. 
Feb 9 22:50:17.384668 systemd[1]: Finished clean-ca-certificates.service. Feb 9 22:50:17.392649 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 22:50:17.404266 systemd[1]: Starting systemd-update-done.service... Feb 9 22:50:17.411476 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 22:50:17.411924 systemd[1]: Finished systemd-update-done.service. Feb 9 22:50:17.425928 systemd[1]: Finished systemd-update-utmp.service. Feb 9 22:50:17.435416 kernel: bond0: (slave enp1s0f1np1): link status up again after 100 ms Feb 9 22:50:17.455416 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 22:50:17.462052 systemd[1]: Started systemd-timesyncd.service. Feb 9 22:50:17.464716 systemd-resolved[1489]: Positive Trust Anchors: Feb 9 22:50:17.464722 systemd-resolved[1489]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 22:50:17.464740 systemd-resolved[1489]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 22:50:17.468743 systemd-resolved[1489]: Using system hostname 'ci-3510.3.2-a-e9037c933d'. Feb 9 22:50:17.470628 systemd[1]: Started systemd-resolved.service. Feb 9 22:50:17.478547 systemd[1]: Reached target network.target. Feb 9 22:50:17.486502 systemd[1]: Reached target nss-lookup.target. Feb 9 22:50:17.494497 systemd[1]: Reached target sysinit.target. Feb 9 22:50:17.502537 systemd[1]: Started motdgen.path. 
Feb 9 22:50:17.509507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 22:50:17.519506 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 22:50:17.527485 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 22:50:17.527499 systemd[1]: Reached target paths.target. Feb 9 22:50:17.534497 systemd[1]: Reached target time-set.target. Feb 9 22:50:17.542553 systemd[1]: Started logrotate.timer. Feb 9 22:50:17.549531 systemd[1]: Started mdadm.timer. Feb 9 22:50:17.556480 systemd[1]: Reached target timers.target. Feb 9 22:50:17.563621 systemd[1]: Listening on dbus.socket. Feb 9 22:50:17.571078 systemd[1]: Starting docker.socket... Feb 9 22:50:17.578311 systemd[1]: Listening on sshd.socket. Feb 9 22:50:17.585561 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 22:50:17.585745 systemd[1]: Listening on docker.socket. Feb 9 22:50:17.592515 systemd[1]: Reached target sockets.target. Feb 9 22:50:17.600493 systemd[1]: Reached target basic.target. Feb 9 22:50:17.607550 systemd[1]: System is tainted: cgroupsv1 Feb 9 22:50:17.607573 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 22:50:17.607585 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 22:50:17.608076 systemd[1]: Starting containerd.service... Feb 9 22:50:17.615012 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 22:50:17.624033 systemd[1]: Starting coreos-metadata.service... Feb 9 22:50:17.631050 systemd[1]: Starting dbus.service... Feb 9 22:50:17.637029 systemd[1]: Starting enable-oem-cloudinit.service... 
Feb 9 22:50:17.641392 jq[1508]: false Feb 9 22:50:17.644060 coreos-metadata[1501]: Feb 09 22:50:17.644 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 22:50:17.644102 systemd[1]: Starting extend-filesystems.service... Feb 9 22:50:17.649935 dbus-daemon[1507]: [system] SELinux support is enabled Feb 9 22:50:17.650468 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 22:50:17.651205 systemd[1]: Starting motdgen.service... Feb 9 22:50:17.652004 extend-filesystems[1510]: Found sda Feb 9 22:50:17.673602 extend-filesystems[1510]: Found sdb Feb 9 22:50:17.673602 extend-filesystems[1510]: Found sdb1 Feb 9 22:50:17.673602 extend-filesystems[1510]: Found sdb2 Feb 9 22:50:17.673602 extend-filesystems[1510]: Found sdb3 Feb 9 22:50:17.673602 extend-filesystems[1510]: Found usr Feb 9 22:50:17.673602 extend-filesystems[1510]: Found sdb4 Feb 9 22:50:17.673602 extend-filesystems[1510]: Found sdb6 Feb 9 22:50:17.673602 extend-filesystems[1510]: Found sdb7 Feb 9 22:50:17.673602 extend-filesystems[1510]: Found sdb9 Feb 9 22:50:17.673602 extend-filesystems[1510]: Checking size of /dev/sdb9 Feb 9 22:50:17.673602 extend-filesystems[1510]: Resized partition /dev/sdb9 Feb 9 22:50:17.797522 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Feb 9 22:50:17.797554 coreos-metadata[1504]: Feb 09 22:50:17.652 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 22:50:17.659325 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 22:50:17.797728 extend-filesystems[1526]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 22:50:17.681093 systemd[1]: Starting prepare-critools.service... Feb 9 22:50:17.706183 systemd[1]: Starting prepare-helm.service... Feb 9 22:50:17.725110 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 22:50:17.745100 systemd[1]: Starting sshd-keygen.service... 
Feb 9 22:50:17.764544 systemd[1]: Starting systemd-logind.service... Feb 9 22:50:17.777501 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 22:50:17.778139 systemd[1]: Starting tcsd.service... Feb 9 22:50:17.786212 systemd[1]: Starting update-engine.service... Feb 9 22:50:17.813054 jq[1547]: true Feb 9 22:50:17.788604 systemd-logind[1544]: Watching system buttons on /dev/input/event3 (Power Button) Feb 9 22:50:17.788614 systemd-logind[1544]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 22:50:17.788624 systemd-logind[1544]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 9 22:50:17.788773 systemd-logind[1544]: New seat seat0. Feb 9 22:50:17.805348 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 22:50:17.821945 systemd[1]: Started dbus.service. Feb 9 22:50:17.829545 update_engine[1546]: I0209 22:50:17.829114 1546 main.cc:92] Flatcar Update Engine starting Feb 9 22:50:17.832309 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 22:50:17.832403 update_engine[1546]: I0209 22:50:17.832386 1546 update_check_scheduler.cc:74] Next update check in 4m18s Feb 9 22:50:17.832486 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 22:50:17.832722 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 22:50:17.832878 systemd[1]: Finished motdgen.service. Feb 9 22:50:17.842014 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 22:50:17.842175 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 22:50:17.846315 tar[1551]: ./ Feb 9 22:50:17.846315 tar[1551]: ./macvlan Feb 9 22:50:17.853124 jq[1557]: true Feb 9 22:50:17.853273 tar[1553]: linux-amd64/helm Feb 9 22:50:17.853781 dbus-daemon[1507]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 22:50:17.854761 tar[1552]: crictl Feb 9 22:50:17.858124 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 22:50:17.858305 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 9 22:50:17.861873 systemd[1]: Started systemd-logind.service. Feb 9 22:50:17.863744 env[1558]: time="2024-02-09T22:50:17.863720428Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 22:50:17.868885 tar[1551]: ./static Feb 9 22:50:17.871485 systemd[1]: Started update-engine.service. Feb 9 22:50:17.871945 env[1558]: time="2024-02-09T22:50:17.871930263Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 22:50:17.872703 env[1558]: time="2024-02-09T22:50:17.872691740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 22:50:17.873700 env[1558]: time="2024-02-09T22:50:17.873678534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 22:50:17.874813 env[1558]: time="2024-02-09T22:50:17.873698698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 22:50:17.875297 env[1558]: time="2024-02-09T22:50:17.875282147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 22:50:17.875337 env[1558]: time="2024-02-09T22:50:17.875296089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 22:50:17.875337 env[1558]: time="2024-02-09T22:50:17.875308694Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 22:50:17.875337 env[1558]: time="2024-02-09T22:50:17.875318912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 22:50:17.875425 env[1558]: time="2024-02-09T22:50:17.875381435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 22:50:17.877525 env[1558]: time="2024-02-09T22:50:17.877511749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 22:50:17.877632 env[1558]: time="2024-02-09T22:50:17.877619116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 22:50:17.877670 env[1558]: time="2024-02-09T22:50:17.877631618Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 22:50:17.877701 env[1558]: time="2024-02-09T22:50:17.877673503Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 22:50:17.877701 env[1558]: time="2024-02-09T22:50:17.877685061Z" level=info msg="metadata content store policy set" policy=shared Feb 9 22:50:17.881609 systemd[1]: Started locksmithd.service.
Feb 9 22:50:17.882751 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Feb 9 22:50:17.888097 env[1558]: time="2024-02-09T22:50:17.888077063Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 22:50:17.888136 env[1558]: time="2024-02-09T22:50:17.888105816Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 22:50:17.888136 env[1558]: time="2024-02-09T22:50:17.888119989Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 22:50:17.888176 env[1558]: time="2024-02-09T22:50:17.888146746Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 22:50:17.888176 env[1558]: time="2024-02-09T22:50:17.888159927Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 22:50:17.888176 env[1558]: time="2024-02-09T22:50:17.888172292Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 22:50:17.888230 env[1558]: time="2024-02-09T22:50:17.888183654Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 22:50:17.888230 env[1558]: time="2024-02-09T22:50:17.888197172Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 22:50:17.888230 env[1558]: time="2024-02-09T22:50:17.888208664Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 22:50:17.888230 env[1558]: time="2024-02-09T22:50:17.888220812Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 9 22:50:17.888296 env[1558]: time="2024-02-09T22:50:17.888231864Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 22:50:17.888296 env[1558]: time="2024-02-09T22:50:17.888242080Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 22:50:17.888328 env[1558]: time="2024-02-09T22:50:17.888313075Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 22:50:17.888387 env[1558]: time="2024-02-09T22:50:17.888377926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 22:50:17.888595 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 22:50:17.888659 env[1558]: time="2024-02-09T22:50:17.888635964Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 22:50:17.888682 env[1558]: time="2024-02-09T22:50:17.888659428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888682 env[1558]: time="2024-02-09T22:50:17.888673531Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 22:50:17.888726 env[1558]: time="2024-02-09T22:50:17.888711303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888726 env[1558]: time="2024-02-09T22:50:17.888723220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888767 env[1558]: time="2024-02-09T22:50:17.888734217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 9 22:50:17.888767 env[1558]: time="2024-02-09T22:50:17.888743317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888767 env[1558]: time="2024-02-09T22:50:17.888753136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888740 systemd[1]: Reached target system-config.target. Feb 9 22:50:17.888858 env[1558]: time="2024-02-09T22:50:17.888764544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888858 env[1558]: time="2024-02-09T22:50:17.888775807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888858 env[1558]: time="2024-02-09T22:50:17.888794384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888858 env[1558]: time="2024-02-09T22:50:17.888811482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 22:50:17.888929 env[1558]: time="2024-02-09T22:50:17.888907182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888929 env[1558]: time="2024-02-09T22:50:17.888920283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888964 env[1558]: time="2024-02-09T22:50:17.888931816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.888964 env[1558]: time="2024-02-09T22:50:17.888942512Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 22:50:17.888964 env[1558]: time="2024-02-09T22:50:17.888957089Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 22:50:17.889010 env[1558]: time="2024-02-09T22:50:17.888967192Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 22:50:17.889010 env[1558]: time="2024-02-09T22:50:17.888982424Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 22:50:17.889060 env[1558]: time="2024-02-09T22:50:17.889009446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 22:50:17.889222 env[1558]: time="2024-02-09T22:50:17.889178501Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889232883Z" level=info msg="Connect containerd service" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889259101Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889668276Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889761306Z" level=info msg="Start subscribing containerd event" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889797397Z" level=info msg="Start recovering state" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889817252Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889834884Z" level=info msg="Start event monitor" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889843350Z" level=info msg="Start snapshots syncer" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889848465Z" level=info msg="Start cni network conf syncer for default" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889854377Z" level=info msg="Start streaming server" Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889846339Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 22:50:17.891047 env[1558]: time="2024-02-09T22:50:17.889924951Z" level=info msg="containerd successfully booted in 0.026569s" Feb 9 22:50:17.892687 tar[1551]: ./vlan Feb 9 22:50:17.896585 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 22:50:17.896703 systemd[1]: Reached target user-config.target. Feb 9 22:50:17.907044 systemd[1]: Started containerd.service. Feb 9 22:50:17.913532 tar[1551]: ./portmap Feb 9 22:50:17.913845 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 22:50:17.933272 tar[1551]: ./host-local Feb 9 22:50:17.939254 locksmithd[1596]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 22:50:17.950864 tar[1551]: ./vrf Feb 9 22:50:17.970026 tar[1551]: ./bridge Feb 9 22:50:17.992978 tar[1551]: ./tuning Feb 9 22:50:18.011272 tar[1551]: ./firewall Feb 9 22:50:18.034457 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 22:50:18.035042 tar[1551]: ./host-device Feb 9 22:50:18.046457 systemd[1]: Finished sshd-keygen.service. Feb 9 22:50:18.054789 systemd[1]: Starting issuegen.service... Feb 9 22:50:18.055710 tar[1551]: ./sbr Feb 9 22:50:18.061783 systemd[1]: issuegen.service: Deactivated successfully. 
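The flattened `Start cri plugin with config {...}` dump logged above is containerd's internal Go struct; expressed as a `config.toml` fragment, the key values would look roughly like this (a sketch of the equivalent TOML keys, not a file actually read from this host):

```toml
# Sketch of /etc/containerd/config.toml equivalents for the values in the
# CRI config dump above: overlayfs snapshotter, runc v2 runtime,
# cgroupfs (SystemdCgroup=false) driver, pause:3.6 sandbox image.
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/opt/cni/bin"
  conf_dir = "/etc/cni/net.d"
```

The `failed to load cni during init` error in the same dump is consistent with this: `conf_dir` (/etc/cni/net.d) is empty at this point in boot, and the CNI conf syncer populates it later.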
Feb 9 22:50:18.061945 systemd[1]: Finished issuegen.service. Feb 9 22:50:18.070713 systemd[1]: Starting systemd-user-sessions.service... Feb 9 22:50:18.074763 tar[1551]: ./loopback Feb 9 22:50:18.079863 systemd[1]: Finished systemd-user-sessions.service. Feb 9 22:50:18.089676 systemd[1]: Started getty@tty1.service. Feb 9 22:50:18.092781 tar[1551]: ./dhcp Feb 9 22:50:18.098523 systemd[1]: Started serial-getty@ttyS1.service. Feb 9 22:50:18.103681 tar[1553]: linux-amd64/LICENSE Feb 9 22:50:18.103733 tar[1553]: linux-amd64/README.md Feb 9 22:50:18.106624 systemd[1]: Reached target getty.target. Feb 9 22:50:18.117502 systemd[1]: Finished prepare-critools.service. Feb 9 22:50:18.125827 systemd[1]: Finished prepare-helm.service. Feb 9 22:50:18.145137 tar[1551]: ./ptp Feb 9 22:50:18.167582 tar[1551]: ./ipvlan Feb 9 22:50:18.189278 tar[1551]: ./bandwidth Feb 9 22:50:18.209418 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Feb 9 22:50:18.237956 extend-filesystems[1526]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Feb 9 22:50:18.237956 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 22:50:18.237956 extend-filesystems[1526]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Feb 9 22:50:18.282628 extend-filesystems[1510]: Resized filesystem in /dev/sdb9 Feb 9 22:50:18.238442 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 22:50:18.238577 systemd[1]: Finished extend-filesystems.service. Feb 9 22:50:18.255442 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 9 22:50:18.322696 systemd-networkd[1409]: bond0: Gained IPv6LL Feb 9 22:50:19.633594 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 22:50:23.137120 login[1623]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 22:50:23.146197 login[1622]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 22:50:23.149829 systemd-logind[1544]: New session 1 of user core. Feb 9 22:50:23.150643 systemd[1]: Created slice user-500.slice. Feb 9 22:50:23.151602 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 22:50:23.153694 systemd-logind[1544]: New session 2 of user core. Feb 9 22:50:23.160684 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 22:50:23.161626 systemd[1]: Starting user@500.service... Feb 9 22:50:23.164513 (systemd)[1642]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 22:50:23.248430 systemd[1642]: Queued start job for default target default.target. Feb 9 22:50:23.248531 systemd[1642]: Reached target paths.target. Feb 9 22:50:23.248541 systemd[1642]: Reached target sockets.target. Feb 9 22:50:23.248550 systemd[1642]: Reached target timers.target. Feb 9 22:50:23.248556 systemd[1642]: Reached target basic.target. Feb 9 22:50:23.248575 systemd[1642]: Reached target default.target. Feb 9 22:50:23.248588 systemd[1642]: Startup finished in 79ms. Feb 9 22:50:23.248646 systemd[1]: Started user@500.service. Feb 9 22:50:23.249144 systemd[1]: Started session-1.scope. Feb 9 22:50:23.249449 systemd[1]: Started session-2.scope. 
Feb 9 22:50:23.635598 coreos-metadata[1504]: Feb 09 22:50:23.635 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 22:50:23.636447 coreos-metadata[1501]: Feb 09 22:50:23.635 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 22:50:24.635748 coreos-metadata[1504]: Feb 09 22:50:24.635 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 22:50:24.636600 coreos-metadata[1501]: Feb 09 22:50:24.635 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 22:50:25.065447 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 9 22:50:25.065611 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 9 22:50:25.709002 coreos-metadata[1504]: Feb 09 22:50:25.708 INFO Fetch successful Feb 9 22:50:25.711105 coreos-metadata[1501]: Feb 09 22:50:25.711 INFO Fetch successful Feb 9 22:50:25.735718 systemd[1]: Finished coreos-metadata.service. Feb 9 22:50:25.736701 systemd[1]: Started packet-phone-home.service. Feb 9 22:50:25.741684 unknown[1501]: wrote ssh authorized keys file for user: core Feb 9 22:50:25.746040 curl[1669]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 22:50:25.746202 curl[1669]: Dload Upload Total Spent Left Speed Feb 9 22:50:25.752048 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys" Feb 9 22:50:25.752270 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 22:50:25.752448 systemd[1]: Reached target multi-user.target. Feb 9 22:50:25.753133 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 22:50:25.756735 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Feb 9 22:50:25.756841 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 22:50:25.756974 systemd[1]: Startup finished in 21.866s (kernel) + 14.906s (userspace) = 36.773s. Feb 9 22:50:25.937061 curl[1669]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 9 22:50:25.939476 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 22:50:26.515612 systemd-resolved[1489]: Clock change detected. Flushing caches. Feb 9 22:50:26.515797 systemd-timesyncd[1491]: Contacted time server 15.204.87.223:123 (0.flatcar.pool.ntp.org). Feb 9 22:50:26.515959 systemd-timesyncd[1491]: Initial clock synchronization to Fri 2024-02-09 22:50:26.515455 UTC. Feb 9 22:50:26.699334 systemd[1]: Created slice system-sshd.slice. Feb 9 22:50:26.700068 systemd[1]: Started sshd@0-147.75.49.127:22-139.178.89.65:58412.service. Feb 9 22:50:26.747673 sshd[1677]: Accepted publickey for core from 139.178.89.65 port 58412 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs Feb 9 22:50:26.750799 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 22:50:26.761527 systemd-logind[1544]: New session 3 of user core. Feb 9 22:50:26.763834 systemd[1]: Started session-3.scope. Feb 9 22:50:26.820544 systemd[1]: Started sshd@1-147.75.49.127:22-139.178.89.65:58414.service. Feb 9 22:50:26.854205 sshd[1682]: Accepted publickey for core from 139.178.89.65 port 58414 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs Feb 9 22:50:26.854909 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 22:50:26.857151 systemd-logind[1544]: New session 4 of user core. Feb 9 22:50:26.857557 systemd[1]: Started session-4.scope. Feb 9 22:50:26.906057 sshd[1682]: pam_unix(sshd:session): session closed for user core Feb 9 22:50:26.907501 systemd[1]: Started sshd@2-147.75.49.127:22-139.178.89.65:58430.service. 
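For anyone post-processing these logs, the `Startup finished` summary above can be parsed mechanically; a minimal sketch (a hypothetical helper, not part of Flatcar or systemd):

```python
import re

def parse_startup(line):
    """Extract kernel, userspace, and total durations (seconds) from a
    systemd 'Startup finished' journal line."""
    m = re.search(
        r"Startup finished in ([\d.]+)s \(kernel\) "
        r"\+ ([\d.]+)s \(userspace\) = ([\d.]+)s",
        line,
    )
    return tuple(map(float, m.groups()))

line = ("systemd[1]: Startup finished in 21.866s (kernel) "
        "+ 14.906s (userspace) = 36.773s.")
k, u, t = parse_startup(line)
# systemd rounds each component from the underlying monotonic timestamps,
# so the printed total can differ from k + u by a millisecond (as here:
# 21.866 + 14.906 = 36.772, printed total 36.773).
```
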
Feb 9 22:50:26.907790 systemd[1]: sshd@1-147.75.49.127:22-139.178.89.65:58414.service: Deactivated successfully. Feb 9 22:50:26.908353 systemd-logind[1544]: Session 4 logged out. Waiting for processes to exit. Feb 9 22:50:26.908368 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 22:50:26.908806 systemd-logind[1544]: Removed session 4. Feb 9 22:50:26.943361 sshd[1688]: Accepted publickey for core from 139.178.89.65 port 58430 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs Feb 9 22:50:26.946602 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 22:50:26.957021 systemd-logind[1544]: New session 5 of user core. Feb 9 22:50:26.959385 systemd[1]: Started session-5.scope. Feb 9 22:50:27.031303 sshd[1688]: pam_unix(sshd:session): session closed for user core Feb 9 22:50:27.037207 systemd[1]: Started sshd@3-147.75.49.127:22-139.178.89.65:58440.service. Feb 9 22:50:27.038741 systemd[1]: sshd@2-147.75.49.127:22-139.178.89.65:58430.service: Deactivated successfully. Feb 9 22:50:27.041066 systemd-logind[1544]: Session 5 logged out. Waiting for processes to exit. Feb 9 22:50:27.041271 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 22:50:27.043830 systemd-logind[1544]: Removed session 5. Feb 9 22:50:27.103412 sshd[1694]: Accepted publickey for core from 139.178.89.65 port 58440 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs Feb 9 22:50:27.105646 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 22:50:27.113104 systemd-logind[1544]: New session 6 of user core. Feb 9 22:50:27.114834 systemd[1]: Started session-6.scope. Feb 9 22:50:27.184744 sshd[1694]: pam_unix(sshd:session): session closed for user core Feb 9 22:50:27.186072 systemd[1]: Started sshd@4-147.75.49.127:22-139.178.89.65:58446.service. Feb 9 22:50:27.186305 systemd[1]: sshd@3-147.75.49.127:22-139.178.89.65:58440.service: Deactivated successfully. 
Feb 9 22:50:27.186805 systemd-logind[1544]: Session 6 logged out. Waiting for processes to exit. Feb 9 22:50:27.186846 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 22:50:27.187397 systemd-logind[1544]: Removed session 6. Feb 9 22:50:27.221265 sshd[1702]: Accepted publickey for core from 139.178.89.65 port 58446 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs Feb 9 22:50:27.222252 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 22:50:27.225916 systemd-logind[1544]: New session 7 of user core. Feb 9 22:50:27.226732 systemd[1]: Started session-7.scope. Feb 9 22:50:27.330309 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 22:50:27.330962 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 22:50:28.528806 systemd[1]: Started sshd@5-147.75.49.127:22-124.220.165.94:35736.service. Feb 9 22:50:29.361777 sshd[1712]: Invalid user topapp from 124.220.165.94 port 35736 Feb 9 22:50:29.363094 sshd[1712]: pam_faillock(sshd:auth): User unknown Feb 9 22:50:29.363318 sshd[1712]: pam_unix(sshd:auth): check pass; user unknown Feb 9 22:50:29.363337 sshd[1712]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94 Feb 9 22:50:29.363550 sshd[1712]: pam_faillock(sshd:auth): User unknown Feb 9 22:50:31.371146 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 22:50:31.375650 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 22:50:31.375844 systemd[1]: Reached target network-online.target. Feb 9 22:50:31.376617 systemd[1]: Starting docker.service... 
Feb 9 22:50:31.395340 env[1731]: time="2024-02-09T22:50:31.395313397Z" level=info msg="Starting up" Feb 9 22:50:31.395954 env[1731]: time="2024-02-09T22:50:31.395943535Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 22:50:31.395954 env[1731]: time="2024-02-09T22:50:31.395953168Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 22:50:31.396013 env[1731]: time="2024-02-09T22:50:31.395967514Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 22:50:31.396013 env[1731]: time="2024-02-09T22:50:31.395974112Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 22:50:31.396775 env[1731]: time="2024-02-09T22:50:31.396764410Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 22:50:31.396775 env[1731]: time="2024-02-09T22:50:31.396773306Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 22:50:31.396821 env[1731]: time="2024-02-09T22:50:31.396782364Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 22:50:31.396821 env[1731]: time="2024-02-09T22:50:31.396787249Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 22:50:31.415669 sshd[1712]: Failed password for invalid user topapp from 124.220.165.94 port 35736 ssh2 Feb 9 22:50:31.800640 env[1731]: time="2024-02-09T22:50:31.800548928Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 22:50:31.800640 env[1731]: time="2024-02-09T22:50:31.800563456Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 22:50:31.800757 env[1731]: time="2024-02-09T22:50:31.800653421Z" level=info msg="Loading containers: start." 
Feb 9 22:50:31.852212 sshd[1712]: Received disconnect from 124.220.165.94 port 35736:11: Bye Bye [preauth] Feb 9 22:50:31.852212 sshd[1712]: Disconnected from invalid user topapp 124.220.165.94 port 35736 [preauth] Feb 9 22:50:31.852960 systemd[1]: sshd@5-147.75.49.127:22-124.220.165.94:35736.service: Deactivated successfully. Feb 9 22:50:31.931897 kernel: Initializing XFRM netlink socket Feb 9 22:50:32.015133 env[1731]: time="2024-02-09T22:50:32.015086912Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 22:50:32.090245 systemd-networkd[1409]: docker0: Link UP Feb 9 22:50:32.101344 env[1731]: time="2024-02-09T22:50:32.101260665Z" level=info msg="Loading containers: done." Feb 9 22:50:32.117341 env[1731]: time="2024-02-09T22:50:32.117294638Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 22:50:32.117427 env[1731]: time="2024-02-09T22:50:32.117398176Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 22:50:32.117462 env[1731]: time="2024-02-09T22:50:32.117453865Z" level=info msg="Daemon has completed initialization" Feb 9 22:50:32.118239 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1682383577-merged.mount: Deactivated successfully. Feb 9 22:50:32.123689 systemd[1]: Started docker.service. Feb 9 22:50:32.127124 env[1731]: time="2024-02-09T22:50:32.127063248Z" level=info msg="API listen on /run/docker.sock" Feb 9 22:50:32.147823 systemd[1]: Reloading. 
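The Docker daemon's note above about the default bridge names the `--bip` option itself; the equivalent `daemon.json` form would be something like the following (the 10.200.0.1/24 subnet is an arbitrary illustration, chosen only to avoid the default 172.17.0.0/16):

```json
{
  "bip": "10.200.0.1/24"
}
```

Written to /etc/docker/daemon.json and followed by a daemon restart, this moves docker0 off the default range, which matters when 172.17.0.0/16 collides with an existing network.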
Feb 9 22:50:32.202756 /usr/lib/systemd/system-generators/torcx-generator[1886]: time="2024-02-09T22:50:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 22:50:32.202781 /usr/lib/systemd/system-generators/torcx-generator[1886]: time="2024-02-09T22:50:32Z" level=info msg="torcx already run" Feb 9 22:50:32.266284 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 22:50:32.266294 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 22:50:32.282485 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 22:50:32.331813 systemd[1]: Started kubelet.service. Feb 9 22:50:32.353444 kubelet[1950]: E0209 22:50:32.353341 1950 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 22:50:32.354595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 22:50:32.354685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 22:50:33.168343 env[1558]: time="2024-02-09T22:50:33.168217347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 22:50:34.036080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291955902.mount: Deactivated successfully. 
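The kubelet failure above (`the container runtime endpoint address was not specified or empty`) is the error the flag message describes: kubelet 1.26 requires `--container-runtime-endpoint`. A sketch of a unit drop-in that supplies it, assuming the conventional containerd socket path and that this unit honors a `KUBELET_EXTRA_ARGS` environment variable (both are assumptions, not read from this host):

```ini
# /etc/systemd/system/kubelet.service.d/10-container-runtime.conf (sketch)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

After `systemctl daemon-reload`, the scheduled restart seen later in the log would then start kubelet against the containerd socket instead of exiting with status 1.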
Feb 9 22:50:35.708813 env[1558]: time="2024-02-09T22:50:35.708756496Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:35.709600 env[1558]: time="2024-02-09T22:50:35.709540148Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:35.710552 env[1558]: time="2024-02-09T22:50:35.710509129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:35.711717 env[1558]: time="2024-02-09T22:50:35.711675248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:35.712495 env[1558]: time="2024-02-09T22:50:35.712216276Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 22:50:35.720619 env[1558]: time="2024-02-09T22:50:35.720585070Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 22:50:37.805637 env[1558]: time="2024-02-09T22:50:37.805566909Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:37.806269 env[1558]: time="2024-02-09T22:50:37.806228327Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 22:50:37.807346 env[1558]: time="2024-02-09T22:50:37.807306242Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:37.808308 env[1558]: time="2024-02-09T22:50:37.808268986Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:37.808775 env[1558]: time="2024-02-09T22:50:37.808722942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 22:50:37.814757 env[1558]: time="2024-02-09T22:50:37.814742314Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 22:50:39.088374 env[1558]: time="2024-02-09T22:50:39.088319017Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:39.088945 env[1558]: time="2024-02-09T22:50:39.088901389Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:39.090016 env[1558]: time="2024-02-09T22:50:39.089972088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:39.091281 env[1558]: time="2024-02-09T22:50:39.091218199Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:39.091688 env[1558]: time="2024-02-09T22:50:39.091630730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 22:50:39.097304 env[1558]: time="2024-02-09T22:50:39.097286804Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 22:50:40.012043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231889115.mount: Deactivated successfully. Feb 9 22:50:40.326482 env[1558]: time="2024-02-09T22:50:40.326404883Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:40.327048 env[1558]: time="2024-02-09T22:50:40.327008834Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:40.328142 env[1558]: time="2024-02-09T22:50:40.328101280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:40.328754 env[1558]: time="2024-02-09T22:50:40.328714544Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:40.329096 env[1558]: time="2024-02-09T22:50:40.329037355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference 
\"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 22:50:40.335022 env[1558]: time="2024-02-09T22:50:40.334968682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 22:50:40.859457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441437406.mount: Deactivated successfully. Feb 9 22:50:40.861024 env[1558]: time="2024-02-09T22:50:40.860965090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:40.861643 env[1558]: time="2024-02-09T22:50:40.861631931Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:40.862414 env[1558]: time="2024-02-09T22:50:40.862400157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:40.863507 env[1558]: time="2024-02-09T22:50:40.863453666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:40.863649 env[1558]: time="2024-02-09T22:50:40.863636612Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 22:50:40.870052 env[1558]: time="2024-02-09T22:50:40.870018449Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 22:50:41.538039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284323451.mount: Deactivated successfully. Feb 9 22:50:42.602650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 9 22:50:42.602780 systemd[1]: Stopped kubelet.service. Feb 9 22:50:42.603680 systemd[1]: Started kubelet.service. Feb 9 22:50:42.629217 kubelet[2043]: E0209 22:50:42.629188 2043 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 22:50:42.631532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 22:50:42.631626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 22:50:44.443742 env[1558]: time="2024-02-09T22:50:44.443684526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:44.444284 env[1558]: time="2024-02-09T22:50:44.444250244Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:44.445079 env[1558]: time="2024-02-09T22:50:44.445033606Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:44.446181 env[1558]: time="2024-02-09T22:50:44.446148484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:44.446442 env[1558]: time="2024-02-09T22:50:44.446390567Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 22:50:44.453376 env[1558]: time="2024-02-09T22:50:44.453328335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 22:50:45.020777 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount840495585.mount: Deactivated successfully. Feb 9 22:50:45.475226 env[1558]: time="2024-02-09T22:50:45.475175609Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:45.475994 env[1558]: time="2024-02-09T22:50:45.475959589Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:45.476718 env[1558]: time="2024-02-09T22:50:45.476671036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:45.477868 env[1558]: time="2024-02-09T22:50:45.477827115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:45.478046 env[1558]: time="2024-02-09T22:50:45.478003256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 22:50:46.970768 systemd[1]: Stopped kubelet.service. Feb 9 22:50:46.979906 systemd[1]: Reloading. 
Feb 9 22:50:47.006864 /usr/lib/systemd/system-generators/torcx-generator[2200]: time="2024-02-09T22:50:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 22:50:47.006885 /usr/lib/systemd/system-generators/torcx-generator[2200]: time="2024-02-09T22:50:47Z" level=info msg="torcx already run" Feb 9 22:50:47.068162 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 22:50:47.068171 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 22:50:47.081975 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 22:50:47.133703 systemd[1]: Started kubelet.service. Feb 9 22:50:47.156046 kubelet[2264]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 22:50:47.156046 kubelet[2264]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 22:50:47.156344 kubelet[2264]: I0209 22:50:47.156078 2264 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 22:50:47.156960 kubelet[2264]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 22:50:47.156960 kubelet[2264]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 22:50:47.366522 kubelet[2264]: I0209 22:50:47.366484 2264 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 22:50:47.366522 kubelet[2264]: I0209 22:50:47.366495 2264 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 22:50:47.366662 kubelet[2264]: I0209 22:50:47.366631 2264 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 22:50:47.368023 kubelet[2264]: I0209 22:50:47.367983 2264 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 22:50:47.368389 kubelet[2264]: E0209 22:50:47.368355 2264 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.49.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.395579 kubelet[2264]: I0209 22:50:47.395556 2264 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 22:50:47.396025 kubelet[2264]: I0209 22:50:47.396006 2264 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 22:50:47.396116 kubelet[2264]: I0209 22:50:47.396105 2264 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 22:50:47.396276 kubelet[2264]: I0209 22:50:47.396137 2264 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 22:50:47.396276 kubelet[2264]: I0209 22:50:47.396156 2264 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 22:50:47.396276 kubelet[2264]: I0209 22:50:47.396273 2264 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 22:50:47.399654 kubelet[2264]: I0209 22:50:47.399603 2264 kubelet.go:398] "Attempting to sync node with API server" Feb 9 22:50:47.399654 kubelet[2264]: I0209 22:50:47.399630 2264 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 22:50:47.399654 kubelet[2264]: I0209 22:50:47.399657 2264 kubelet.go:297] "Adding apiserver pod source" Feb 9 22:50:47.399927 kubelet[2264]: I0209 22:50:47.399672 2264 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 22:50:47.400367 kubelet[2264]: I0209 22:50:47.400339 2264 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 22:50:47.400482 kubelet[2264]: W0209 22:50:47.400356 2264 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://147.75.49.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e9037c933d&limit=500&resourceVersion=0": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.400482 kubelet[2264]: W0209 22:50:47.400366 2264 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://147.75.49.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.400482 kubelet[2264]: E0209 22:50:47.400431 2264 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.49.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e9037c933d&limit=500&resourceVersion=0": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.400482 kubelet[2264]: E0209 22:50:47.400458 2264 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.49.127:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.400739 kubelet[2264]: W0209 22:50:47.400655 2264 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 22:50:47.401199 kubelet[2264]: I0209 22:50:47.401176 2264 server.go:1186] "Started kubelet" Feb 9 22:50:47.401400 kubelet[2264]: I0209 22:50:47.401369 2264 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 22:50:47.401633 kubelet[2264]: E0209 22:50:47.401610 2264 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 22:50:47.401723 kubelet[2264]: E0209 22:50:47.401642 2264 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 22:50:47.401786 kubelet[2264]: E0209 22:50:47.401585 2264 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386db8f07dc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 401146332, time.Local), 
LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 401146332, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://147.75.49.127:6443/api/v1/namespaces/default/events": dial tcp 147.75.49.127:6443: connect: connection refused'(may retry after sleeping) Feb 9 22:50:47.403187 kubelet[2264]: I0209 22:50:47.403129 2264 server.go:451] "Adding debug handlers to kubelet server" Feb 9 22:50:47.412403 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 22:50:47.412435 kubelet[2264]: I0209 22:50:47.412422 2264 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 22:50:47.412924 kubelet[2264]: I0209 22:50:47.412884 2264 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 22:50:47.412966 kubelet[2264]: I0209 22:50:47.412940 2264 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 22:50:47.413281 kubelet[2264]: E0209 22:50:47.413236 2264 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://147.75.49.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e9037c933d?timeout=10s": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.413417 kubelet[2264]: W0209 22:50:47.413390 2264 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://147.75.49.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.413467 kubelet[2264]: E0209 22:50:47.413427 2264 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://147.75.49.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.432119 kubelet[2264]: I0209 22:50:47.432078 2264 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 22:50:47.442617 kubelet[2264]: I0209 22:50:47.442607 2264 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 22:50:47.442617 kubelet[2264]: I0209 22:50:47.442619 2264 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 22:50:47.442705 kubelet[2264]: I0209 22:50:47.442632 2264 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 22:50:47.442705 kubelet[2264]: E0209 22:50:47.442669 2264 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 22:50:47.443097 kubelet[2264]: W0209 22:50:47.443073 2264 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.49.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.443137 kubelet[2264]: E0209 22:50:47.443102 2264 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.49.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.497879 kubelet[2264]: I0209 22:50:47.497850 2264 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 22:50:47.497879 kubelet[2264]: I0209 22:50:47.497878 2264 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 22:50:47.498049 kubelet[2264]: I0209 22:50:47.497894 2264 state_mem.go:36] "Initialized new in-memory state store" Feb 9 22:50:47.499116 kubelet[2264]: I0209 22:50:47.499101 2264 
policy_none.go:49] "None policy: Start" Feb 9 22:50:47.499713 kubelet[2264]: I0209 22:50:47.499675 2264 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 22:50:47.499713 kubelet[2264]: I0209 22:50:47.499692 2264 state_mem.go:35] "Initializing new in-memory state store" Feb 9 22:50:47.504994 kubelet[2264]: I0209 22:50:47.504970 2264 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 22:50:47.505549 kubelet[2264]: I0209 22:50:47.505517 2264 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 22:50:47.506145 kubelet[2264]: E0209 22:50:47.506117 2264 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:47.515521 kubelet[2264]: I0209 22:50:47.515492 2264 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.516052 kubelet[2264]: E0209 22:50:47.516025 2264 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.49.127:6443/api/v1/nodes\": dial tcp 147.75.49.127:6443: connect: connection refused" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.543724 kubelet[2264]: I0209 22:50:47.543626 2264 topology_manager.go:210] "Topology Admit Handler" Feb 9 22:50:47.547465 kubelet[2264]: I0209 22:50:47.547391 2264 topology_manager.go:210] "Topology Admit Handler" Feb 9 22:50:47.550964 kubelet[2264]: I0209 22:50:47.550917 2264 topology_manager.go:210] "Topology Admit Handler" Feb 9 22:50:47.551755 kubelet[2264]: I0209 22:50:47.551682 2264 status_manager.go:698] "Failed to get status for pod" podUID=9ba39bdbc16aa68d975e26185788ecde pod="kube-system/kube-apiserver-ci-3510.3.2-a-e9037c933d" err="Get \"https://147.75.49.127:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-e9037c933d\": dial tcp 147.75.49.127:6443: connect: connection refused" Feb 9 22:50:47.554903 
kubelet[2264]: I0209 22:50:47.554818 2264 status_manager.go:698] "Failed to get status for pod" podUID=9fb0af17ba6264b9f338fd6b7085db1f pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" err="Get \"https://147.75.49.127:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-e9037c933d\": dial tcp 147.75.49.127:6443: connect: connection refused" Feb 9 22:50:47.558281 kubelet[2264]: I0209 22:50:47.558192 2264 status_manager.go:698] "Failed to get status for pod" podUID=8e0f32bd021950d709464bfdd339e3a9 pod="kube-system/kube-scheduler-ci-3510.3.2-a-e9037c933d" err="Get \"https://147.75.49.127:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-e9037c933d\": dial tcp 147.75.49.127:6443: connect: connection refused" Feb 9 22:50:47.614333 kubelet[2264]: E0209 22:50:47.614256 2264 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://147.75.49.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e9037c933d?timeout=10s": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:47.713998 kubelet[2264]: I0209 22:50:47.713804 2264 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.713998 kubelet[2264]: I0209 22:50:47.713927 2264 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.713998 
kubelet[2264]: I0209 22:50:47.713993 2264 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.714446 kubelet[2264]: I0209 22:50:47.714058 2264 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.714446 kubelet[2264]: I0209 22:50:47.714152 2264 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ba39bdbc16aa68d975e26185788ecde-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e9037c933d\" (UID: \"9ba39bdbc16aa68d975e26185788ecde\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.714446 kubelet[2264]: I0209 22:50:47.714231 2264 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ba39bdbc16aa68d975e26185788ecde-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-e9037c933d\" (UID: \"9ba39bdbc16aa68d975e26185788ecde\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.714446 kubelet[2264]: I0209 22:50:47.714294 2264 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.714446 kubelet[2264]: I0209 22:50:47.714351 2264 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e0f32bd021950d709464bfdd339e3a9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-e9037c933d\" (UID: \"8e0f32bd021950d709464bfdd339e3a9\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.714987 kubelet[2264]: I0209 22:50:47.714506 2264 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ba39bdbc16aa68d975e26185788ecde-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e9037c933d\" (UID: \"9ba39bdbc16aa68d975e26185788ecde\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.719749 kubelet[2264]: I0209 22:50:47.719741 2264 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.719993 kubelet[2264]: E0209 22:50:47.719950 2264 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.49.127:6443/api/v1/nodes\": dial tcp 147.75.49.127:6443: connect: connection refused" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:47.857648 env[1558]: time="2024-02-09T22:50:47.857517995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-e9037c933d,Uid:9ba39bdbc16aa68d975e26185788ecde,Namespace:kube-system,Attempt:0,}" Feb 9 22:50:47.862743 env[1558]: time="2024-02-09T22:50:47.862666638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-e9037c933d,Uid:9fb0af17ba6264b9f338fd6b7085db1f,Namespace:kube-system,Attempt:0,}" Feb 9 22:50:47.864828 env[1558]: time="2024-02-09T22:50:47.864756242Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-e9037c933d,Uid:8e0f32bd021950d709464bfdd339e3a9,Namespace:kube-system,Attempt:0,}" Feb 9 22:50:48.016266 kubelet[2264]: E0209 22:50:48.016042 2264 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://147.75.49.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e9037c933d?timeout=10s": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:48.124029 kubelet[2264]: I0209 22:50:48.123942 2264 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:48.124688 kubelet[2264]: E0209 22:50:48.124609 2264 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.49.127:6443/api/v1/nodes\": dial tcp 147.75.49.127:6443: connect: connection refused" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:48.324224 kubelet[2264]: W0209 22:50:48.323984 2264 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://147.75.49.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e9037c933d&limit=500&resourceVersion=0": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:48.324224 kubelet[2264]: E0209 22:50:48.324106 2264 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.49.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e9037c933d&limit=500&resourceVersion=0": dial tcp 147.75.49.127:6443: connect: connection refused Feb 9 22:50:48.387272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2874779447.mount: Deactivated successfully. 
Feb 9 22:50:48.388472 env[1558]: time="2024-02-09T22:50:48.388421323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.389987 env[1558]: time="2024-02-09T22:50:48.389931705Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.390609 env[1558]: time="2024-02-09T22:50:48.390575303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.391117 env[1558]: time="2024-02-09T22:50:48.391070115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.391502 env[1558]: time="2024-02-09T22:50:48.391465833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.393257 env[1558]: time="2024-02-09T22:50:48.393213283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.394993 env[1558]: time="2024-02-09T22:50:48.394932606Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.396212 env[1558]: time="2024-02-09T22:50:48.396160554Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.397052 env[1558]: time="2024-02-09T22:50:48.397013335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.397461 env[1558]: time="2024-02-09T22:50:48.397420356Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.397862 env[1558]: time="2024-02-09T22:50:48.397844291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.398837 env[1558]: time="2024-02-09T22:50:48.398823892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:50:48.401548 env[1558]: time="2024-02-09T22:50:48.401429438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 22:50:48.401548 env[1558]: time="2024-02-09T22:50:48.401465407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 22:50:48.401548 env[1558]: time="2024-02-09T22:50:48.401471963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 22:50:48.401661 env[1558]: time="2024-02-09T22:50:48.401603714Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3ac085fc13539e0e4c34051076c5d87cb88e2898a760d68fff8af0362857d80 pid=2350 runtime=io.containerd.runc.v2 Feb 9 22:50:48.404792 env[1558]: time="2024-02-09T22:50:48.404753693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 22:50:48.404792 env[1558]: time="2024-02-09T22:50:48.404776967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 22:50:48.404792 env[1558]: time="2024-02-09T22:50:48.404783835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 22:50:48.404966 env[1558]: time="2024-02-09T22:50:48.404850580Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d58807dba34c2ff27b013c277378b0005db4e301eddda0c438c332b8e41322fa pid=2378 runtime=io.containerd.runc.v2 Feb 9 22:50:48.405197 env[1558]: time="2024-02-09T22:50:48.405176215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 22:50:48.405197 env[1558]: time="2024-02-09T22:50:48.405192813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 22:50:48.405260 env[1558]: time="2024-02-09T22:50:48.405199897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 22:50:48.405289 env[1558]: time="2024-02-09T22:50:48.405263371Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89894c2df3161d6ed64e3f459b83194324ae427485c9ffbcf9680a58252ec143 pid=2386 runtime=io.containerd.runc.v2 Feb 9 22:50:48.444584 env[1558]: time="2024-02-09T22:50:48.444556946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-e9037c933d,Uid:9ba39bdbc16aa68d975e26185788ecde,Namespace:kube-system,Attempt:0,} returns sandbox id \"d58807dba34c2ff27b013c277378b0005db4e301eddda0c438c332b8e41322fa\"" Feb 9 22:50:48.444907 env[1558]: time="2024-02-09T22:50:48.444888962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-e9037c933d,Uid:8e0f32bd021950d709464bfdd339e3a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"89894c2df3161d6ed64e3f459b83194324ae427485c9ffbcf9680a58252ec143\"" Feb 9 22:50:48.446083 env[1558]: time="2024-02-09T22:50:48.446071943Z" level=info msg="CreateContainer within sandbox \"89894c2df3161d6ed64e3f459b83194324ae427485c9ffbcf9680a58252ec143\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 22:50:48.446114 env[1558]: time="2024-02-09T22:50:48.446088734Z" level=info msg="CreateContainer within sandbox \"d58807dba34c2ff27b013c277378b0005db4e301eddda0c438c332b8e41322fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 22:50:48.452090 env[1558]: time="2024-02-09T22:50:48.452047402Z" level=info msg="CreateContainer within sandbox \"d58807dba34c2ff27b013c277378b0005db4e301eddda0c438c332b8e41322fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1ea2985dbc722ad912043f977c1b7afcfc47f3ae3f2407edbb2cfc329e9f204\"" Feb 9 22:50:48.452184 env[1558]: time="2024-02-09T22:50:48.452171148Z" level=info msg="CreateContainer within sandbox 
\"89894c2df3161d6ed64e3f459b83194324ae427485c9ffbcf9680a58252ec143\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1c65a3323f9befa0af1289ec82bb9a8f7caaa38a05c2a5ebdbdca57b8599b2d5\"" Feb 9 22:50:48.452310 env[1558]: time="2024-02-09T22:50:48.452300406Z" level=info msg="StartContainer for \"1c65a3323f9befa0af1289ec82bb9a8f7caaa38a05c2a5ebdbdca57b8599b2d5\"" Feb 9 22:50:48.452310 env[1558]: time="2024-02-09T22:50:48.452300693Z" level=info msg="StartContainer for \"b1ea2985dbc722ad912043f977c1b7afcfc47f3ae3f2407edbb2cfc329e9f204\"" Feb 9 22:50:48.453079 env[1558]: time="2024-02-09T22:50:48.453060092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-e9037c933d,Uid:9fb0af17ba6264b9f338fd6b7085db1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3ac085fc13539e0e4c34051076c5d87cb88e2898a760d68fff8af0362857d80\"" Feb 9 22:50:48.454123 env[1558]: time="2024-02-09T22:50:48.454108537Z" level=info msg="CreateContainer within sandbox \"e3ac085fc13539e0e4c34051076c5d87cb88e2898a760d68fff8af0362857d80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 22:50:48.458803 env[1558]: time="2024-02-09T22:50:48.458778433Z" level=info msg="CreateContainer within sandbox \"e3ac085fc13539e0e4c34051076c5d87cb88e2898a760d68fff8af0362857d80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8ca2c52191b84127afae2f045a2bc9d004d9ab2e083cc6e655ca7c6880fa2b0\"" Feb 9 22:50:48.459003 env[1558]: time="2024-02-09T22:50:48.458989210Z" level=info msg="StartContainer for \"e8ca2c52191b84127afae2f045a2bc9d004d9ab2e083cc6e655ca7c6880fa2b0\"" Feb 9 22:50:48.485064 env[1558]: time="2024-02-09T22:50:48.485042224Z" level=info msg="StartContainer for \"1c65a3323f9befa0af1289ec82bb9a8f7caaa38a05c2a5ebdbdca57b8599b2d5\" returns successfully" Feb 9 22:50:48.497875 env[1558]: time="2024-02-09T22:50:48.497844532Z" level=info msg="StartContainer for 
\"b1ea2985dbc722ad912043f977c1b7afcfc47f3ae3f2407edbb2cfc329e9f204\" returns successfully" Feb 9 22:50:48.503799 env[1558]: time="2024-02-09T22:50:48.503767507Z" level=info msg="StartContainer for \"e8ca2c52191b84127afae2f045a2bc9d004d9ab2e083cc6e655ca7c6880fa2b0\" returns successfully" Feb 9 22:50:48.925971 kubelet[2264]: I0209 22:50:48.925943 2264 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:49.352295 kubelet[2264]: E0209 22:50:49.352210 2264 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-e9037c933d\" not found" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:49.451967 kubelet[2264]: I0209 22:50:49.451846 2264 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:49.476076 kubelet[2264]: E0209 22:50:49.476016 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:49.576381 kubelet[2264]: E0209 22:50:49.576282 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:49.676645 kubelet[2264]: E0209 22:50:49.676544 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:49.777080 kubelet[2264]: E0209 22:50:49.776979 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:49.878115 kubelet[2264]: E0209 22:50:49.878011 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:49.978389 kubelet[2264]: E0209 22:50:49.978153 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.079126 kubelet[2264]: E0209 
22:50:50.079022 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.142425 kubelet[2264]: E0209 22:50:50.142227 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386db8f07dc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 401146332, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 401146332, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.179560 kubelet[2264]: E0209 22:50:50.179446 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.199490 kubelet[2264]: E0209 22:50:50.199249 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386db966110", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 401627920, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 401627920, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.256412 kubelet[2264]: E0209 22:50:50.256134 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386e14ac7bc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-e9037c933d status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497336764, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497336764, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.280428 kubelet[2264]: E0209 22:50:50.280301 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.313736 kubelet[2264]: E0209 22:50:50.313526 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386e14ae214", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-e9037c933d status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497343508, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497343508, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.370732 kubelet[2264]: E0209 22:50:50.370480 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386e14af064", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-e9037c933d status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497347172, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497347172, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.380573 kubelet[2264]: E0209 22:50:50.380517 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.426712 kubelet[2264]: E0209 22:50:50.426569 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386e1cfdf16", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 506059030, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 506059030, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.481007 kubelet[2264]: E0209 22:50:50.480923 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.487202 kubelet[2264]: E0209 22:50:50.487014 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386e14ac7bc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-e9037c933d status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497336764, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 515432553, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.548553 kubelet[2264]: E0209 22:50:50.548224 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386e14ae214", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-e9037c933d status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497343508, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 515448073, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.581557 kubelet[2264]: E0209 22:50:50.581444 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.608917 kubelet[2264]: E0209 22:50:50.608693 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386e14af064", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-e9037c933d status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497347172, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 515453841, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.682596 kubelet[2264]: E0209 22:50:50.682533 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.783481 kubelet[2264]: E0209 22:50:50.783429 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.883637 kubelet[2264]: E0209 22:50:50.883586 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:50.941759 kubelet[2264]: E0209 22:50:50.941567 2264 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e9037c933d.17b25386e14ac7bc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e9037c933d", UID:"ci-3510.3.2-a-e9037c933d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-e9037c933d status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e9037c933d"}, FirstTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 497336764, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 22, 50, 47, 547248286, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 22:50:50.984740 kubelet[2264]: E0209 22:50:50.984689 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:51.085815 kubelet[2264]: E0209 22:50:51.085760 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:51.195157 kubelet[2264]: E0209 22:50:51.194762 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:51.295654 kubelet[2264]: E0209 22:50:51.295543 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:51.396164 kubelet[2264]: E0209 22:50:51.396065 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:51.497140 kubelet[2264]: E0209 22:50:51.496927 2264 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e9037c933d\" not found" Feb 9 22:50:52.241050 systemd[1]: Reloading. Feb 9 22:50:52.279523 /usr/lib/systemd/system-generators/torcx-generator[2631]: time="2024-02-09T22:50:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 22:50:52.279543 /usr/lib/systemd/system-generators/torcx-generator[2631]: time="2024-02-09T22:50:52Z" level=info msg="torcx already run" Feb 9 22:50:52.345077 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 22:50:52.345087 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 9 22:50:52.359700 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 22:50:52.401781 kubelet[2264]: I0209 22:50:52.401736 2264 apiserver.go:52] "Watching apiserver" Feb 9 22:50:52.413081 kubelet[2264]: I0209 22:50:52.413052 2264 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 22:50:52.419106 systemd[1]: Stopping kubelet.service... Feb 9 22:50:52.419214 kubelet[2264]: I0209 22:50:52.419104 2264 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 22:50:52.440286 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 22:50:52.440462 systemd[1]: Stopped kubelet.service. Feb 9 22:50:52.441426 systemd[1]: Started kubelet.service. Feb 9 22:50:52.466715 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 22:50:52.466715 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 22:50:52.466715 kubelet[2697]: I0209 22:50:52.466707 2697 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 22:50:52.467714 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 22:50:52.467714 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 22:50:52.469301 kubelet[2697]: I0209 22:50:52.469264 2697 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 22:50:52.469301 kubelet[2697]: I0209 22:50:52.469274 2697 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 22:50:52.469428 kubelet[2697]: I0209 22:50:52.469394 2697 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 22:50:52.470142 kubelet[2697]: I0209 22:50:52.470116 2697 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 22:50:52.470514 kubelet[2697]: I0209 22:50:52.470478 2697 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 22:50:52.488700 kubelet[2697]: I0209 22:50:52.488656 2697 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 22:50:52.488935 kubelet[2697]: I0209 22:50:52.488878 2697 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 22:50:52.488935 kubelet[2697]: I0209 22:50:52.488920 2697 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan 
Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 22:50:52.488935 kubelet[2697]: I0209 22:50:52.488931 2697 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 22:50:52.488935 kubelet[2697]: I0209 22:50:52.488938 2697 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 22:50:52.489061 kubelet[2697]: I0209 22:50:52.488959 2697 state_mem.go:36] "Initialized new in-memory state store" Feb 9 22:50:52.491229 kubelet[2697]: I0209 22:50:52.491194 2697 kubelet.go:398] "Attempting to sync node with API server" Feb 9 22:50:52.491229 kubelet[2697]: I0209 22:50:52.491205 2697 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 22:50:52.491229 kubelet[2697]: I0209 22:50:52.491223 2697 kubelet.go:297] "Adding apiserver pod source" Feb 9 22:50:52.491309 kubelet[2697]: I0209 22:50:52.491234 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 22:50:52.491507 kubelet[2697]: I0209 22:50:52.491493 2697 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 22:50:52.491741 kubelet[2697]: I0209 22:50:52.491731 2697 server.go:1186] "Started kubelet" Feb 9 22:50:52.491791 kubelet[2697]: I0209 22:50:52.491768 2697 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 22:50:52.492002 kubelet[2697]: E0209 22:50:52.491992 2697 
cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 22:50:52.492048 kubelet[2697]: E0209 22:50:52.492008 2697 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 22:50:52.492440 kubelet[2697]: I0209 22:50:52.492433 2697 server.go:451] "Adding debug handlers to kubelet server" Feb 9 22:50:52.492557 kubelet[2697]: I0209 22:50:52.492549 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 22:50:52.492593 kubelet[2697]: I0209 22:50:52.492578 2697 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 22:50:52.492622 kubelet[2697]: I0209 22:50:52.492612 2697 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 22:50:52.503824 kubelet[2697]: I0209 22:50:52.503811 2697 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 22:50:52.510261 kubelet[2697]: I0209 22:50:52.510246 2697 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 22:50:52.510261 kubelet[2697]: I0209 22:50:52.510260 2697 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 22:50:52.510386 kubelet[2697]: I0209 22:50:52.510270 2697 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 22:50:52.510386 kubelet[2697]: E0209 22:50:52.510302 2697 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 22:50:52.524455 kubelet[2697]: I0209 22:50:52.524403 2697 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 22:50:52.524455 kubelet[2697]: I0209 22:50:52.524413 2697 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 22:50:52.524455 kubelet[2697]: I0209 22:50:52.524422 2697 state_mem.go:36] "Initialized new in-memory state store" Feb 9 22:50:52.524562 kubelet[2697]: I0209 22:50:52.524512 2697 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 22:50:52.524562 kubelet[2697]: I0209 22:50:52.524519 2697 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 22:50:52.524562 kubelet[2697]: I0209 22:50:52.524523 2697 policy_none.go:49] "None policy: Start" Feb 9 22:50:52.524787 kubelet[2697]: I0209 22:50:52.524777 2697 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 22:50:52.524831 kubelet[2697]: I0209 22:50:52.524791 2697 state_mem.go:35] "Initializing new in-memory state store" Feb 9 22:50:52.524899 kubelet[2697]: I0209 22:50:52.524891 2697 state_mem.go:75] "Updated machine memory state" Feb 9 22:50:52.525552 kubelet[2697]: I0209 22:50:52.525543 2697 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 22:50:52.525691 kubelet[2697]: I0209 22:50:52.525682 2697 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 22:50:52.597192 kubelet[2697]: I0209 22:50:52.597121 2697 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.607936 kubelet[2697]: I0209 22:50:52.607837 2697 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.608134 kubelet[2697]: I0209 22:50:52.608014 2697 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.610778 kubelet[2697]: I0209 22:50:52.610721 2697 topology_manager.go:210] "Topology Admit Handler" Feb 9 22:50:52.611066 kubelet[2697]: I0209 22:50:52.610951 2697 topology_manager.go:210] "Topology Admit Handler" Feb 9 22:50:52.611261 kubelet[2697]: I0209 22:50:52.611094 2697 topology_manager.go:210] "Topology Admit Handler" Feb 9 22:50:52.653246 sudo[2760]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 22:50:52.653701 sudo[2760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 22:50:52.794212 kubelet[2697]: I0209 22:50:52.794122 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.794212 kubelet[2697]: I0209 22:50:52.794148 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e0f32bd021950d709464bfdd339e3a9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-e9037c933d\" (UID: \"8e0f32bd021950d709464bfdd339e3a9\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.794212 kubelet[2697]: I0209 22:50:52.794180 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/9ba39bdbc16aa68d975e26185788ecde-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e9037c933d\" (UID: \"9ba39bdbc16aa68d975e26185788ecde\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.794212 kubelet[2697]: I0209 22:50:52.794212 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ba39bdbc16aa68d975e26185788ecde-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-e9037c933d\" (UID: \"9ba39bdbc16aa68d975e26185788ecde\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.794349 kubelet[2697]: I0209 22:50:52.794230 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.794349 kubelet[2697]: I0209 22:50:52.794249 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.794349 kubelet[2697]: I0209 22:50:52.794277 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ba39bdbc16aa68d975e26185788ecde-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e9037c933d\" (UID: \"9ba39bdbc16aa68d975e26185788ecde\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.794349 kubelet[2697]: I0209 22:50:52.794296 2697 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:52.794349 kubelet[2697]: I0209 22:50:52.794309 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fb0af17ba6264b9f338fd6b7085db1f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" (UID: \"9fb0af17ba6264b9f338fd6b7085db1f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:53.012860 sudo[2760]: pam_unix(sudo:session): session closed for user root Feb 9 22:50:53.164874 systemd[1]: Started sshd@6-147.75.49.127:22-157.230.254.228:54696.service. Feb 9 22:50:53.492262 kubelet[2697]: I0209 22:50:53.492043 2697 apiserver.go:52] "Watching apiserver" Feb 9 22:50:53.793437 kubelet[2697]: I0209 22:50:53.793345 2697 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 22:50:53.799474 kubelet[2697]: I0209 22:50:53.799456 2697 reconciler.go:41] "Reconciler: start to sync state" Feb 9 22:50:53.935417 sudo[1707]: pam_unix(sudo:session): session closed for user root Feb 9 22:50:53.937641 sshd[1702]: pam_unix(sshd:session): session closed for user core Feb 9 22:50:53.942018 systemd[1]: sshd@4-147.75.49.127:22-139.178.89.65:58446.service: Deactivated successfully. Feb 9 22:50:53.944187 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 22:50:53.944189 systemd-logind[1544]: Session 7 logged out. Waiting for processes to exit. Feb 9 22:50:53.945964 systemd-logind[1544]: Removed session 7. 
Feb 9 22:50:54.099956 kubelet[2697]: E0209 22:50:54.099787 2697 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-e9037c933d\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:54.300360 kubelet[2697]: E0209 22:50:54.300256 2697 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-e9037c933d\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:54.500976 kubelet[2697]: E0209 22:50:54.500852 2697 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-e9037c933d\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e9037c933d" Feb 9 22:50:54.561319 sshd[2782]: Invalid user shaw from 157.230.254.228 port 54696 Feb 9 22:50:54.567326 sshd[2782]: pam_faillock(sshd:auth): User unknown Feb 9 22:50:54.568478 sshd[2782]: pam_unix(sshd:auth): check pass; user unknown Feb 9 22:50:54.568569 sshd[2782]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228 Feb 9 22:50:54.569554 sshd[2782]: pam_faillock(sshd:auth): User unknown Feb 9 22:50:54.714019 kubelet[2697]: I0209 22:50:54.713955 2697 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e9037c933d" podStartSLOduration=2.713809052 pod.CreationTimestamp="2024-02-09 22:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 22:50:54.713758527 +0000 UTC m=+2.270513780" watchObservedRunningTime="2024-02-09 22:50:54.713809052 +0000 UTC m=+2.270564298" Feb 9 22:50:55.102955 kubelet[2697]: I0209 22:50:55.102875 2697 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-e9037c933d" podStartSLOduration=3.102775352 pod.CreationTimestamp="2024-02-09 22:50:52 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 22:50:55.102664422 +0000 UTC m=+2.659419669" watchObservedRunningTime="2024-02-09 22:50:55.102775352 +0000 UTC m=+2.659530581" Feb 9 22:50:55.504426 kubelet[2697]: I0209 22:50:55.504252 2697 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e9037c933d" podStartSLOduration=3.50415271 pod.CreationTimestamp="2024-02-09 22:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 22:50:55.503880387 +0000 UTC m=+3.060635669" watchObservedRunningTime="2024-02-09 22:50:55.50415271 +0000 UTC m=+3.060907937" Feb 9 22:50:55.674990 systemd[1]: Started sshd@7-147.75.49.127:22-117.102.64.108:34670.service. Feb 9 22:50:56.386074 sshd[2782]: Failed password for invalid user shaw from 157.230.254.228 port 54696 ssh2 Feb 9 22:50:56.842481 sshd[2865]: Invalid user asgar from 117.102.64.108 port 34670 Feb 9 22:50:56.848461 sshd[2865]: pam_faillock(sshd:auth): User unknown Feb 9 22:50:56.849614 sshd[2865]: pam_unix(sshd:auth): check pass; user unknown Feb 9 22:50:56.849705 sshd[2865]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108 Feb 9 22:50:56.850647 sshd[2865]: pam_faillock(sshd:auth): User unknown Feb 9 22:50:57.382408 sshd[2782]: Received disconnect from 157.230.254.228 port 54696:11: Bye Bye [preauth] Feb 9 22:50:57.382408 sshd[2782]: Disconnected from invalid user shaw 157.230.254.228 port 54696 [preauth] Feb 9 22:50:57.385042 systemd[1]: sshd@6-147.75.49.127:22-157.230.254.228:54696.service: Deactivated successfully. 
Feb 9 22:50:58.607055 sshd[2865]: Failed password for invalid user asgar from 117.102.64.108 port 34670 ssh2 Feb 9 22:50:59.023997 sshd[2865]: Received disconnect from 117.102.64.108 port 34670:11: Bye Bye [preauth] Feb 9 22:50:59.023997 sshd[2865]: Disconnected from invalid user asgar 117.102.64.108 port 34670 [preauth] Feb 9 22:50:59.026456 systemd[1]: sshd@7-147.75.49.127:22-117.102.64.108:34670.service: Deactivated successfully. Feb 9 22:50:59.215287 systemd[1]: Started sshd@8-147.75.49.127:22-216.10.245.180:44404.service. Feb 9 22:51:00.566747 sshd[2871]: Invalid user tcs from 216.10.245.180 port 44404 Feb 9 22:51:00.572759 sshd[2871]: pam_faillock(sshd:auth): User unknown Feb 9 22:51:00.573780 sshd[2871]: pam_unix(sshd:auth): check pass; user unknown Feb 9 22:51:00.573901 sshd[2871]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180 Feb 9 22:51:00.574953 sshd[2871]: pam_faillock(sshd:auth): User unknown Feb 9 22:51:02.883113 sshd[2871]: Failed password for invalid user tcs from 216.10.245.180 port 44404 ssh2 Feb 9 22:51:03.412557 update_engine[1546]: I0209 22:51:03.412448 1546 update_attempter.cc:509] Updating boot flags... Feb 9 22:51:04.479104 sshd[2871]: Received disconnect from 216.10.245.180 port 44404:11: Bye Bye [preauth] Feb 9 22:51:04.479104 sshd[2871]: Disconnected from invalid user tcs 216.10.245.180 port 44404 [preauth] Feb 9 22:51:04.481512 systemd[1]: sshd@8-147.75.49.127:22-216.10.245.180:44404.service: Deactivated successfully. Feb 9 22:51:05.861155 kubelet[2697]: I0209 22:51:05.861055 2697 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 22:51:05.862119 env[1558]: time="2024-02-09T22:51:05.861795639Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 22:51:05.862778 kubelet[2697]: I0209 22:51:05.862398 2697 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 22:51:06.145759 kubelet[2697]: I0209 22:51:06.145606 2697 topology_manager.go:210] "Topology Admit Handler" Feb 9 22:51:06.153617 kubelet[2697]: I0209 22:51:06.153582 2697 topology_manager.go:210] "Topology Admit Handler" Feb 9 22:51:06.179433 kubelet[2697]: I0209 22:51:06.179395 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6af8caf5-ec2b-4f86-8100-d1ad0d7bb395-lib-modules\") pod \"kube-proxy-rdksh\" (UID: \"6af8caf5-ec2b-4f86-8100-d1ad0d7bb395\") " pod="kube-system/kube-proxy-rdksh" Feb 9 22:51:06.179433 kubelet[2697]: I0209 22:51:06.179418 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cni-path\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179433 kubelet[2697]: I0209 22:51:06.179435 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-config-path\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179613 kubelet[2697]: I0209 22:51:06.179450 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-run\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179613 kubelet[2697]: I0209 22:51:06.179464 2697 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-host-proc-sys-net\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179613 kubelet[2697]: I0209 22:51:06.179478 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxhb4\" (UniqueName: \"kubernetes.io/projected/e90a13e0-74e9-47f4-857d-13c07766c9ca-kube-api-access-nxhb4\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179613 kubelet[2697]: I0209 22:51:06.179492 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6af8caf5-ec2b-4f86-8100-d1ad0d7bb395-kube-proxy\") pod \"kube-proxy-rdksh\" (UID: \"6af8caf5-ec2b-4f86-8100-d1ad0d7bb395\") " pod="kube-system/kube-proxy-rdksh" Feb 9 22:51:06.179613 kubelet[2697]: I0209 22:51:06.179506 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-hostproc\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179613 kubelet[2697]: I0209 22:51:06.179522 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e90a13e0-74e9-47f4-857d-13c07766c9ca-clustermesh-secrets\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179783 kubelet[2697]: I0209 22:51:06.179559 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/e90a13e0-74e9-47f4-857d-13c07766c9ca-hubble-tls\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179783 kubelet[2697]: I0209 22:51:06.179593 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-bpf-maps\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179783 kubelet[2697]: I0209 22:51:06.179624 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz9vw\" (UniqueName: \"kubernetes.io/projected/6af8caf5-ec2b-4f86-8100-d1ad0d7bb395-kube-api-access-rz9vw\") pod \"kube-proxy-rdksh\" (UID: \"6af8caf5-ec2b-4f86-8100-d1ad0d7bb395\") " pod="kube-system/kube-proxy-rdksh" Feb 9 22:51:06.179783 kubelet[2697]: I0209 22:51:06.179652 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-cgroup\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179783 kubelet[2697]: I0209 22:51:06.179670 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-etc-cni-netd\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179783 kubelet[2697]: I0209 22:51:06.179684 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-lib-modules\") pod \"cilium-6jjxk\" (UID: 
\"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179944 kubelet[2697]: I0209 22:51:06.179697 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-xtables-lock\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.179944 kubelet[2697]: I0209 22:51:06.179724 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6af8caf5-ec2b-4f86-8100-d1ad0d7bb395-xtables-lock\") pod \"kube-proxy-rdksh\" (UID: \"6af8caf5-ec2b-4f86-8100-d1ad0d7bb395\") " pod="kube-system/kube-proxy-rdksh" Feb 9 22:51:06.179944 kubelet[2697]: I0209 22:51:06.179741 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-host-proc-sys-kernel\") pod \"cilium-6jjxk\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " pod="kube-system/cilium-6jjxk" Feb 9 22:51:06.323763 kubelet[2697]: I0209 22:51:06.323722 2697 topology_manager.go:210] "Topology Admit Handler" Feb 9 22:51:06.453712 env[1558]: time="2024-02-09T22:51:06.453479259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdksh,Uid:6af8caf5-ec2b-4f86-8100-d1ad0d7bb395,Namespace:kube-system,Attempt:0,}" Feb 9 22:51:06.458642 env[1558]: time="2024-02-09T22:51:06.458534639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jjxk,Uid:e90a13e0-74e9-47f4-857d-13c07766c9ca,Namespace:kube-system,Attempt:0,}" Feb 9 22:51:06.479279 env[1558]: time="2024-02-09T22:51:06.479132120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 22:51:06.479279 env[1558]: time="2024-02-09T22:51:06.479242035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 22:51:06.479745 env[1558]: time="2024-02-09T22:51:06.479282525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 22:51:06.479745 env[1558]: time="2024-02-09T22:51:06.479613869Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a51bed28d528ac587810c6c55407ab1f44bdd8563866c89fe8d3ad9d9ef1fb65 pid=2911 runtime=io.containerd.runc.v2 Feb 9 22:51:06.483277 env[1558]: time="2024-02-09T22:51:06.483092263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 22:51:06.483277 env[1558]: time="2024-02-09T22:51:06.483203127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 22:51:06.483277 env[1558]: time="2024-02-09T22:51:06.483245919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 22:51:06.483748 env[1558]: time="2024-02-09T22:51:06.483630439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299 pid=2919 runtime=io.containerd.runc.v2 Feb 9 22:51:06.489210 kubelet[2697]: I0209 22:51:06.489151 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db5af86e-ef00-49c9-bff2-d87e0468d58f-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-fg7xv\" (UID: \"db5af86e-ef00-49c9-bff2-d87e0468d58f\") " pod="kube-system/cilium-operator-f59cbd8c6-fg7xv" Feb 9 22:51:06.489466 kubelet[2697]: I0209 22:51:06.489266 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994b4\" (UniqueName: \"kubernetes.io/projected/db5af86e-ef00-49c9-bff2-d87e0468d58f-kube-api-access-994b4\") pod \"cilium-operator-f59cbd8c6-fg7xv\" (UID: \"db5af86e-ef00-49c9-bff2-d87e0468d58f\") " pod="kube-system/cilium-operator-f59cbd8c6-fg7xv" Feb 9 22:51:06.555020 env[1558]: time="2024-02-09T22:51:06.554957499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jjxk,Uid:e90a13e0-74e9-47f4-857d-13c07766c9ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\"" Feb 9 22:51:06.556196 env[1558]: time="2024-02-09T22:51:06.556142271Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 22:51:06.565875 env[1558]: time="2024-02-09T22:51:06.565816393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdksh,Uid:6af8caf5-ec2b-4f86-8100-d1ad0d7bb395,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a51bed28d528ac587810c6c55407ab1f44bdd8563866c89fe8d3ad9d9ef1fb65\"" Feb 9 22:51:06.567674 env[1558]: time="2024-02-09T22:51:06.567622021Z" level=info msg="CreateContainer within sandbox \"a51bed28d528ac587810c6c55407ab1f44bdd8563866c89fe8d3ad9d9ef1fb65\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 22:51:06.574777 env[1558]: time="2024-02-09T22:51:06.574722888Z" level=info msg="CreateContainer within sandbox \"a51bed28d528ac587810c6c55407ab1f44bdd8563866c89fe8d3ad9d9ef1fb65\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac5dfde2e32f0cc0df0f9f9c79fadbe5787e93d600f019ed83572238f083fe6c\"" Feb 9 22:51:06.575196 env[1558]: time="2024-02-09T22:51:06.575142812Z" level=info msg="StartContainer for \"ac5dfde2e32f0cc0df0f9f9c79fadbe5787e93d600f019ed83572238f083fe6c\"" Feb 9 22:51:06.635656 env[1558]: time="2024-02-09T22:51:06.635517308Z" level=info msg="StartContainer for \"ac5dfde2e32f0cc0df0f9f9c79fadbe5787e93d600f019ed83572238f083fe6c\" returns successfully" Feb 9 22:51:07.228059 env[1558]: time="2024-02-09T22:51:07.227923332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-fg7xv,Uid:db5af86e-ef00-49c9-bff2-d87e0468d58f,Namespace:kube-system,Attempt:0,}" Feb 9 22:51:07.271186 env[1558]: time="2024-02-09T22:51:07.270952220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 22:51:07.271186 env[1558]: time="2024-02-09T22:51:07.271059933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 22:51:07.271186 env[1558]: time="2024-02-09T22:51:07.271110316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 22:51:07.271767 env[1558]: time="2024-02-09T22:51:07.271554045Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288 pid=3139 runtime=io.containerd.runc.v2 Feb 9 22:51:07.350091 env[1558]: time="2024-02-09T22:51:07.350040475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-fg7xv,Uid:db5af86e-ef00-49c9-bff2-d87e0468d58f,Namespace:kube-system,Attempt:0,} returns sandbox id \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\"" Feb 9 22:51:07.575244 kubelet[2697]: I0209 22:51:07.575046 2697 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rdksh" podStartSLOduration=1.5749602299999999 pod.CreationTimestamp="2024-02-09 22:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 22:51:07.574595367 +0000 UTC m=+15.131350675" watchObservedRunningTime="2024-02-09 22:51:07.57496023 +0000 UTC m=+15.131715498" Feb 9 22:51:10.128583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1358503636.mount: Deactivated successfully. 
Feb 9 22:51:11.809053 env[1558]: time="2024-02-09T22:51:11.809000579Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:51:11.809719 env[1558]: time="2024-02-09T22:51:11.809671755Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:51:11.810438 env[1558]: time="2024-02-09T22:51:11.810425965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 22:51:11.811214 env[1558]: time="2024-02-09T22:51:11.811199216Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 22:51:11.811816 env[1558]: time="2024-02-09T22:51:11.811756861Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 22:51:11.812544 env[1558]: time="2024-02-09T22:51:11.812529097Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 22:51:11.816814 env[1558]: time="2024-02-09T22:51:11.816797375Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\"" Feb 9 22:51:11.817099 
env[1558]: time="2024-02-09T22:51:11.817082709Z" level=info msg="StartContainer for \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\"" Feb 9 22:51:11.818262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767256873.mount: Deactivated successfully. Feb 9 22:51:11.863618 env[1558]: time="2024-02-09T22:51:11.863564854Z" level=info msg="StartContainer for \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\" returns successfully" Feb 9 22:51:12.566273 env[1558]: time="2024-02-09T22:51:12.566219031Z" level=error msg="collecting metrics for 486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2" error="cgroups: cgroup deleted: unknown" Feb 9 22:51:12.820498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2-rootfs.mount: Deactivated successfully. Feb 9 22:51:14.256985 env[1558]: time="2024-02-09T22:51:14.256826851Z" level=info msg="shim disconnected" id=486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2 Feb 9 22:51:14.258017 env[1558]: time="2024-02-09T22:51:14.256988846Z" level=warning msg="cleaning up after shim disconnected" id=486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2 namespace=k8s.io Feb 9 22:51:14.258017 env[1558]: time="2024-02-09T22:51:14.257022233Z" level=info msg="cleaning up dead shim" Feb 9 22:51:14.264786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926015509.mount: Deactivated successfully. 
Feb 9 22:51:14.276921 env[1558]: time="2024-02-09T22:51:14.276884467Z" level=warning msg="cleanup warnings time=\"2024-02-09T22:51:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3221 runtime=io.containerd.runc.v2\n"
Feb 9 22:51:14.562614 env[1558]: time="2024-02-09T22:51:14.562546970Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 22:51:14.567199 env[1558]: time="2024-02-09T22:51:14.567153677Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\""
Feb 9 22:51:14.567499 env[1558]: time="2024-02-09T22:51:14.567451286Z" level=info msg="StartContainer for \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\""
Feb 9 22:51:14.600462 env[1558]: time="2024-02-09T22:51:14.600410237Z" level=info msg="StartContainer for \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\" returns successfully"
Feb 9 22:51:14.605741 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 22:51:14.605921 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 22:51:14.606019 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 22:51:14.606885 systemd[1]: Starting systemd-sysctl.service...
Feb 9 22:51:14.610605 systemd[1]: Finished systemd-sysctl.service.
Feb 9 22:51:14.787086 env[1558]: time="2024-02-09T22:51:14.787038132Z" level=info msg="shim disconnected" id=5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1
Feb 9 22:51:14.787086 env[1558]: time="2024-02-09T22:51:14.787081895Z" level=warning msg="cleaning up after shim disconnected" id=5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1 namespace=k8s.io
Feb 9 22:51:14.787333 env[1558]: time="2024-02-09T22:51:14.787096654Z" level=info msg="cleaning up dead shim"
Feb 9 22:51:14.794168 env[1558]: time="2024-02-09T22:51:14.794130358Z" level=warning msg="cleanup warnings time=\"2024-02-09T22:51:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3286 runtime=io.containerd.runc.v2\n"
Feb 9 22:51:14.799405 env[1558]: time="2024-02-09T22:51:14.799335607Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 22:51:14.800263 env[1558]: time="2024-02-09T22:51:14.800209414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 22:51:14.803355 env[1558]: time="2024-02-09T22:51:14.803313942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 22:51:14.804209 env[1558]: time="2024-02-09T22:51:14.804144834Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 22:51:14.806183 env[1558]: time="2024-02-09T22:51:14.806119428Z" level=info msg="CreateContainer within sandbox \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 22:51:14.812136 env[1558]: time="2024-02-09T22:51:14.812073564Z" level=info msg="CreateContainer within sandbox \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\""
Feb 9 22:51:14.812466 env[1558]: time="2024-02-09T22:51:14.812399211Z" level=info msg="StartContainer for \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\""
Feb 9 22:51:14.869895 env[1558]: time="2024-02-09T22:51:14.869788603Z" level=info msg="StartContainer for \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\" returns successfully"
Feb 9 22:51:15.258862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1-rootfs.mount: Deactivated successfully.
Feb 9 22:51:15.576351 env[1558]: time="2024-02-09T22:51:15.576149721Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 22:51:15.585711 kubelet[2697]: I0209 22:51:15.585649 2697 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-fg7xv" podStartSLOduration=-9.223372027269217e+09 pod.CreationTimestamp="2024-02-09 22:51:06 +0000 UTC" firstStartedPulling="2024-02-09 22:51:07.350710755 +0000 UTC m=+14.907465937" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 22:51:15.584971938 +0000 UTC m=+23.141727248" watchObservedRunningTime="2024-02-09 22:51:15.585558328 +0000 UTC m=+23.142313555"
Feb 9 22:51:15.595029 env[1558]: time="2024-02-09T22:51:15.594901136Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\""
Feb 9 22:51:15.595843 env[1558]: time="2024-02-09T22:51:15.595770019Z" level=info msg="StartContainer for \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\""
Feb 9 22:51:15.690188 env[1558]: time="2024-02-09T22:51:15.690091481Z" level=info msg="StartContainer for \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\" returns successfully"
Feb 9 22:51:15.721398 env[1558]: time="2024-02-09T22:51:15.721306857Z" level=info msg="shim disconnected" id=11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940
Feb 9 22:51:15.721398 env[1558]: time="2024-02-09T22:51:15.721369180Z" level=warning msg="cleaning up after shim disconnected" id=11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940 namespace=k8s.io
Feb 9 22:51:15.721398 env[1558]: time="2024-02-09T22:51:15.721385500Z" level=info msg="cleaning up dead shim"
Feb 9 22:51:15.744787 env[1558]: time="2024-02-09T22:51:15.744679790Z" level=warning msg="cleanup warnings time=\"2024-02-09T22:51:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3392 runtime=io.containerd.runc.v2\n"
Feb 9 22:51:16.262874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940-rootfs.mount: Deactivated successfully.
Feb 9 22:51:16.575465 env[1558]: time="2024-02-09T22:51:16.575371601Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 22:51:16.580418 env[1558]: time="2024-02-09T22:51:16.580394497Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\""
Feb 9 22:51:16.580668 env[1558]: time="2024-02-09T22:51:16.580617159Z" level=info msg="StartContainer for \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\""
Feb 9 22:51:16.581509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821568318.mount: Deactivated successfully.
Feb 9 22:51:16.602677 env[1558]: time="2024-02-09T22:51:16.602651397Z" level=info msg="StartContainer for \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\" returns successfully"
Feb 9 22:51:16.611691 env[1558]: time="2024-02-09T22:51:16.611636406Z" level=info msg="shim disconnected" id=ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de
Feb 9 22:51:16.611691 env[1558]: time="2024-02-09T22:51:16.611666119Z" level=warning msg="cleaning up after shim disconnected" id=ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de namespace=k8s.io
Feb 9 22:51:16.611691 env[1558]: time="2024-02-09T22:51:16.611672393Z" level=info msg="cleaning up dead shim"
Feb 9 22:51:16.630281 env[1558]: time="2024-02-09T22:51:16.630193053Z" level=warning msg="cleanup warnings time=\"2024-02-09T22:51:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3446 runtime=io.containerd.runc.v2\n"
Feb 9 22:51:17.266362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de-rootfs.mount: Deactivated successfully.
Feb 9 22:51:17.598392 env[1558]: time="2024-02-09T22:51:17.598172103Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 22:51:17.614766 env[1558]: time="2024-02-09T22:51:17.614642460Z" level=info msg="CreateContainer within sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\""
Feb 9 22:51:17.615501 env[1558]: time="2024-02-09T22:51:17.615486810Z" level=info msg="StartContainer for \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\""
Feb 9 22:51:17.649285 env[1558]: time="2024-02-09T22:51:17.649234010Z" level=info msg="StartContainer for \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\" returns successfully"
Feb 9 22:51:17.701932 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 22:51:17.784329 kubelet[2697]: I0209 22:51:17.784315 2697 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 22:51:17.794908 kubelet[2697]: I0209 22:51:17.794889 2697 topology_manager.go:210] "Topology Admit Handler"
Feb 9 22:51:17.795741 kubelet[2697]: I0209 22:51:17.795729 2697 topology_manager.go:210] "Topology Admit Handler"
Feb 9 22:51:17.837870 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 22:51:17.969284 kubelet[2697]: I0209 22:51:17.969175 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bb2353d-ac9b-45c7-beab-615a0e78e4a6-config-volume\") pod \"coredns-787d4945fb-fmr86\" (UID: \"6bb2353d-ac9b-45c7-beab-615a0e78e4a6\") " pod="kube-system/coredns-787d4945fb-fmr86"
Feb 9 22:51:17.969284 kubelet[2697]: I0209 22:51:17.969290 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg7n9\" (UniqueName: \"kubernetes.io/projected/6bb2353d-ac9b-45c7-beab-615a0e78e4a6-kube-api-access-lg7n9\") pod \"coredns-787d4945fb-fmr86\" (UID: \"6bb2353d-ac9b-45c7-beab-615a0e78e4a6\") " pod="kube-system/coredns-787d4945fb-fmr86"
Feb 9 22:51:17.969722 kubelet[2697]: I0209 22:51:17.969588 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffb5c46d-1f1c-45fa-b006-efcdcfc595b7-config-volume\") pod \"coredns-787d4945fb-lx8mg\" (UID: \"ffb5c46d-1f1c-45fa-b006-efcdcfc595b7\") " pod="kube-system/coredns-787d4945fb-lx8mg"
Feb 9 22:51:17.969722 kubelet[2697]: I0209 22:51:17.969690 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8wl9\" (UniqueName: \"kubernetes.io/projected/ffb5c46d-1f1c-45fa-b006-efcdcfc595b7-kube-api-access-t8wl9\") pod \"coredns-787d4945fb-lx8mg\" (UID: \"ffb5c46d-1f1c-45fa-b006-efcdcfc595b7\") " pod="kube-system/coredns-787d4945fb-lx8mg"
Feb 9 22:51:18.098086 env[1558]: time="2024-02-09T22:51:18.097954537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-lx8mg,Uid:ffb5c46d-1f1c-45fa-b006-efcdcfc595b7,Namespace:kube-system,Attempt:0,}"
Feb 9 22:51:18.098366 env[1558]: time="2024-02-09T22:51:18.098161196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fmr86,Uid:6bb2353d-ac9b-45c7-beab-615a0e78e4a6,Namespace:kube-system,Attempt:0,}"
Feb 9 22:51:18.628761 kubelet[2697]: I0209 22:51:18.628689 2697 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6jjxk" podStartSLOduration=-9.223372024226171e+09 pod.CreationTimestamp="2024-02-09 22:51:06 +0000 UTC" firstStartedPulling="2024-02-09 22:51:06.555779596 +0000 UTC m=+14.112534785" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 22:51:18.628446107 +0000 UTC m=+26.185201369" watchObservedRunningTime="2024-02-09 22:51:18.628603565 +0000 UTC m=+26.185358788"
Feb 9 22:51:19.430375 systemd-networkd[1409]: cilium_host: Link UP
Feb 9 22:51:19.430491 systemd-networkd[1409]: cilium_net: Link UP
Feb 9 22:51:19.430494 systemd-networkd[1409]: cilium_net: Gained carrier
Feb 9 22:51:19.430621 systemd-networkd[1409]: cilium_host: Gained carrier
Feb 9 22:51:19.438869 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 22:51:19.439335 systemd-networkd[1409]: cilium_host: Gained IPv6LL
Feb 9 22:51:19.484268 systemd-networkd[1409]: cilium_vxlan: Link UP
Feb 9 22:51:19.484272 systemd-networkd[1409]: cilium_vxlan: Gained carrier
Feb 9 22:51:19.617912 kernel: NET: Registered PF_ALG protocol family
Feb 9 22:51:19.797002 systemd-networkd[1409]: cilium_net: Gained IPv6LL
Feb 9 22:51:20.034540 systemd[1]: Started sshd@9-147.75.49.127:22-124.220.165.94:53516.service.
Feb 9 22:51:20.069238 systemd-networkd[1409]: lxc_health: Link UP
Feb 9 22:51:20.097728 systemd-networkd[1409]: lxc_health: Gained carrier
Feb 9 22:51:20.097865 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 22:51:20.152698 systemd-networkd[1409]: lxc3be83f551ac0: Link UP
Feb 9 22:51:20.152807 systemd-networkd[1409]: lxc1066cf829da9: Link UP
Feb 9 22:51:20.189932 kernel: eth0: renamed from tmpb793a
Feb 9 22:51:20.200922 kernel: eth0: renamed from tmpc4232
Feb 9 22:51:20.225305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1066cf829da9: link becomes ready
Feb 9 22:51:20.225370 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3be83f551ac0: link becomes ready
Feb 9 22:51:20.225803 systemd-networkd[1409]: lxc1066cf829da9: Gained carrier
Feb 9 22:51:20.225949 systemd-networkd[1409]: lxc3be83f551ac0: Gained carrier
Feb 9 22:51:20.889457 sshd[4054]: Invalid user justinfang from 124.220.165.94 port 53516
Feb 9 22:51:20.890797 sshd[4054]: pam_faillock(sshd:auth): User unknown
Feb 9 22:51:20.891069 sshd[4054]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:51:20.891115 sshd[4054]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 22:51:20.891318 sshd[4054]: pam_faillock(sshd:auth): User unknown
Feb 9 22:51:21.077997 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL
Feb 9 22:51:21.268957 systemd-networkd[1409]: lxc_health: Gained IPv6LL
Feb 9 22:51:21.525004 systemd-networkd[1409]: lxc3be83f551ac0: Gained IPv6LL
Feb 9 22:51:22.229016 systemd-networkd[1409]: lxc1066cf829da9: Gained IPv6LL
Feb 9 22:51:22.536477 env[1558]: time="2024-02-09T22:51:22.536411083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 22:51:22.536477 env[1558]: time="2024-02-09T22:51:22.536432295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 22:51:22.536477 env[1558]: time="2024-02-09T22:51:22.536439746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 22:51:22.536731 env[1558]: time="2024-02-09T22:51:22.536503608Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b793a0e89fd0dfead0b6611ed9d4252e9b48e4ddfb54b05e55e773af686ee840 pid=4138 runtime=io.containerd.runc.v2
Feb 9 22:51:22.536731 env[1558]: time="2024-02-09T22:51:22.536510075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 22:51:22.536731 env[1558]: time="2024-02-09T22:51:22.536527873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 22:51:22.536731 env[1558]: time="2024-02-09T22:51:22.536535279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 22:51:22.536731 env[1558]: time="2024-02-09T22:51:22.536599702Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4232da4b2c1e65011e2c4c92e8ac7b60750e1ef0468636b3a186b890c99ec49 pid=4137 runtime=io.containerd.runc.v2
Feb 9 22:51:22.576798 env[1558]: time="2024-02-09T22:51:22.576772049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fmr86,Uid:6bb2353d-ac9b-45c7-beab-615a0e78e4a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b793a0e89fd0dfead0b6611ed9d4252e9b48e4ddfb54b05e55e773af686ee840\""
Feb 9 22:51:22.577994 env[1558]: time="2024-02-09T22:51:22.577975135Z" level=info msg="CreateContainer within sandbox \"b793a0e89fd0dfead0b6611ed9d4252e9b48e4ddfb54b05e55e773af686ee840\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 22:51:22.588256 env[1558]: time="2024-02-09T22:51:22.588207056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-lx8mg,Uid:ffb5c46d-1f1c-45fa-b006-efcdcfc595b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4232da4b2c1e65011e2c4c92e8ac7b60750e1ef0468636b3a186b890c99ec49\""
Feb 9 22:51:22.589222 env[1558]: time="2024-02-09T22:51:22.589207685Z" level=info msg="CreateContainer within sandbox \"c4232da4b2c1e65011e2c4c92e8ac7b60750e1ef0468636b3a186b890c99ec49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 22:51:22.594849 env[1558]: time="2024-02-09T22:51:22.594803713Z" level=info msg="CreateContainer within sandbox \"b793a0e89fd0dfead0b6611ed9d4252e9b48e4ddfb54b05e55e773af686ee840\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1dd1d2f82a8e1b103ecbb3ecda20b3567329434e0d1afe20cfe1c44089abe00\""
Feb 9 22:51:22.595046 env[1558]: time="2024-02-09T22:51:22.594993984Z" level=info msg="StartContainer for \"f1dd1d2f82a8e1b103ecbb3ecda20b3567329434e0d1afe20cfe1c44089abe00\""
Feb 9 22:51:22.595746 env[1558]: time="2024-02-09T22:51:22.595730506Z" level=info msg="CreateContainer within sandbox \"c4232da4b2c1e65011e2c4c92e8ac7b60750e1ef0468636b3a186b890c99ec49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25e782f74e29ca9b1c0765376f593c5cab5ce2c0bc5cda11d1bf7eb5bf7024d9\""
Feb 9 22:51:22.595936 env[1558]: time="2024-02-09T22:51:22.595922401Z" level=info msg="StartContainer for \"25e782f74e29ca9b1c0765376f593c5cab5ce2c0bc5cda11d1bf7eb5bf7024d9\""
Feb 9 22:51:22.633340 env[1558]: time="2024-02-09T22:51:22.633303883Z" level=info msg="StartContainer for \"25e782f74e29ca9b1c0765376f593c5cab5ce2c0bc5cda11d1bf7eb5bf7024d9\" returns successfully"
Feb 9 22:51:22.646492 env[1558]: time="2024-02-09T22:51:22.646457102Z" level=info msg="StartContainer for \"f1dd1d2f82a8e1b103ecbb3ecda20b3567329434e0d1afe20cfe1c44089abe00\" returns successfully"
Feb 9 22:51:22.943146 sshd[4054]: Failed password for invalid user justinfang from 124.220.165.94 port 53516 ssh2
Feb 9 22:51:23.637553 kubelet[2697]: I0209 22:51:23.637503 2697 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-fmr86" podStartSLOduration=17.637429228 pod.CreationTimestamp="2024-02-09 22:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 22:51:23.636455563 +0000 UTC m=+31.193210810" watchObservedRunningTime="2024-02-09 22:51:23.637429228 +0000 UTC m=+31.194184468"
Feb 9 22:51:23.656500 kubelet[2697]: I0209 22:51:23.656481 2697 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-lx8mg" podStartSLOduration=17.656434344 pod.CreationTimestamp="2024-02-09 22:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 22:51:23.655919534 +0000 UTC m=+31.212674714" watchObservedRunningTime="2024-02-09 22:51:23.656434344 +0000 UTC m=+31.213189518"
Feb 9 22:51:24.236426 sshd[4054]: Received disconnect from 124.220.165.94 port 53516:11: Bye Bye [preauth]
Feb 9 22:51:24.236426 sshd[4054]: Disconnected from invalid user justinfang 124.220.165.94 port 53516 [preauth]
Feb 9 22:51:24.238681 systemd[1]: sshd@9-147.75.49.127:22-124.220.165.94:53516.service: Deactivated successfully.
Feb 9 22:51:33.649096 kubelet[2697]: I0209 22:51:33.649001 2697 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 9 22:51:53.283563 systemd[1]: Started sshd@10-147.75.49.127:22-216.10.245.180:33530.service.
Feb 9 22:51:54.662954 sshd[4358]: Invalid user a1buser from 216.10.245.180 port 33530
Feb 9 22:51:54.669004 sshd[4358]: pam_faillock(sshd:auth): User unknown
Feb 9 22:51:54.670140 sshd[4358]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:51:54.670231 sshd[4358]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 22:51:54.671174 sshd[4358]: pam_faillock(sshd:auth): User unknown
Feb 9 22:51:56.663171 systemd[1]: Started sshd@11-147.75.49.127:22-157.230.254.228:45100.service.
Feb 9 22:51:57.059688 sshd[4358]: Failed password for invalid user a1buser from 216.10.245.180 port 33530 ssh2
Feb 9 22:51:57.704060 sshd[4360]: Invalid user wangjiaying from 157.230.254.228 port 45100
Feb 9 22:51:57.710273 sshd[4360]: pam_faillock(sshd:auth): User unknown
Feb 9 22:51:57.711263 sshd[4360]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:51:57.711352 sshd[4360]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 22:51:57.712279 sshd[4360]: pam_faillock(sshd:auth): User unknown
Feb 9 22:51:59.304481 sshd[4358]: Received disconnect from 216.10.245.180 port 33530:11: Bye Bye [preauth]
Feb 9 22:51:59.304481 sshd[4358]: Disconnected from invalid user a1buser 216.10.245.180 port 33530 [preauth]
Feb 9 22:51:59.306974 systemd[1]: sshd@10-147.75.49.127:22-216.10.245.180:33530.service: Deactivated successfully.
Feb 9 22:51:59.845165 sshd[4360]: Failed password for invalid user wangjiaying from 157.230.254.228 port 45100 ssh2
Feb 9 22:52:01.132275 sshd[4360]: Received disconnect from 157.230.254.228 port 45100:11: Bye Bye [preauth]
Feb 9 22:52:01.132275 sshd[4360]: Disconnected from invalid user wangjiaying 157.230.254.228 port 45100 [preauth]
Feb 9 22:52:01.135091 systemd[1]: sshd@11-147.75.49.127:22-157.230.254.228:45100.service: Deactivated successfully.
Feb 9 22:52:02.882287 systemd[1]: Started sshd@12-147.75.49.127:22-117.102.64.108:53980.service.
Feb 9 22:52:04.016959 sshd[4366]: Invalid user lana from 117.102.64.108 port 53980
Feb 9 22:52:04.023135 sshd[4366]: pam_faillock(sshd:auth): User unknown
Feb 9 22:52:04.024309 sshd[4366]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:52:04.024400 sshd[4366]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 22:52:04.025459 sshd[4366]: pam_faillock(sshd:auth): User unknown
Feb 9 22:52:05.451079 sshd[4366]: Failed password for invalid user lana from 117.102.64.108 port 53980 ssh2
Feb 9 22:52:06.042696 sshd[4366]: Received disconnect from 117.102.64.108 port 53980:11: Bye Bye [preauth]
Feb 9 22:52:06.042696 sshd[4366]: Disconnected from invalid user lana 117.102.64.108 port 53980 [preauth]
Feb 9 22:52:06.045191 systemd[1]: sshd@12-147.75.49.127:22-117.102.64.108:53980.service: Deactivated successfully.
Feb 9 22:52:17.187113 systemd[1]: Started sshd@13-147.75.49.127:22-124.220.165.94:43070.service.
Feb 9 22:52:18.051420 sshd[4374]: Invalid user sangsan from 124.220.165.94 port 43070
Feb 9 22:52:18.057713 sshd[4374]: pam_faillock(sshd:auth): User unknown
Feb 9 22:52:18.058839 sshd[4374]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:52:18.058961 sshd[4374]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 22:52:18.059852 sshd[4374]: pam_faillock(sshd:auth): User unknown
Feb 9 22:52:20.408375 sshd[4374]: Failed password for invalid user sangsan from 124.220.165.94 port 43070 ssh2
Feb 9 22:52:22.514920 sshd[4374]: Received disconnect from 124.220.165.94 port 43070:11: Bye Bye [preauth]
Feb 9 22:52:22.514920 sshd[4374]: Disconnected from invalid user sangsan 124.220.165.94 port 43070 [preauth]
Feb 9 22:52:22.517439 systemd[1]: sshd@13-147.75.49.127:22-124.220.165.94:43070.service: Deactivated successfully.
Feb 9 22:52:25.582617 systemd[1]: Started sshd@14-147.75.49.127:22-218.92.0.107:52459.service.
Feb 9 22:52:26.646760 sshd[4378]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.107 user=root
Feb 9 22:52:28.760001 sshd[4378]: Failed password for root from 218.92.0.107 port 52459 ssh2
Feb 9 22:52:30.643981 sshd[4378]: Failed password for root from 218.92.0.107 port 52459 ssh2
Feb 9 22:52:33.332243 sshd[4378]: Failed password for root from 218.92.0.107 port 52459 ssh2
Feb 9 22:52:35.590673 sshd[4378]: Received disconnect from 218.92.0.107 port 52459:11: [preauth]
Feb 9 22:52:35.590673 sshd[4378]: Disconnected from authenticating user root 218.92.0.107 port 52459 [preauth]
Feb 9 22:52:35.591223 sshd[4378]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.107 user=root
Feb 9 22:52:35.593194 systemd[1]: sshd@14-147.75.49.127:22-218.92.0.107:52459.service: Deactivated successfully.
Feb 9 22:52:35.756740 systemd[1]: Started sshd@15-147.75.49.127:22-218.92.0.107:13693.service.
Feb 9 22:52:36.827013 sshd[4382]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.107 user=root
Feb 9 22:52:38.979928 sshd[4382]: Failed password for root from 218.92.0.107 port 13693 ssh2
Feb 9 22:52:41.218375 sshd[4382]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 22:52:43.391098 sshd[4382]: Failed password for root from 218.92.0.107 port 13693 ssh2
Feb 9 22:52:47.998458 sshd[4382]: Failed password for root from 218.92.0.107 port 13693 ssh2
Feb 9 22:52:48.604897 systemd[1]: Started sshd@16-147.75.49.127:22-216.10.245.180:50884.service.
Feb 9 22:52:49.928036 sshd[4386]: Invalid user soso from 216.10.245.180 port 50884
Feb 9 22:52:49.934586 sshd[4386]: pam_faillock(sshd:auth): User unknown
Feb 9 22:52:49.935562 sshd[4386]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:52:49.935652 sshd[4386]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 22:52:49.936647 sshd[4386]: pam_faillock(sshd:auth): User unknown
Feb 9 22:52:50.000329 sshd[4382]: Received disconnect from 218.92.0.107 port 13693:11: [preauth]
Feb 9 22:52:50.000329 sshd[4382]: Disconnected from authenticating user root 218.92.0.107 port 13693 [preauth]
Feb 9 22:52:50.000903 sshd[4382]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.107 user=root
Feb 9 22:52:50.002850 systemd[1]: sshd@15-147.75.49.127:22-218.92.0.107:13693.service: Deactivated successfully.
Feb 9 22:52:50.157972 systemd[1]: Started sshd@17-147.75.49.127:22-218.92.0.107:48100.service.
Feb 9 22:52:51.192851 sshd[4390]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.107 user=root
Feb 9 22:52:51.873261 sshd[4386]: Failed password for invalid user soso from 216.10.245.180 port 50884 ssh2
Feb 9 22:52:52.749564 sshd[4386]: Received disconnect from 216.10.245.180 port 50884:11: Bye Bye [preauth]
Feb 9 22:52:52.749564 sshd[4386]: Disconnected from invalid user soso 216.10.245.180 port 50884 [preauth]
Feb 9 22:52:52.752045 systemd[1]: sshd@16-147.75.49.127:22-216.10.245.180:50884.service: Deactivated successfully.
Feb 9 22:52:53.405993 sshd[4390]: Failed password for root from 218.92.0.107 port 48100 ssh2
Feb 9 22:52:58.006203 sshd[4390]: Failed password for root from 218.92.0.107 port 48100 ssh2
Feb 9 22:52:59.733337 systemd[1]: Started sshd@18-147.75.49.127:22-157.230.254.228:35504.service.
Feb 9 22:53:01.045053 sshd[4396]: Invalid user wangying from 157.230.254.228 port 35504
Feb 9 22:53:01.051242 sshd[4396]: pam_faillock(sshd:auth): User unknown
Feb 9 22:53:01.052456 sshd[4396]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:53:01.052547 sshd[4396]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 22:53:01.053588 sshd[4396]: pam_faillock(sshd:auth): User unknown
Feb 9 22:53:01.604240 sshd[4390]: Failed password for root from 218.92.0.107 port 48100 ssh2
Feb 9 22:53:02.234767 sshd[4390]: Received disconnect from 218.92.0.107 port 48100:11: [preauth]
Feb 9 22:53:02.234767 sshd[4390]: Disconnected from authenticating user root 218.92.0.107 port 48100 [preauth]
Feb 9 22:53:02.235349 sshd[4390]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.107 user=root
Feb 9 22:53:02.237392 systemd[1]: sshd@17-147.75.49.127:22-218.92.0.107:48100.service: Deactivated successfully.
Feb 9 22:53:03.306635 sshd[4396]: Failed password for invalid user wangying from 157.230.254.228 port 35504 ssh2
Feb 9 22:53:05.542638 sshd[4396]: Received disconnect from 157.230.254.228 port 35504:11: Bye Bye [preauth]
Feb 9 22:53:05.542638 sshd[4396]: Disconnected from invalid user wangying 157.230.254.228 port 35504 [preauth]
Feb 9 22:53:05.545197 systemd[1]: sshd@18-147.75.49.127:22-157.230.254.228:35504.service: Deactivated successfully.
Feb 9 22:53:11.273889 systemd[1]: Started sshd@19-147.75.49.127:22-117.102.64.108:45064.service.
Feb 9 22:53:12.365552 sshd[4405]: Invalid user pnfsin from 117.102.64.108 port 45064
Feb 9 22:53:12.371705 sshd[4405]: pam_faillock(sshd:auth): User unknown
Feb 9 22:53:12.372705 sshd[4405]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:53:12.372793 sshd[4405]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 22:53:12.373845 sshd[4405]: pam_faillock(sshd:auth): User unknown
Feb 9 22:53:14.802712 sshd[4405]: Failed password for invalid user pnfsin from 117.102.64.108 port 45064 ssh2
Feb 9 22:53:15.225440 systemd[1]: Started sshd@20-147.75.49.127:22-124.220.165.94:60858.service.
Feb 9 22:53:16.065430 sshd[4407]: Invalid user zahra from 124.220.165.94 port 60858
Feb 9 22:53:16.071473 sshd[4407]: pam_faillock(sshd:auth): User unknown
Feb 9 22:53:16.072621 sshd[4407]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:53:16.072711 sshd[4407]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 22:53:16.073731 sshd[4407]: pam_faillock(sshd:auth): User unknown
Feb 9 22:53:16.430441 sshd[4405]: Received disconnect from 117.102.64.108 port 45064:11: Bye Bye [preauth]
Feb 9 22:53:16.430441 sshd[4405]: Disconnected from invalid user pnfsin 117.102.64.108 port 45064 [preauth]
Feb 9 22:53:16.432940 systemd[1]: sshd@19-147.75.49.127:22-117.102.64.108:45064.service: Deactivated successfully.
Feb 9 22:53:18.050930 sshd[4407]: Failed password for invalid user zahra from 124.220.165.94 port 60858 ssh2
Feb 9 22:53:18.573755 sshd[4407]: Received disconnect from 124.220.165.94 port 60858:11: Bye Bye [preauth]
Feb 9 22:53:18.573755 sshd[4407]: Disconnected from invalid user zahra 124.220.165.94 port 60858 [preauth]
Feb 9 22:53:18.576287 systemd[1]: sshd@20-147.75.49.127:22-124.220.165.94:60858.service: Deactivated successfully.
Feb 9 22:53:45.019313 systemd[1]: Started sshd@21-147.75.49.127:22-216.10.245.180:40010.service.
Feb 9 22:53:46.412740 sshd[4418]: Invalid user tetti from 216.10.245.180 port 40010
Feb 9 22:53:46.418792 sshd[4418]: pam_faillock(sshd:auth): User unknown
Feb 9 22:53:46.419844 sshd[4418]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:53:46.419962 sshd[4418]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 22:53:46.420940 sshd[4418]: pam_faillock(sshd:auth): User unknown
Feb 9 22:53:47.850994 sshd[4418]: Failed password for invalid user tetti from 216.10.245.180 port 40010 ssh2
Feb 9 22:53:48.700235 sshd[4418]: Received disconnect from 216.10.245.180 port 40010:11: Bye Bye [preauth]
Feb 9 22:53:48.700235 sshd[4418]: Disconnected from invalid user tetti 216.10.245.180 port 40010 [preauth]
Feb 9 22:53:48.702702 systemd[1]: sshd@21-147.75.49.127:22-216.10.245.180:40010.service: Deactivated successfully.
Feb 9 22:54:02.499474 systemd[1]: Started sshd@22-147.75.49.127:22-157.230.254.228:54138.service.
Feb 9 22:54:03.465419 sshd[4424]: Invalid user bheras from 157.230.254.228 port 54138
Feb 9 22:54:03.471502 sshd[4424]: pam_faillock(sshd:auth): User unknown
Feb 9 22:54:03.472661 sshd[4424]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:54:03.472750 sshd[4424]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 22:54:03.473817 sshd[4424]: pam_faillock(sshd:auth): User unknown
Feb 9 22:54:05.902890 sshd[4424]: Failed password for invalid user bheras from 157.230.254.228 port 54138 ssh2
Feb 9 22:54:06.976893 sshd[4424]: Received disconnect from 157.230.254.228 port 54138:11: Bye Bye [preauth]
Feb 9 22:54:06.976893 sshd[4424]: Disconnected from invalid user bheras 157.230.254.228 port 54138 [preauth]
Feb 9 22:54:06.979307 systemd[1]: sshd@22-147.75.49.127:22-157.230.254.228:54138.service: Deactivated successfully.
Feb 9 22:54:22.244466 systemd[1]: Started sshd@23-147.75.49.127:22-117.102.64.108:36134.service.
Feb 9 22:54:23.320045 sshd[4430]: Invalid user koolkain from 117.102.64.108 port 36134
Feb 9 22:54:23.326056 sshd[4430]: pam_faillock(sshd:auth): User unknown
Feb 9 22:54:23.327256 sshd[4430]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:54:23.327345 sshd[4430]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 22:54:23.328404 sshd[4430]: pam_faillock(sshd:auth): User unknown
Feb 9 22:54:23.644135 systemd[1]: Started sshd@24-147.75.49.127:22-124.220.165.94:50414.service.
Feb 9 22:54:24.469615 sshd[4432]: Invalid user yige from 124.220.165.94 port 50414
Feb 9 22:54:24.475814 sshd[4432]: pam_faillock(sshd:auth): User unknown
Feb 9 22:54:24.476853 sshd[4432]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:54:24.476976 sshd[4432]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 22:54:24.477887 sshd[4432]: pam_faillock(sshd:auth): User unknown
Feb 9 22:54:25.169725 sshd[4430]: Failed password for invalid user koolkain from 117.102.64.108 port 36134 ssh2
Feb 9 22:54:25.766375 sshd[4430]: Received disconnect from 117.102.64.108 port 36134:11: Bye Bye [preauth]
Feb 9 22:54:25.766375 sshd[4430]: Disconnected from invalid user koolkain 117.102.64.108 port 36134 [preauth]
Feb 9 22:54:25.768994 systemd[1]: sshd@23-147.75.49.127:22-117.102.64.108:36134.service: Deactivated successfully.
Feb 9 22:54:26.791407 sshd[4432]: Failed password for invalid user yige from 124.220.165.94 port 50414 ssh2
Feb 9 22:54:28.680458 sshd[4432]: Received disconnect from 124.220.165.94 port 50414:11: Bye Bye [preauth]
Feb 9 22:54:28.680458 sshd[4432]: Disconnected from invalid user yige 124.220.165.94 port 50414 [preauth]
Feb 9 22:54:28.682935 systemd[1]: sshd@24-147.75.49.127:22-124.220.165.94:50414.service: Deactivated successfully.
Feb 9 22:54:36.502219 update_engine[1546]: I0209 22:54:36.502094 1546 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 9 22:54:36.502219 update_engine[1546]: I0209 22:54:36.502171 1546 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 9 22:54:36.503337 update_engine[1546]: I0209 22:54:36.502956 1546 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 9 22:54:36.503916 update_engine[1546]: I0209 22:54:36.503808 1546 omaha_request_params.cc:62] Current group set to lts
Feb 9 22:54:36.504159 update_engine[1546]: I0209 22:54:36.504118 1546 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 9 22:54:36.504159 update_engine[1546]: I0209 22:54:36.504140 1546 update_attempter.cc:643] Scheduling an action processor start.
Feb 9 22:54:36.504382 update_engine[1546]: I0209 22:54:36.504174 1546 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 22:54:36.504382 update_engine[1546]: I0209 22:54:36.504242 1546 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 9 22:54:36.504575 update_engine[1546]: I0209 22:54:36.504380 1546 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 22:54:36.504575 update_engine[1546]: I0209 22:54:36.504397 1546 omaha_request_action.cc:271] Request:
Feb 9 22:54:36.504575 update_engine[1546]:
Feb 9 22:54:36.504575 update_engine[1546]:
Feb 9 22:54:36.504575 update_engine[1546]:
Feb 9 22:54:36.504575 update_engine[1546]:
Feb 9 22:54:36.504575 update_engine[1546]:
Feb 9 22:54:36.504575 update_engine[1546]:
Feb 9 22:54:36.504575 update_engine[1546]:
Feb 9 22:54:36.504575 update_engine[1546]:
Feb 9 22:54:36.504575 update_engine[1546]: I0209 22:54:36.504407 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 22:54:36.505609 locksmithd[1596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 9 22:54:36.507515 update_engine[1546]: I0209 22:54:36.507426 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 22:54:36.507728 update_engine[1546]: E0209 22:54:36.507647 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 22:54:36.507852 update_engine[1546]: I0209 22:54:36.507805 1546 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 9 22:54:43.615654 systemd[1]: Started sshd@25-147.75.49.127:22-216.10.245.180:57370.service.
Feb 9 22:54:44.959843 sshd[4440]: Invalid user shixt from 216.10.245.180 port 57370
Feb 9 22:54:44.966225 sshd[4440]: pam_faillock(sshd:auth): User unknown
Feb 9 22:54:44.967325 sshd[4440]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:54:44.967415 sshd[4440]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 22:54:44.968347 sshd[4440]: pam_faillock(sshd:auth): User unknown
Feb 9 22:54:46.412983 update_engine[1546]: I0209 22:54:46.412847 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 22:54:46.413881 update_engine[1546]: I0209 22:54:46.413357 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 22:54:46.413881 update_engine[1546]: E0209 22:54:46.413559 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 22:54:46.413881 update_engine[1546]: I0209 22:54:46.413733 1546 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 9 22:54:47.025834 sshd[4440]: Failed password for invalid user shixt from 216.10.245.180 port 57370 ssh2
Feb 9 22:54:48.139088 sshd[4440]: Received disconnect from 216.10.245.180 port 57370:11: Bye Bye [preauth]
Feb 9 22:54:48.139088 sshd[4440]: Disconnected from invalid user shixt 216.10.245.180 port 57370 [preauth]
Feb 9 22:54:48.141669 systemd[1]: sshd@25-147.75.49.127:22-216.10.245.180:57370.service: Deactivated successfully.
Feb 9 22:54:56.412996 update_engine[1546]: I0209 22:54:56.412887 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 22:54:56.414086 update_engine[1546]: I0209 22:54:56.413360 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 22:54:56.414086 update_engine[1546]: E0209 22:54:56.413565 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 22:54:56.414086 update_engine[1546]: I0209 22:54:56.413738 1546 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 9 22:55:04.745022 systemd[1]: Started sshd@26-147.75.49.127:22-157.230.254.228:44540.service.
Feb 9 22:55:05.747908 sshd[4446]: Invalid user jucemara from 157.230.254.228 port 44540
Feb 9 22:55:05.754003 sshd[4446]: pam_faillock(sshd:auth): User unknown
Feb 9 22:55:05.755197 sshd[4446]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:55:05.755287 sshd[4446]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 22:55:05.756336 sshd[4446]: pam_faillock(sshd:auth): User unknown
Feb 9 22:55:06.412997 update_engine[1546]: I0209 22:55:06.412890 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 22:55:06.413904 update_engine[1546]: I0209 22:55:06.413359 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 22:55:06.413904 update_engine[1546]: E0209 22:55:06.413571 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 22:55:06.413904 update_engine[1546]: I0209 22:55:06.413720 1546 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 22:55:06.413904 update_engine[1546]: I0209 22:55:06.413738 1546 omaha_request_action.cc:621] Omaha request response:
Feb 9 22:55:06.413904 update_engine[1546]: E0209 22:55:06.413905 1546 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.413935 1546 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.413945 1546 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.413954 1546 update_attempter.cc:306] Processing Done.
Feb 9 22:55:06.414409 update_engine[1546]: E0209 22:55:06.413980 1546 update_attempter.cc:619] Update failed.
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.413990 1546 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.413998 1546 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.414007 1546 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.414158 1546 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.414211 1546 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.414219 1546 omaha_request_action.cc:271] Request:
Feb 9 22:55:06.414409 update_engine[1546]:
Feb 9 22:55:06.414409 update_engine[1546]:
Feb 9 22:55:06.414409 update_engine[1546]:
Feb 9 22:55:06.414409 update_engine[1546]:
Feb 9 22:55:06.414409 update_engine[1546]:
Feb 9 22:55:06.414409 update_engine[1546]:
Feb 9 22:55:06.414409 update_engine[1546]: I0209 22:55:06.414230 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 22:55:06.416007 update_engine[1546]: I0209 22:55:06.414541 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 22:55:06.416007 update_engine[1546]: E0209 22:55:06.414707 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 22:55:06.416007 update_engine[1546]: I0209 22:55:06.414840 1546 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 22:55:06.416007 update_engine[1546]: I0209 22:55:06.414855 1546 omaha_request_action.cc:621] Omaha request response:
Feb 9 22:55:06.416007 update_engine[1546]: I0209 22:55:06.414881 1546 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 22:55:06.416007 update_engine[1546]: I0209 22:55:06.414889 1546 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 22:55:06.416007 update_engine[1546]: I0209 22:55:06.414897 1546 update_attempter.cc:306] Processing Done.
Feb 9 22:55:06.416007 update_engine[1546]: I0209 22:55:06.414906 1546 update_attempter.cc:310] Error event sent.
Feb 9 22:55:06.416007 update_engine[1546]: I0209 22:55:06.414932 1546 update_check_scheduler.cc:74] Next update check in 43m39s
Feb 9 22:55:06.416837 locksmithd[1596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 9 22:55:06.416837 locksmithd[1596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 9 22:55:07.030113 sshd[4446]: Failed password for invalid user jucemara from 157.230.254.228 port 44540 ssh2
Feb 9 22:55:07.278184 sshd[4446]: Received disconnect from 157.230.254.228 port 44540:11: Bye Bye [preauth]
Feb 9 22:55:07.278184 sshd[4446]: Disconnected from invalid user jucemara 157.230.254.228 port 44540 [preauth]
Feb 9 22:55:07.280680 systemd[1]: sshd@26-147.75.49.127:22-157.230.254.228:44540.service: Deactivated successfully.
Feb 9 22:55:29.979126 systemd[1]: Started sshd@27-147.75.49.127:22-117.102.64.108:55468.service.
Feb 9 22:55:31.054156 sshd[4453]: Invalid user guchang from 117.102.64.108 port 55468
Feb 9 22:55:31.056500 sshd[4453]: pam_faillock(sshd:auth): User unknown
Feb 9 22:55:31.056981 sshd[4453]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:55:31.057016 sshd[4453]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 22:55:31.057419 sshd[4453]: pam_faillock(sshd:auth): User unknown
Feb 9 22:55:32.811535 systemd[1]: Started sshd@28-147.75.49.127:22-124.220.165.94:39970.service.
Feb 9 22:55:32.899188 sshd[4453]: Failed password for invalid user guchang from 117.102.64.108 port 55468 ssh2
Feb 9 22:55:33.635201 sshd[4458]: Invalid user amandabackup from 124.220.165.94 port 39970
Feb 9 22:55:33.641319 sshd[4458]: pam_faillock(sshd:auth): User unknown
Feb 9 22:55:33.642480 sshd[4458]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:55:33.642573 sshd[4458]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 22:55:33.643633 sshd[4458]: pam_faillock(sshd:auth): User unknown
Feb 9 22:55:34.481922 sshd[4453]: Received disconnect from 117.102.64.108 port 55468:11: Bye Bye [preauth]
Feb 9 22:55:34.481922 sshd[4453]: Disconnected from invalid user guchang 117.102.64.108 port 55468 [preauth]
Feb 9 22:55:34.484426 systemd[1]: sshd@27-147.75.49.127:22-117.102.64.108:55468.service: Deactivated successfully.
Feb 9 22:55:36.429035 sshd[4458]: Failed password for invalid user amandabackup from 124.220.165.94 port 39970 ssh2
Feb 9 22:55:38.371037 sshd[4458]: Received disconnect from 124.220.165.94 port 39970:11: Bye Bye [preauth]
Feb 9 22:55:38.371037 sshd[4458]: Disconnected from invalid user amandabackup 124.220.165.94 port 39970 [preauth]
Feb 9 22:55:38.372435 systemd[1]: sshd@28-147.75.49.127:22-124.220.165.94:39970.service: Deactivated successfully.
Feb 9 22:55:40.466423 systemd[1]: Started sshd@29-147.75.49.127:22-216.10.245.180:46496.service.
Feb 9 22:55:41.815672 sshd[4467]: Invalid user wangyong from 216.10.245.180 port 46496
Feb 9 22:55:41.821654 sshd[4467]: pam_faillock(sshd:auth): User unknown
Feb 9 22:55:41.822716 sshd[4467]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:55:41.822807 sshd[4467]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 22:55:41.823716 sshd[4467]: pam_faillock(sshd:auth): User unknown
Feb 9 22:55:43.038663 sshd[4467]: Failed password for invalid user wangyong from 216.10.245.180 port 46496 ssh2
Feb 9 22:55:44.409348 sshd[4467]: Received disconnect from 216.10.245.180 port 46496:11: Bye Bye [preauth]
Feb 9 22:55:44.409348 sshd[4467]: Disconnected from invalid user wangyong 216.10.245.180 port 46496 [preauth]
Feb 9 22:55:44.411778 systemd[1]: sshd@29-147.75.49.127:22-216.10.245.180:46496.service: Deactivated successfully.
Feb 9 22:56:05.647610 systemd[1]: Started sshd@30-147.75.49.127:22-157.230.254.228:34932.service.
Feb 9 22:56:06.977217 sshd[4473]: Invalid user h from 157.230.254.228 port 34932
Feb 9 22:56:06.983412 sshd[4473]: pam_faillock(sshd:auth): User unknown
Feb 9 22:56:06.984527 sshd[4473]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:56:06.984618 sshd[4473]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 22:56:06.985546 sshd[4473]: pam_faillock(sshd:auth): User unknown
Feb 9 22:56:08.632051 sshd[4473]: Failed password for invalid user h from 157.230.254.228 port 34932 ssh2
Feb 9 22:56:09.184748 sshd[4473]: Received disconnect from 157.230.254.228 port 34932:11: Bye Bye [preauth]
Feb 9 22:56:09.184748 sshd[4473]: Disconnected from invalid user h 157.230.254.228 port 34932 [preauth]
Feb 9 22:56:09.187246 systemd[1]: sshd@30-147.75.49.127:22-157.230.254.228:34932.service: Deactivated successfully.
Feb 9 22:56:34.216317 systemd[1]: Started sshd@31-147.75.49.127:22-216.10.245.180:35618.service.
Feb 9 22:56:35.560331 sshd[4481]: Invalid user marco from 216.10.245.180 port 35618
Feb 9 22:56:35.566413 sshd[4481]: pam_faillock(sshd:auth): User unknown
Feb 9 22:56:35.567225 sshd[4481]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:56:35.567261 sshd[4481]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 22:56:35.567470 sshd[4481]: pam_faillock(sshd:auth): User unknown
Feb 9 22:56:36.852578 systemd[1]: Started sshd@32-147.75.49.127:22-117.102.64.108:46536.service.
Feb 9 22:56:37.529107 sshd[4481]: Failed password for invalid user marco from 216.10.245.180 port 35618 ssh2
Feb 9 22:56:37.905916 sshd[4481]: Received disconnect from 216.10.245.180 port 35618:11: Bye Bye [preauth]
Feb 9 22:56:37.905916 sshd[4481]: Disconnected from invalid user marco 216.10.245.180 port 35618 [preauth]
Feb 9 22:56:37.908394 systemd[1]: sshd@31-147.75.49.127:22-216.10.245.180:35618.service: Deactivated successfully.
Feb 9 22:56:37.926206 sshd[4485]: Invalid user ebiram from 117.102.64.108 port 46536
Feb 9 22:56:37.927436 sshd[4485]: pam_faillock(sshd:auth): User unknown
Feb 9 22:56:37.927673 sshd[4485]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:56:37.927694 sshd[4485]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 22:56:37.927886 sshd[4485]: pam_faillock(sshd:auth): User unknown
Feb 9 22:56:39.498553 sshd[4485]: Failed password for invalid user ebiram from 117.102.64.108 port 46536 ssh2
Feb 9 22:56:40.226108 sshd[4485]: Received disconnect from 117.102.64.108 port 46536:11: Bye Bye [preauth]
Feb 9 22:56:40.226108 sshd[4485]: Disconnected from invalid user ebiram 117.102.64.108 port 46536 [preauth]
Feb 9 22:56:40.228547 systemd[1]: sshd@32-147.75.49.127:22-117.102.64.108:46536.service: Deactivated successfully.
Feb 9 22:56:48.181703 systemd[1]: Started sshd@33-147.75.49.127:22-124.220.165.94:57760.service.
Feb 9 22:56:49.015379 sshd[4492]: Invalid user initial from 124.220.165.94 port 57760
Feb 9 22:56:49.021700 sshd[4492]: pam_faillock(sshd:auth): User unknown
Feb 9 22:56:49.022685 sshd[4492]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:56:49.022774 sshd[4492]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 22:56:49.023792 sshd[4492]: pam_faillock(sshd:auth): User unknown
Feb 9 22:56:50.574599 sshd[4492]: Failed password for invalid user initial from 124.220.165.94 port 57760 ssh2
Feb 9 22:56:50.945150 sshd[4492]: Received disconnect from 124.220.165.94 port 57760:11: Bye Bye [preauth]
Feb 9 22:56:50.945150 sshd[4492]: Disconnected from invalid user initial 124.220.165.94 port 57760 [preauth]
Feb 9 22:56:50.947615 systemd[1]: sshd@33-147.75.49.127:22-124.220.165.94:57760.service: Deactivated successfully.
Feb 9 22:57:06.191551 systemd[1]: Started sshd@34-147.75.49.127:22-157.230.254.228:53560.service.
Feb 9 22:57:07.191825 sshd[4498]: Invalid user skidera from 157.230.254.228 port 53560
Feb 9 22:57:07.193645 sshd[4498]: pam_faillock(sshd:auth): User unknown
Feb 9 22:57:07.193998 sshd[4498]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:57:07.194026 sshd[4498]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 22:57:07.194335 sshd[4498]: pam_faillock(sshd:auth): User unknown
Feb 9 22:57:09.216601 sshd[4498]: Failed password for invalid user skidera from 157.230.254.228 port 53560 ssh2
Feb 9 22:57:09.481120 sshd[4498]: Received disconnect from 157.230.254.228 port 53560:11: Bye Bye [preauth]
Feb 9 22:57:09.481120 sshd[4498]: Disconnected from invalid user skidera 157.230.254.228 port 53560 [preauth]
Feb 9 22:57:09.483643 systemd[1]: sshd@34-147.75.49.127:22-157.230.254.228:53560.service: Deactivated successfully.
Feb 9 22:57:28.505715 systemd[1]: Started sshd@35-147.75.49.127:22-139.178.89.65:55778.service.
Feb 9 22:57:28.542173 sshd[4505]: Accepted publickey for core from 139.178.89.65 port 55778 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:57:28.543112 sshd[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:57:28.546614 systemd-logind[1544]: New session 8 of user core.
Feb 9 22:57:28.547310 systemd[1]: Started session-8.scope.
Feb 9 22:57:28.683540 sshd[4505]: pam_unix(sshd:session): session closed for user core
Feb 9 22:57:28.685029 systemd[1]: sshd@35-147.75.49.127:22-139.178.89.65:55778.service: Deactivated successfully.
Feb 9 22:57:28.685673 systemd-logind[1544]: Session 8 logged out. Waiting for processes to exit.
Feb 9 22:57:28.685680 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 22:57:28.686379 systemd-logind[1544]: Removed session 8.
Feb 9 22:57:31.748461 systemd[1]: Started sshd@36-147.75.49.127:22-216.10.245.180:52978.service.
Feb 9 22:57:33.116544 sshd[4533]: Invalid user zhaosb from 216.10.245.180 port 52978
Feb 9 22:57:33.122540 sshd[4533]: pam_faillock(sshd:auth): User unknown
Feb 9 22:57:33.123677 sshd[4533]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:57:33.123767 sshd[4533]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 22:57:33.124693 sshd[4533]: pam_faillock(sshd:auth): User unknown
Feb 9 22:57:33.690911 systemd[1]: Started sshd@37-147.75.49.127:22-139.178.89.65:55790.service.
Feb 9 22:57:33.725280 sshd[4535]: Accepted publickey for core from 139.178.89.65 port 55790 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:57:33.726274 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:57:33.729813 systemd-logind[1544]: New session 9 of user core.
Feb 9 22:57:33.730652 systemd[1]: Started session-9.scope.
Feb 9 22:57:33.823741 sshd[4535]: pam_unix(sshd:session): session closed for user core
Feb 9 22:57:33.826049 systemd[1]: sshd@37-147.75.49.127:22-139.178.89.65:55790.service: Deactivated successfully.
Feb 9 22:57:33.827217 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 22:57:33.827221 systemd-logind[1544]: Session 9 logged out. Waiting for processes to exit.
Feb 9 22:57:33.828329 systemd-logind[1544]: Removed session 9.
Feb 9 22:57:35.383120 sshd[4533]: Failed password for invalid user zhaosb from 216.10.245.180 port 52978 ssh2
Feb 9 22:57:37.828441 sshd[4533]: Received disconnect from 216.10.245.180 port 52978:11: Bye Bye [preauth]
Feb 9 22:57:37.828441 sshd[4533]: Disconnected from invalid user zhaosb 216.10.245.180 port 52978 [preauth]
Feb 9 22:57:37.830946 systemd[1]: sshd@36-147.75.49.127:22-216.10.245.180:52978.service: Deactivated successfully.
Feb 9 22:57:38.831726 systemd[1]: Started sshd@38-147.75.49.127:22-139.178.89.65:51466.service.
Feb 9 22:57:38.870295 sshd[4567]: Accepted publickey for core from 139.178.89.65 port 51466 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:57:38.871195 sshd[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:57:38.874514 systemd-logind[1544]: New session 10 of user core.
Feb 9 22:57:38.875218 systemd[1]: Started session-10.scope.
Feb 9 22:57:39.021612 sshd[4567]: pam_unix(sshd:session): session closed for user core
Feb 9 22:57:39.023009 systemd[1]: sshd@38-147.75.49.127:22-139.178.89.65:51466.service: Deactivated successfully.
Feb 9 22:57:39.023616 systemd-logind[1544]: Session 10 logged out. Waiting for processes to exit.
Feb 9 22:57:39.023631 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 22:57:39.024087 systemd-logind[1544]: Removed session 10.
Feb 9 22:57:42.191242 systemd[1]: Started sshd@39-147.75.49.127:22-117.102.64.108:37678.service.
Feb 9 22:57:43.326616 sshd[4596]: Invalid user arambahari from 117.102.64.108 port 37678
Feb 9 22:57:43.328652 sshd[4596]: pam_faillock(sshd:auth): User unknown
Feb 9 22:57:43.328989 sshd[4596]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:57:43.329019 sshd[4596]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 22:57:43.329300 sshd[4596]: pam_faillock(sshd:auth): User unknown
Feb 9 22:57:44.028434 systemd[1]: Started sshd@40-147.75.49.127:22-139.178.89.65:51472.service.
Feb 9 22:57:44.063743 sshd[4598]: Accepted publickey for core from 139.178.89.65 port 51472 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:57:44.064746 sshd[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:57:44.068387 systemd-logind[1544]: New session 11 of user core.
Feb 9 22:57:44.069614 systemd[1]: Started session-11.scope.
Feb 9 22:57:44.158592 sshd[4598]: pam_unix(sshd:session): session closed for user core
Feb 9 22:57:44.160359 systemd[1]: Started sshd@41-147.75.49.127:22-139.178.89.65:51480.service.
Feb 9 22:57:44.160754 systemd[1]: sshd@40-147.75.49.127:22-139.178.89.65:51472.service: Deactivated successfully.
Feb 9 22:57:44.161402 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 22:57:44.161441 systemd-logind[1544]: Session 11 logged out. Waiting for processes to exit.
Feb 9 22:57:44.161861 systemd-logind[1544]: Removed session 11.
Feb 9 22:57:44.196247 sshd[4624]: Accepted publickey for core from 139.178.89.65 port 51480 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:57:44.199431 sshd[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:57:44.210093 systemd-logind[1544]: New session 12 of user core.
Feb 9 22:57:44.212538 systemd[1]: Started session-12.scope.
Feb 9 22:57:44.695414 sshd[4624]: pam_unix(sshd:session): session closed for user core
Feb 9 22:57:44.697116 systemd[1]: Started sshd@42-147.75.49.127:22-139.178.89.65:51494.service.
Feb 9 22:57:44.697431 systemd[1]: sshd@41-147.75.49.127:22-139.178.89.65:51480.service: Deactivated successfully.
Feb 9 22:57:44.698082 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 22:57:44.698096 systemd-logind[1544]: Session 12 logged out. Waiting for processes to exit.
Feb 9 22:57:44.698591 systemd-logind[1544]: Removed session 12.
Feb 9 22:57:44.732961 sshd[4650]: Accepted publickey for core from 139.178.89.65 port 51494 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:57:44.736592 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:57:44.747114 systemd-logind[1544]: New session 13 of user core.
Feb 9 22:57:44.749541 systemd[1]: Started session-13.scope.
Feb 9 22:57:44.897723 sshd[4650]: pam_unix(sshd:session): session closed for user core
Feb 9 22:57:44.899232 systemd[1]: sshd@42-147.75.49.127:22-139.178.89.65:51494.service: Deactivated successfully.
Feb 9 22:57:44.899855 systemd-logind[1544]: Session 13 logged out. Waiting for processes to exit.
Feb 9 22:57:44.899902 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 22:57:44.900655 systemd-logind[1544]: Removed session 13.
Feb 9 22:57:44.960336 sshd[4596]: Failed password for invalid user arambahari from 117.102.64.108 port 37678 ssh2
Feb 9 22:57:45.520161 sshd[4596]: Received disconnect from 117.102.64.108 port 37678:11: Bye Bye [preauth]
Feb 9 22:57:45.520161 sshd[4596]: Disconnected from invalid user arambahari 117.102.64.108 port 37678 [preauth]
Feb 9 22:57:45.522666 systemd[1]: sshd@39-147.75.49.127:22-117.102.64.108:37678.service: Deactivated successfully.
Feb 9 22:57:49.904282 systemd[1]: Started sshd@43-147.75.49.127:22-139.178.89.65:35722.service.
Feb 9 22:57:49.939072 sshd[4682]: Accepted publickey for core from 139.178.89.65 port 35722 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:57:49.940247 sshd[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:57:49.943921 systemd-logind[1544]: New session 14 of user core.
Feb 9 22:57:49.944710 systemd[1]: Started session-14.scope.
Feb 9 22:57:50.034014 sshd[4682]: pam_unix(sshd:session): session closed for user core
Feb 9 22:57:50.035555 systemd[1]: sshd@43-147.75.49.127:22-139.178.89.65:35722.service: Deactivated successfully.
Feb 9 22:57:50.036204 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 22:57:50.036253 systemd-logind[1544]: Session 14 logged out. Waiting for processes to exit.
Feb 9 22:57:50.036762 systemd-logind[1544]: Removed session 14.
Feb 9 22:57:55.040656 systemd[1]: Started sshd@44-147.75.49.127:22-139.178.89.65:35728.service.
Feb 9 22:57:55.075545 sshd[4710]: Accepted publickey for core from 139.178.89.65 port 35728 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:57:55.076517 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:57:55.079945 systemd-logind[1544]: New session 15 of user core.
Feb 9 22:57:55.080880 systemd[1]: Started session-15.scope.
Feb 9 22:57:55.171584 sshd[4710]: pam_unix(sshd:session): session closed for user core
Feb 9 22:57:55.173090 systemd[1]: sshd@44-147.75.49.127:22-139.178.89.65:35728.service: Deactivated successfully.
Feb 9 22:57:55.173685 systemd-logind[1544]: Session 15 logged out. Waiting for processes to exit.
Feb 9 22:57:55.173690 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 22:57:55.174229 systemd-logind[1544]: Removed session 15.
Feb 9 22:58:00.173992 systemd[1]: Started sshd@45-147.75.49.127:22-139.178.89.65:40968.service.
Feb 9 22:58:00.210044 sshd[4735]: Accepted publickey for core from 139.178.89.65 port 40968 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:00.213263 sshd[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:00.223880 systemd-logind[1544]: New session 16 of user core.
Feb 9 22:58:00.226265 systemd[1]: Started session-16.scope.
Feb 9 22:58:00.319198 sshd[4735]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:00.320625 systemd[1]: sshd@45-147.75.49.127:22-139.178.89.65:40968.service: Deactivated successfully.
Feb 9 22:58:00.321280 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 22:58:00.321291 systemd-logind[1544]: Session 16 logged out. Waiting for processes to exit.
Feb 9 22:58:00.321715 systemd-logind[1544]: Removed session 16.
Feb 9 22:58:01.026211 systemd[1]: Started sshd@46-147.75.49.127:22-124.220.165.94:47318.service.
Feb 9 22:58:01.832353 sshd[4761]: Invalid user mehrnaz from 124.220.165.94 port 47318
Feb 9 22:58:01.838401 sshd[4761]: pam_faillock(sshd:auth): User unknown
Feb 9 22:58:01.839401 sshd[4761]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:58:01.839491 sshd[4761]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 22:58:01.840382 sshd[4761]: pam_faillock(sshd:auth): User unknown
Feb 9 22:58:03.943253 sshd[4761]: Failed password for invalid user mehrnaz from 124.220.165.94 port 47318 ssh2
Feb 9 22:58:05.325491 systemd[1]: Started sshd@47-147.75.49.127:22-139.178.89.65:40970.service.
Feb 9 22:58:05.360070 sshd[4763]: Accepted publickey for core from 139.178.89.65 port 40970 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:05.361050 sshd[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:05.364589 systemd-logind[1544]: New session 17 of user core.
Feb 9 22:58:05.365386 systemd[1]: Started session-17.scope.
Feb 9 22:58:05.455017 sshd[4763]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:05.456548 systemd[1]: sshd@47-147.75.49.127:22-139.178.89.65:40970.service: Deactivated successfully.
Feb 9 22:58:05.457171 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 22:58:05.457213 systemd-logind[1544]: Session 17 logged out. Waiting for processes to exit.
Feb 9 22:58:05.457735 systemd-logind[1544]: Removed session 17.
Feb 9 22:58:06.066552 sshd[4761]: Received disconnect from 124.220.165.94 port 47318:11: Bye Bye [preauth]
Feb 9 22:58:06.066552 sshd[4761]: Disconnected from invalid user mehrnaz 124.220.165.94 port 47318 [preauth]
Feb 9 22:58:06.069098 systemd[1]: sshd@46-147.75.49.127:22-124.220.165.94:47318.service: Deactivated successfully.
Feb 9 22:58:07.482744 systemd[1]: Started sshd@48-147.75.49.127:22-157.230.254.228:43960.service.
Feb 9 22:58:08.819880 sshd[4795]: Invalid user mjtech from 157.230.254.228 port 43960
Feb 9 22:58:08.825898 sshd[4795]: pam_faillock(sshd:auth): User unknown
Feb 9 22:58:08.826985 sshd[4795]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:58:08.827071 sshd[4795]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 22:58:08.828052 sshd[4795]: pam_faillock(sshd:auth): User unknown
Feb 9 22:58:10.457439 systemd[1]: Started sshd@49-147.75.49.127:22-139.178.89.65:40896.service.
Feb 9 22:58:10.492839 sshd[4797]: Accepted publickey for core from 139.178.89.65 port 40896 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:10.493796 sshd[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:10.496740 systemd-logind[1544]: New session 18 of user core.
Feb 9 22:58:10.497581 systemd[1]: Started session-18.scope.
Feb 9 22:58:10.586043 sshd[4797]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:10.587564 systemd[1]: sshd@49-147.75.49.127:22-139.178.89.65:40896.service: Deactivated successfully.
Feb 9 22:58:10.588215 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 22:58:10.588226 systemd-logind[1544]: Session 18 logged out. Waiting for processes to exit.
Feb 9 22:58:10.588783 systemd-logind[1544]: Removed session 18.
Feb 9 22:58:10.890106 sshd[4795]: Failed password for invalid user mjtech from 157.230.254.228 port 43960 ssh2
Feb 9 22:58:11.212369 sshd[4795]: Received disconnect from 157.230.254.228 port 43960:11: Bye Bye [preauth]
Feb 9 22:58:11.212369 sshd[4795]: Disconnected from invalid user mjtech 157.230.254.228 port 43960 [preauth]
Feb 9 22:58:11.214747 systemd[1]: sshd@48-147.75.49.127:22-157.230.254.228:43960.service: Deactivated successfully.
Feb 9 22:58:15.593214 systemd[1]: Started sshd@50-147.75.49.127:22-139.178.89.65:40908.service.
Feb 9 22:58:15.628793 sshd[4826]: Accepted publickey for core from 139.178.89.65 port 40908 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:15.631985 sshd[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:15.642491 systemd-logind[1544]: New session 19 of user core.
Feb 9 22:58:15.644939 systemd[1]: Started session-19.scope.
Feb 9 22:58:15.738407 sshd[4826]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:15.739839 systemd[1]: sshd@50-147.75.49.127:22-139.178.89.65:40908.service: Deactivated successfully.
Feb 9 22:58:15.740539 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 22:58:15.740583 systemd-logind[1544]: Session 19 logged out. Waiting for processes to exit.
Feb 9 22:58:15.741168 systemd-logind[1544]: Removed session 19.
Feb 9 22:58:20.745371 systemd[1]: Started sshd@51-147.75.49.127:22-139.178.89.65:33552.service.
Feb 9 22:58:20.780435 sshd[4853]: Accepted publickey for core from 139.178.89.65 port 33552 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:20.781644 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:20.785801 systemd-logind[1544]: New session 20 of user core.
Feb 9 22:58:20.786969 systemd[1]: Started session-20.scope.
Feb 9 22:58:20.880025 sshd[4853]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:20.881528 systemd[1]: sshd@51-147.75.49.127:22-139.178.89.65:33552.service: Deactivated successfully.
Feb 9 22:58:20.882187 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 22:58:20.882229 systemd-logind[1544]: Session 20 logged out. Waiting for processes to exit.
Feb 9 22:58:20.882773 systemd-logind[1544]: Removed session 20.
Feb 9 22:58:25.887032 systemd[1]: Started sshd@52-147.75.49.127:22-139.178.89.65:33554.service.
Feb 9 22:58:25.922149 sshd[4879]: Accepted publickey for core from 139.178.89.65 port 33554 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:25.923123 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:25.926494 systemd-logind[1544]: New session 21 of user core.
Feb 9 22:58:25.927269 systemd[1]: Started session-21.scope.
Feb 9 22:58:26.016878 sshd[4879]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:26.018526 systemd[1]: sshd@52-147.75.49.127:22-139.178.89.65:33554.service: Deactivated successfully.
Feb 9 22:58:26.019281 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 22:58:26.019332 systemd-logind[1544]: Session 21 logged out. Waiting for processes to exit.
Feb 9 22:58:26.019852 systemd-logind[1544]: Removed session 21.
Feb 9 22:58:28.057735 systemd[1]: Started sshd@53-147.75.49.127:22-216.10.245.180:42104.service.
Feb 9 22:58:29.428185 sshd[4905]: Invalid user saxon from 216.10.245.180 port 42104
Feb 9 22:58:29.434269 sshd[4905]: pam_faillock(sshd:auth): User unknown
Feb 9 22:58:29.435420 sshd[4905]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:58:29.435513 sshd[4905]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 22:58:29.436613 sshd[4905]: pam_faillock(sshd:auth): User unknown
Feb 9 22:58:31.024346 systemd[1]: Started sshd@54-147.75.49.127:22-139.178.89.65:41472.service.
Feb 9 22:58:31.047077 sshd[4905]: Failed password for invalid user saxon from 216.10.245.180 port 42104 ssh2
Feb 9 22:58:31.058367 sshd[4907]: Accepted publickey for core from 139.178.89.65 port 41472 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:31.059352 sshd[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:31.062650 systemd-logind[1544]: New session 22 of user core.
Feb 9 22:58:31.063417 systemd[1]: Started session-22.scope.
Feb 9 22:58:31.153173 sshd[4907]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:31.154672 systemd[1]: sshd@54-147.75.49.127:22-139.178.89.65:41472.service: Deactivated successfully.
Feb 9 22:58:31.155329 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 22:58:31.155371 systemd-logind[1544]: Session 22 logged out. Waiting for processes to exit.
Feb 9 22:58:31.155851 systemd-logind[1544]: Removed session 22.
Feb 9 22:58:31.559147 sshd[4905]: Received disconnect from 216.10.245.180 port 42104:11: Bye Bye [preauth]
Feb 9 22:58:31.559147 sshd[4905]: Disconnected from invalid user saxon 216.10.245.180 port 42104 [preauth]
Feb 9 22:58:31.561609 systemd[1]: sshd@53-147.75.49.127:22-216.10.245.180:42104.service: Deactivated successfully.
Feb 9 22:58:36.156946 systemd[1]: Started sshd@55-147.75.49.127:22-139.178.89.65:41474.service.
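The repeated "Invalid user … from …" entries above (mehrnaz, mjtech, saxon, and later names, each followed by pam_faillock/pam_unix failures and a "Bye Bye [preauth]" disconnect) are the signature of SSH brute-force scanning. As an illustration only, and not part of the journal itself, attempts like these can be tallied per source address with a minimal sketch:

```python
import re
from collections import Counter

# Matches sshd entries of the form seen in this log:
#   sshd[4905]: Invalid user saxon from 216.10.245.180 port 42104
INVALID_USER = re.compile(r"Invalid user (\S+) from (\d+(?:\.\d+){3}) port \d+")

def count_probes(lines):
    """Return a Counter mapping source IP -> number of invalid-user attempts."""
    hits = Counter()
    for line in lines:
        m = INVALID_USER.search(line)
        if m:
            hits[m.group(2)] += 1
    return hits

# Sample entries copied from the log above (process IDs retained).
sample = [
    "sshd[4761]: Invalid user mehrnaz from 124.220.165.94 port 47318",
    "sshd[4795]: Invalid user mjtech from 157.230.254.228 port 43960",
    "sshd[4905]: Invalid user saxon from 216.10.245.180 port 42104",
]
print(count_probes(sample))
```

Fed the full journal rather than this three-line sample, the same counter would surface the scanners (124.220.165.94, 157.230.254.228, 216.10.245.180, 117.102.64.108) that recur through the rest of this section.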
Feb 9 22:58:36.191929 sshd[4932]: Accepted publickey for core from 139.178.89.65 port 41474 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:36.195097 sshd[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:36.205876 systemd-logind[1544]: New session 23 of user core.
Feb 9 22:58:36.208187 systemd[1]: Started session-23.scope.
Feb 9 22:58:36.304067 sshd[4932]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:36.305698 systemd[1]: sshd@55-147.75.49.127:22-139.178.89.65:41474.service: Deactivated successfully.
Feb 9 22:58:36.306477 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 22:58:36.306522 systemd-logind[1544]: Session 23 logged out. Waiting for processes to exit.
Feb 9 22:58:36.307090 systemd-logind[1544]: Removed session 23.
Feb 9 22:58:41.310840 systemd[1]: Started sshd@56-147.75.49.127:22-139.178.89.65:47456.service.
Feb 9 22:58:41.346331 sshd[4960]: Accepted publickey for core from 139.178.89.65 port 47456 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:41.349486 sshd[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:41.360040 systemd-logind[1544]: New session 24 of user core.
Feb 9 22:58:41.362597 systemd[1]: Started session-24.scope.
Feb 9 22:58:41.482246 sshd[4960]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:41.484223 systemd[1]: sshd@56-147.75.49.127:22-139.178.89.65:47456.service: Deactivated successfully.
Feb 9 22:58:41.485170 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 22:58:41.485212 systemd-logind[1544]: Session 24 logged out. Waiting for processes to exit.
Feb 9 22:58:41.485871 systemd-logind[1544]: Removed session 24.
Feb 9 22:58:46.489987 systemd[1]: Started sshd@57-147.75.49.127:22-139.178.89.65:47462.service.
Feb 9 22:58:46.528018 sshd[4986]: Accepted publickey for core from 139.178.89.65 port 47462 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:46.528956 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:46.532228 systemd-logind[1544]: New session 25 of user core.
Feb 9 22:58:46.532855 systemd[1]: Started session-25.scope.
Feb 9 22:58:46.621756 sshd[4986]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:46.623254 systemd[1]: sshd@57-147.75.49.127:22-139.178.89.65:47462.service: Deactivated successfully.
Feb 9 22:58:46.623818 systemd-logind[1544]: Session 25 logged out. Waiting for processes to exit.
Feb 9 22:58:46.623824 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 22:58:46.624468 systemd-logind[1544]: Removed session 25.
Feb 9 22:58:51.628617 systemd[1]: Started sshd@58-147.75.49.127:22-139.178.89.65:57304.service.
Feb 9 22:58:51.663199 sshd[5011]: Accepted publickey for core from 139.178.89.65 port 57304 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:51.664193 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:51.667692 systemd-logind[1544]: New session 26 of user core.
Feb 9 22:58:51.668416 systemd[1]: Started session-26.scope.
Feb 9 22:58:51.757052 sshd[5011]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:51.758664 systemd[1]: sshd@58-147.75.49.127:22-139.178.89.65:57304.service: Deactivated successfully.
Feb 9 22:58:51.759423 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 22:58:51.759465 systemd-logind[1544]: Session 26 logged out. Waiting for processes to exit.
Feb 9 22:58:51.760094 systemd-logind[1544]: Removed session 26.
Feb 9 22:58:51.996607 systemd[1]: Started sshd@59-147.75.49.127:22-117.102.64.108:56984.service.
Feb 9 22:58:53.086466 sshd[5037]: Invalid user samaneh from 117.102.64.108 port 56984
Feb 9 22:58:53.092689 sshd[5037]: pam_faillock(sshd:auth): User unknown
Feb 9 22:58:53.093964 sshd[5037]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:58:53.094076 sshd[5037]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 22:58:53.095218 sshd[5037]: pam_faillock(sshd:auth): User unknown
Feb 9 22:58:55.338141 sshd[5037]: Failed password for invalid user samaneh from 117.102.64.108 port 56984 ssh2
Feb 9 22:58:56.764538 systemd[1]: Started sshd@60-147.75.49.127:22-139.178.89.65:57308.service.
Feb 9 22:58:56.788617 sshd[5037]: Received disconnect from 117.102.64.108 port 56984:11: Bye Bye [preauth]
Feb 9 22:58:56.788617 sshd[5037]: Disconnected from invalid user samaneh 117.102.64.108 port 56984 [preauth]
Feb 9 22:58:56.789194 systemd[1]: sshd@59-147.75.49.127:22-117.102.64.108:56984.service: Deactivated successfully.
Feb 9 22:58:56.799326 sshd[5041]: Accepted publickey for core from 139.178.89.65 port 57308 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:58:56.800433 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:58:56.803921 systemd-logind[1544]: New session 27 of user core.
Feb 9 22:58:56.804714 systemd[1]: Started session-27.scope.
Feb 9 22:58:56.894214 sshd[5041]: pam_unix(sshd:session): session closed for user core
Feb 9 22:58:56.895597 systemd[1]: sshd@60-147.75.49.127:22-139.178.89.65:57308.service: Deactivated successfully.
Feb 9 22:58:56.896291 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 22:58:56.896332 systemd-logind[1544]: Session 27 logged out. Waiting for processes to exit.
Feb 9 22:58:56.896774 systemd-logind[1544]: Removed session 27.
Feb 9 22:59:01.899977 systemd[1]: Started sshd@61-147.75.49.127:22-139.178.89.65:59540.service.
Feb 9 22:59:01.935358 sshd[5069]: Accepted publickey for core from 139.178.89.65 port 59540 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:01.938534 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:01.949185 systemd-logind[1544]: New session 28 of user core.
Feb 9 22:59:01.951539 systemd[1]: Started session-28.scope.
Feb 9 22:59:02.041621 sshd[5069]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:02.043104 systemd[1]: sshd@61-147.75.49.127:22-139.178.89.65:59540.service: Deactivated successfully.
Feb 9 22:59:02.043737 systemd-logind[1544]: Session 28 logged out. Waiting for processes to exit.
Feb 9 22:59:02.043749 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 22:59:02.044422 systemd-logind[1544]: Removed session 28.
Feb 9 22:59:07.048403 systemd[1]: Started sshd@62-147.75.49.127:22-139.178.89.65:59554.service.
Feb 9 22:59:07.084018 sshd[5094]: Accepted publickey for core from 139.178.89.65 port 59554 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:07.087129 sshd[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:07.097806 systemd-logind[1544]: New session 29 of user core.
Feb 9 22:59:07.100296 systemd[1]: Started session-29.scope.
Feb 9 22:59:07.245431 sshd[5094]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:07.247229 systemd[1]: sshd@62-147.75.49.127:22-139.178.89.65:59554.service: Deactivated successfully.
Feb 9 22:59:07.247986 systemd[1]: session-29.scope: Deactivated successfully.
Feb 9 22:59:07.248036 systemd-logind[1544]: Session 29 logged out. Waiting for processes to exit.
Feb 9 22:59:07.248742 systemd-logind[1544]: Removed session 29.
Feb 9 22:59:10.008635 systemd[1]: Started sshd@63-147.75.49.127:22-157.230.254.228:34356.service.
Feb 9 22:59:10.976619 sshd[5122]: Invalid user lining from 157.230.254.228 port 34356
Feb 9 22:59:10.982606 sshd[5122]: pam_faillock(sshd:auth): User unknown
Feb 9 22:59:10.983721 sshd[5122]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:59:10.983793 sshd[5122]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 22:59:10.984053 sshd[5122]: pam_faillock(sshd:auth): User unknown
Feb 9 22:59:12.252249 systemd[1]: Started sshd@64-147.75.49.127:22-139.178.89.65:36222.service.
Feb 9 22:59:12.287291 sshd[5125]: Accepted publickey for core from 139.178.89.65 port 36222 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:12.288239 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:12.292092 systemd-logind[1544]: New session 30 of user core.
Feb 9 22:59:12.292901 systemd[1]: Started session-30.scope.
Feb 9 22:59:12.383529 sshd[5125]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:12.384924 systemd[1]: sshd@64-147.75.49.127:22-139.178.89.65:36222.service: Deactivated successfully.
Feb 9 22:59:12.385535 systemd[1]: session-30.scope: Deactivated successfully.
Feb 9 22:59:12.385563 systemd-logind[1544]: Session 30 logged out. Waiting for processes to exit.
Feb 9 22:59:12.386068 systemd-logind[1544]: Removed session 30.
Feb 9 22:59:13.558242 sshd[5122]: Failed password for invalid user lining from 157.230.254.228 port 34356 ssh2
Feb 9 22:59:13.869473 sshd[5122]: Received disconnect from 157.230.254.228 port 34356:11: Bye Bye [preauth]
Feb 9 22:59:13.869473 sshd[5122]: Disconnected from invalid user lining 157.230.254.228 port 34356 [preauth]
Feb 9 22:59:13.872038 systemd[1]: sshd@63-147.75.49.127:22-157.230.254.228:34356.service: Deactivated successfully.
Feb 9 22:59:17.389394 systemd[1]: Started sshd@65-147.75.49.127:22-139.178.89.65:36226.service.
Feb 9 22:59:17.424251 sshd[5153]: Accepted publickey for core from 139.178.89.65 port 36226 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:17.425126 sshd[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:17.428335 systemd-logind[1544]: New session 31 of user core.
Feb 9 22:59:17.428917 systemd[1]: Started session-31.scope.
Feb 9 22:59:17.514260 sshd[5153]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:17.515869 systemd[1]: sshd@65-147.75.49.127:22-139.178.89.65:36226.service: Deactivated successfully.
Feb 9 22:59:17.516606 systemd-logind[1544]: Session 31 logged out. Waiting for processes to exit.
Feb 9 22:59:17.516609 systemd[1]: session-31.scope: Deactivated successfully.
Feb 9 22:59:17.517213 systemd-logind[1544]: Removed session 31.
Feb 9 22:59:22.521029 systemd[1]: Started sshd@66-147.75.49.127:22-139.178.89.65:48248.service.
Feb 9 22:59:22.555279 sshd[5179]: Accepted publickey for core from 139.178.89.65 port 48248 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:22.556293 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:22.559807 systemd-logind[1544]: New session 32 of user core.
Feb 9 22:59:22.560533 systemd[1]: Started session-32.scope.
Feb 9 22:59:22.653580 sshd[5179]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:22.655252 systemd[1]: sshd@66-147.75.49.127:22-139.178.89.65:48248.service: Deactivated successfully.
Feb 9 22:59:22.655867 systemd-logind[1544]: Session 32 logged out. Waiting for processes to exit.
Feb 9 22:59:22.655909 systemd[1]: session-32.scope: Deactivated successfully.
Feb 9 22:59:22.656514 systemd-logind[1544]: Removed session 32.
Feb 9 22:59:22.839833 systemd[1]: Started sshd@67-147.75.49.127:22-124.220.165.94:36876.service.
Feb 9 22:59:23.700709 sshd[5205]: Invalid user haniehasghari from 124.220.165.94 port 36876
Feb 9 22:59:23.707455 sshd[5205]: pam_faillock(sshd:auth): User unknown
Feb 9 22:59:23.708549 sshd[5205]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:59:23.708567 sshd[5205]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 22:59:23.708750 sshd[5205]: pam_faillock(sshd:auth): User unknown
Feb 9 22:59:24.551370 systemd[1]: Started sshd@68-147.75.49.127:22-216.10.245.180:59462.service.
Feb 9 22:59:25.399906 sshd[5205]: Failed password for invalid user haniehasghari from 124.220.165.94 port 36876 ssh2
Feb 9 22:59:25.909947 sshd[5207]: Invalid user seneg from 216.10.245.180 port 59462
Feb 9 22:59:25.915986 sshd[5207]: pam_faillock(sshd:auth): User unknown
Feb 9 22:59:25.917164 sshd[5207]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 22:59:25.917253 sshd[5207]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 22:59:25.918160 sshd[5207]: pam_faillock(sshd:auth): User unknown
Feb 9 22:59:26.753980 sshd[5205]: Received disconnect from 124.220.165.94 port 36876:11: Bye Bye [preauth]
Feb 9 22:59:26.753980 sshd[5205]: Disconnected from invalid user haniehasghari 124.220.165.94 port 36876 [preauth]
Feb 9 22:59:26.756631 systemd[1]: sshd@67-147.75.49.127:22-124.220.165.94:36876.service: Deactivated successfully.
Feb 9 22:59:27.659844 systemd[1]: Started sshd@69-147.75.49.127:22-139.178.89.65:48252.service.
Feb 9 22:59:27.694301 sshd[5211]: Accepted publickey for core from 139.178.89.65 port 48252 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:27.695281 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:27.698839 systemd-logind[1544]: New session 33 of user core.
Feb 9 22:59:27.699646 systemd[1]: Started session-33.scope.
Feb 9 22:59:27.789220 sshd[5211]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:27.790653 systemd[1]: sshd@69-147.75.49.127:22-139.178.89.65:48252.service: Deactivated successfully.
Feb 9 22:59:27.791293 systemd[1]: session-33.scope: Deactivated successfully.
Feb 9 22:59:27.791351 systemd-logind[1544]: Session 33 logged out. Waiting for processes to exit.
Feb 9 22:59:27.791833 systemd-logind[1544]: Removed session 33.
Feb 9 22:59:27.885339 sshd[5207]: Failed password for invalid user seneg from 216.10.245.180 port 59462 ssh2
Feb 9 22:59:28.863166 sshd[5207]: Received disconnect from 216.10.245.180 port 59462:11: Bye Bye [preauth]
Feb 9 22:59:28.863166 sshd[5207]: Disconnected from invalid user seneg 216.10.245.180 port 59462 [preauth]
Feb 9 22:59:28.865658 systemd[1]: sshd@68-147.75.49.127:22-216.10.245.180:59462.service: Deactivated successfully.
Feb 9 22:59:32.796437 systemd[1]: Started sshd@70-147.75.49.127:22-139.178.89.65:53826.service.
Feb 9 22:59:32.831349 sshd[5239]: Accepted publickey for core from 139.178.89.65 port 53826 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:32.832484 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:32.836352 systemd-logind[1544]: New session 34 of user core.
Feb 9 22:59:32.837187 systemd[1]: Started session-34.scope.
Feb 9 22:59:32.930178 sshd[5239]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:32.931674 systemd[1]: sshd@70-147.75.49.127:22-139.178.89.65:53826.service: Deactivated successfully.
Feb 9 22:59:32.932339 systemd[1]: session-34.scope: Deactivated successfully.
Feb 9 22:59:32.932385 systemd-logind[1544]: Session 34 logged out. Waiting for processes to exit.
Feb 9 22:59:32.932850 systemd-logind[1544]: Removed session 34.
Feb 9 22:59:37.936865 systemd[1]: Started sshd@71-147.75.49.127:22-139.178.89.65:53834.service.
Feb 9 22:59:37.971046 sshd[5267]: Accepted publickey for core from 139.178.89.65 port 53834 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:37.972050 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:37.975428 systemd-logind[1544]: New session 35 of user core.
Feb 9 22:59:37.976195 systemd[1]: Started session-35.scope.
Feb 9 22:59:38.069500 sshd[5267]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:38.070895 systemd[1]: sshd@71-147.75.49.127:22-139.178.89.65:53834.service: Deactivated successfully.
Feb 9 22:59:38.071557 systemd[1]: session-35.scope: Deactivated successfully.
Feb 9 22:59:38.071583 systemd-logind[1544]: Session 35 logged out. Waiting for processes to exit.
Feb 9 22:59:38.072264 systemd-logind[1544]: Removed session 35.
Feb 9 22:59:43.075761 systemd[1]: Started sshd@72-147.75.49.127:22-139.178.89.65:52036.service.
Feb 9 22:59:43.110931 sshd[5293]: Accepted publickey for core from 139.178.89.65 port 52036 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:43.114098 sshd[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:43.124832 systemd-logind[1544]: New session 36 of user core.
Feb 9 22:59:43.127592 systemd[1]: Started session-36.scope.
Feb 9 22:59:43.220247 sshd[5293]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:43.221588 systemd[1]: sshd@72-147.75.49.127:22-139.178.89.65:52036.service: Deactivated successfully.
Feb 9 22:59:43.222245 systemd[1]: session-36.scope: Deactivated successfully.
Feb 9 22:59:43.222289 systemd-logind[1544]: Session 36 logged out. Waiting for processes to exit.
Feb 9 22:59:43.222762 systemd-logind[1544]: Removed session 36.
Feb 9 22:59:48.226389 systemd[1]: Started sshd@73-147.75.49.127:22-139.178.89.65:34038.service.
Feb 9 22:59:48.261249 sshd[5318]: Accepted publickey for core from 139.178.89.65 port 34038 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:48.262248 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:48.265950 systemd-logind[1544]: New session 37 of user core.
Feb 9 22:59:48.266783 systemd[1]: Started session-37.scope.
Feb 9 22:59:48.351804 sshd[5318]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:48.353372 systemd[1]: sshd@73-147.75.49.127:22-139.178.89.65:34038.service: Deactivated successfully.
Feb 9 22:59:48.353969 systemd[1]: session-37.scope: Deactivated successfully.
Feb 9 22:59:48.353974 systemd-logind[1544]: Session 37 logged out. Waiting for processes to exit.
Feb 9 22:59:48.354504 systemd-logind[1544]: Removed session 37.
Feb 9 22:59:53.358224 systemd[1]: Started sshd@74-147.75.49.127:22-139.178.89.65:34040.service.
Feb 9 22:59:53.392358 sshd[5345]: Accepted publickey for core from 139.178.89.65 port 34040 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:53.393358 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:53.396853 systemd-logind[1544]: New session 38 of user core.
Feb 9 22:59:53.397599 systemd[1]: Started session-38.scope.
Feb 9 22:59:53.531038 sshd[5345]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:53.532605 systemd[1]: sshd@74-147.75.49.127:22-139.178.89.65:34040.service: Deactivated successfully.
Feb 9 22:59:53.533279 systemd[1]: session-38.scope: Deactivated successfully.
Feb 9 22:59:53.533324 systemd-logind[1544]: Session 38 logged out. Waiting for processes to exit.
Feb 9 22:59:53.533806 systemd-logind[1544]: Removed session 38.
Feb 9 22:59:58.536855 systemd[1]: Started sshd@75-147.75.49.127:22-139.178.89.65:45264.service.
Feb 9 22:59:58.571365 sshd[5371]: Accepted publickey for core from 139.178.89.65 port 45264 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 22:59:58.572531 sshd[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 22:59:58.576220 systemd-logind[1544]: New session 39 of user core.
Feb 9 22:59:58.576989 systemd[1]: Started session-39.scope.
Feb 9 22:59:58.668678 sshd[5371]: pam_unix(sshd:session): session closed for user core
Feb 9 22:59:58.670090 systemd[1]: sshd@75-147.75.49.127:22-139.178.89.65:45264.service: Deactivated successfully.
Feb 9 22:59:58.670690 systemd[1]: session-39.scope: Deactivated successfully.
Feb 9 22:59:58.670690 systemd-logind[1544]: Session 39 logged out. Waiting for processes to exit.
Feb 9 22:59:58.671244 systemd-logind[1544]: Removed session 39.
Feb 9 23:00:01.816336 systemd[1]: Started sshd@76-147.75.49.127:22-61.177.172.160:42405.service.
Feb 9 23:00:02.899214 sshd[5397]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 23:00:03.677009 systemd[1]: Started sshd@77-147.75.49.127:22-139.178.89.65:45280.service.
Feb 9 23:00:03.715891 sshd[5399]: Accepted publickey for core from 139.178.89.65 port 45280 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:03.719126 sshd[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:03.729834 systemd-logind[1544]: New session 40 of user core.
Feb 9 23:00:03.732272 systemd[1]: Started session-40.scope.
Feb 9 23:00:03.822515 sshd[5399]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:03.823944 systemd[1]: sshd@77-147.75.49.127:22-139.178.89.65:45280.service: Deactivated successfully.
Feb 9 23:00:03.824538 systemd-logind[1544]: Session 40 logged out. Waiting for processes to exit.
Feb 9 23:00:03.824548 systemd[1]: session-40.scope: Deactivated successfully.
Feb 9 23:00:03.825037 systemd-logind[1544]: Removed session 40.
Feb 9 23:00:05.614277 sshd[5397]: Failed password for root from 61.177.172.160 port 42405 ssh2
Feb 9 23:00:05.645095 systemd[1]: Started sshd@78-147.75.49.127:22-117.102.64.108:48310.service.
Feb 9 23:00:06.769043 sshd[5425]: Invalid user moh from 117.102.64.108 port 48310
Feb 9 23:00:06.774983 sshd[5425]: pam_faillock(sshd:auth): User unknown
Feb 9 23:00:06.776000 sshd[5425]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:00:06.776084 sshd[5425]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 23:00:06.776930 sshd[5425]: pam_faillock(sshd:auth): User unknown
Feb 9 23:00:08.829634 systemd[1]: Started sshd@79-147.75.49.127:22-139.178.89.65:54896.service.
Feb 9 23:00:08.864207 sshd[5429]: Accepted publickey for core from 139.178.89.65 port 54896 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:08.865141 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:08.868433 systemd-logind[1544]: New session 41 of user core.
Feb 9 23:00:08.869144 systemd[1]: Started session-41.scope.
Feb 9 23:00:08.959208 sshd[5429]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:08.960632 systemd[1]: sshd@79-147.75.49.127:22-139.178.89.65:54896.service: Deactivated successfully.
Feb 9 23:00:08.961248 systemd-logind[1544]: Session 41 logged out. Waiting for processes to exit.
Feb 9 23:00:08.961260 systemd[1]: session-41.scope: Deactivated successfully.
Feb 9 23:00:08.961742 systemd-logind[1544]: Removed session 41.
Feb 9 23:00:09.040399 sshd[5425]: Failed password for invalid user moh from 117.102.64.108 port 48310 ssh2
Feb 9 23:00:09.691518 sshd[5397]: Failed password for root from 61.177.172.160 port 42405 ssh2
Feb 9 23:00:10.354393 sshd[5425]: Received disconnect from 117.102.64.108 port 48310:11: Bye Bye [preauth]
Feb 9 23:00:10.354393 sshd[5425]: Disconnected from invalid user moh 117.102.64.108 port 48310 [preauth]
Feb 9 23:00:10.356905 systemd[1]: sshd@78-147.75.49.127:22-117.102.64.108:48310.service: Deactivated successfully.
Feb 9 23:00:13.301777 sshd[5397]: Failed password for root from 61.177.172.160 port 42405 ssh2
Feb 9 23:00:13.963933 sshd[5397]: Received disconnect from 61.177.172.160 port 42405:11: [preauth]
Feb 9 23:00:13.963933 sshd[5397]: Disconnected from authenticating user root 61.177.172.160 port 42405 [preauth]
Feb 9 23:00:13.964137 sshd[5397]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 23:00:13.964766 systemd[1]: Started sshd@80-147.75.49.127:22-139.178.89.65:54908.service.
Feb 9 23:00:13.965037 systemd[1]: sshd@76-147.75.49.127:22-61.177.172.160:42405.service: Deactivated successfully.
Feb 9 23:00:13.999284 sshd[5458]: Accepted publickey for core from 139.178.89.65 port 54908 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:14.000175 sshd[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:14.003329 systemd-logind[1544]: New session 42 of user core.
Feb 9 23:00:14.003938 systemd[1]: Started session-42.scope.
Feb 9 23:00:14.091214 sshd[5458]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:14.092655 systemd[1]: Started sshd@81-147.75.49.127:22-61.177.172.160:15219.service.
Feb 9 23:00:14.092918 systemd[1]: sshd@80-147.75.49.127:22-139.178.89.65:54908.service: Deactivated successfully.
Feb 9 23:00:14.093471 systemd-logind[1544]: Session 42 logged out. Waiting for processes to exit.
Feb 9 23:00:14.093509 systemd[1]: session-42.scope: Deactivated successfully.
Feb 9 23:00:14.093888 systemd-logind[1544]: Removed session 42.
Feb 9 23:00:14.223071 systemd[1]: Started sshd@82-147.75.49.127:22-157.230.254.228:52992.service.
Feb 9 23:00:15.059745 sshd[5483]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 23:00:15.227437 sshd[5488]: Invalid user cillianm from 157.230.254.228 port 52992
Feb 9 23:00:15.233746 sshd[5488]: pam_faillock(sshd:auth): User unknown
Feb 9 23:00:15.234786 sshd[5488]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:00:15.234803 sshd[5488]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 23:00:15.235053 sshd[5488]: pam_faillock(sshd:auth): User unknown
Feb 9 23:00:17.559134 sshd[5483]: Failed password for root from 61.177.172.160 port 15219 ssh2
Feb 9 23:00:17.734092 sshd[5488]: Failed password for invalid user cillianm from 157.230.254.228 port 52992 ssh2
Feb 9 23:00:18.143929 sshd[5488]: Received disconnect from 157.230.254.228 port 52992:11: Bye Bye [preauth]
Feb 9 23:00:18.143929 sshd[5488]: Disconnected from invalid user cillianm 157.230.254.228 port 52992 [preauth]
Feb 9 23:00:18.144890 systemd[1]: sshd@82-147.75.49.127:22-157.230.254.228:52992.service: Deactivated successfully.
Feb 9 23:00:19.098194 systemd[1]: Started sshd@83-147.75.49.127:22-139.178.89.65:36384.service.
Feb 9 23:00:19.132274 sshd[5492]: Accepted publickey for core from 139.178.89.65 port 36384 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:19.133252 sshd[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:19.136732 systemd-logind[1544]: New session 43 of user core.
Feb 9 23:00:19.137404 systemd[1]: Started session-43.scope.
Feb 9 23:00:19.224517 sshd[5492]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:19.225979 systemd[1]: sshd@83-147.75.49.127:22-139.178.89.65:36384.service: Deactivated successfully.
Feb 9 23:00:19.226639 systemd[1]: session-43.scope: Deactivated successfully.
Feb 9 23:00:19.226674 systemd-logind[1544]: Session 43 logged out. Waiting for processes to exit.
Feb 9 23:00:19.227321 systemd-logind[1544]: Removed session 43.
Feb 9 23:00:19.434381 sshd[5483]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 23:00:21.481088 sshd[5483]: Failed password for root from 61.177.172.160 port 15219 ssh2
Feb 9 23:00:23.337454 systemd[1]: Started sshd@84-147.75.49.127:22-216.10.245.180:48588.service.
Feb 9 23:00:23.681166 sshd[5483]: Failed password for root from 61.177.172.160 port 15219 ssh2
Feb 9 23:00:23.953505 sshd[5483]: Received disconnect from 61.177.172.160 port 15219:11: [preauth]
Feb 9 23:00:23.953505 sshd[5483]: Disconnected from authenticating user root 61.177.172.160 port 15219 [preauth]
Feb 9 23:00:23.953927 sshd[5483]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 23:00:23.955633 systemd[1]: sshd@81-147.75.49.127:22-61.177.172.160:15219.service: Deactivated successfully.
Feb 9 23:00:24.090053 systemd[1]: Started sshd@85-147.75.49.127:22-61.177.172.160:25068.service.
Feb 9 23:00:24.232228 systemd[1]: Started sshd@86-147.75.49.127:22-139.178.89.65:36394.service.
Feb 9 23:00:24.271169 sshd[5524]: Accepted publickey for core from 139.178.89.65 port 36394 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:24.272095 sshd[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:24.275497 systemd-logind[1544]: New session 44 of user core.
Feb 9 23:00:24.276172 systemd[1]: Started session-44.scope.
Feb 9 23:00:24.366392 sshd[5524]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:24.367954 systemd[1]: sshd@86-147.75.49.127:22-139.178.89.65:36394.service: Deactivated successfully.
Feb 9 23:00:24.368592 systemd-logind[1544]: Session 44 logged out. Waiting for processes to exit.
Feb 9 23:00:24.368604 systemd[1]: session-44.scope: Deactivated successfully.
Feb 9 23:00:24.369197 systemd-logind[1544]: Removed session 44.
Feb 9 23:00:24.642405 sshd[5518]: Invalid user largen from 216.10.245.180 port 48588
Feb 9 23:00:24.648459 sshd[5518]: pam_faillock(sshd:auth): User unknown
Feb 9 23:00:24.649613 sshd[5518]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:00:24.649699 sshd[5518]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 23:00:24.650644 sshd[5518]: pam_faillock(sshd:auth): User unknown
Feb 9 23:00:25.015632 sshd[5522]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 23:00:26.717286 sshd[5518]: Failed password for invalid user largen from 216.10.245.180 port 48588 ssh2
Feb 9 23:00:27.101338 sshd[5518]: Received disconnect from 216.10.245.180 port 48588:11: Bye Bye [preauth]
Feb 9 23:00:27.101338 sshd[5518]: Disconnected from invalid user largen 216.10.245.180 port 48588 [preauth]
Feb 9 23:00:27.103757 systemd[1]: sshd@84-147.75.49.127:22-216.10.245.180:48588.service: Deactivated successfully.
Feb 9 23:00:27.555055 sshd[5522]: Failed password for root from 61.177.172.160 port 25068 ssh2
Feb 9 23:00:29.373019 systemd[1]: Started sshd@87-147.75.49.127:22-139.178.89.65:54324.service.
Feb 9 23:00:29.407284 sshd[5552]: Accepted publickey for core from 139.178.89.65 port 54324 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:29.408280 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:29.411881 systemd-logind[1544]: New session 45 of user core.
Feb 9 23:00:29.412719 systemd[1]: Started session-45.scope.
Feb 9 23:00:29.500222 sshd[5552]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:29.501654 systemd[1]: sshd@87-147.75.49.127:22-139.178.89.65:54324.service: Deactivated successfully.
Feb 9 23:00:29.502291 systemd[1]: session-45.scope: Deactivated successfully.
Feb 9 23:00:29.502336 systemd-logind[1544]: Session 45 logged out. Waiting for processes to exit.
Feb 9 23:00:29.502816 systemd-logind[1544]: Removed session 45.
Feb 9 23:00:31.133134 sshd[5522]: Failed password for root from 61.177.172.160 port 25068 ssh2
Feb 9 23:00:33.662095 sshd[5522]: Failed password for root from 61.177.172.160 port 25068 ssh2
Feb 9 23:00:33.887619 sshd[5522]: Received disconnect from 61.177.172.160 port 25068:11: [preauth]
Feb 9 23:00:33.887619 sshd[5522]: Disconnected from authenticating user root 61.177.172.160 port 25068 [preauth]
Feb 9 23:00:33.888208 sshd[5522]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 23:00:33.890181 systemd[1]: sshd@85-147.75.49.127:22-61.177.172.160:25068.service: Deactivated successfully.
Feb 9 23:00:34.506080 systemd[1]: Started sshd@88-147.75.49.127:22-139.178.89.65:54332.service.
Feb 9 23:00:34.540788 sshd[5581]: Accepted publickey for core from 139.178.89.65 port 54332 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:34.541678 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:34.544828 systemd-logind[1544]: New session 46 of user core.
Feb 9 23:00:34.545527 systemd[1]: Started session-46.scope.
Feb 9 23:00:34.631747 sshd[5581]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:34.633327 systemd[1]: sshd@88-147.75.49.127:22-139.178.89.65:54332.service: Deactivated successfully.
Feb 9 23:00:34.634022 systemd[1]: session-46.scope: Deactivated successfully.
Feb 9 23:00:34.634065 systemd-logind[1544]: Session 46 logged out. Waiting for processes to exit.
Feb 9 23:00:34.634665 systemd-logind[1544]: Removed session 46.
Feb 9 23:00:38.312830 systemd[1]: Started sshd@89-147.75.49.127:22-124.220.165.94:54670.service.
Feb 9 23:00:39.155539 sshd[5610]: Invalid user dstent from 124.220.165.94 port 54670
Feb 9 23:00:39.156916 sshd[5610]: pam_faillock(sshd:auth): User unknown
Feb 9 23:00:39.157177 sshd[5610]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:00:39.157193 sshd[5610]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 23:00:39.157391 sshd[5610]: pam_faillock(sshd:auth): User unknown
Feb 9 23:00:39.638187 systemd[1]: Started sshd@90-147.75.49.127:22-139.178.89.65:45376.service.
Feb 9 23:00:39.672284 sshd[5612]: Accepted publickey for core from 139.178.89.65 port 45376 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:39.673163 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:39.676366 systemd-logind[1544]: New session 47 of user core.
Feb 9 23:00:39.677383 systemd[1]: Started session-47.scope.
Feb 9 23:00:39.767846 sshd[5612]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:39.769259 systemd[1]: sshd@90-147.75.49.127:22-139.178.89.65:45376.service: Deactivated successfully.
Feb 9 23:00:39.769841 systemd-logind[1544]: Session 47 logged out. Waiting for processes to exit.
Feb 9 23:00:39.769902 systemd[1]: session-47.scope: Deactivated successfully.
Feb 9 23:00:39.770590 systemd-logind[1544]: Removed session 47.
Feb 9 23:00:41.616298 sshd[5610]: Failed password for invalid user dstent from 124.220.165.94 port 54670 ssh2
Feb 9 23:00:43.366399 sshd[5610]: Received disconnect from 124.220.165.94 port 54670:11: Bye Bye [preauth]
Feb 9 23:00:43.366399 sshd[5610]: Disconnected from invalid user dstent 124.220.165.94 port 54670 [preauth]
Feb 9 23:00:43.368925 systemd[1]: sshd@89-147.75.49.127:22-124.220.165.94:54670.service: Deactivated successfully.
Feb 9 23:00:44.775068 systemd[1]: Started sshd@91-147.75.49.127:22-139.178.89.65:45378.service.
Feb 9 23:00:44.810133 sshd[5640]: Accepted publickey for core from 139.178.89.65 port 45378 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:44.811044 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:44.814239 systemd-logind[1544]: New session 48 of user core.
Feb 9 23:00:44.815297 systemd[1]: Started session-48.scope.
Feb 9 23:00:44.903269 sshd[5640]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:44.905560 systemd[1]: Started sshd@92-147.75.49.127:22-139.178.89.65:45394.service.
Feb 9 23:00:44.906064 systemd[1]: sshd@91-147.75.49.127:22-139.178.89.65:45378.service: Deactivated successfully.
Feb 9 23:00:44.906706 systemd-logind[1544]: Session 48 logged out. Waiting for processes to exit.
Feb 9 23:00:44.906782 systemd[1]: session-48.scope: Deactivated successfully.
Feb 9 23:00:44.907420 systemd-logind[1544]: Removed session 48.
Feb 9 23:00:44.941146 sshd[5665]: Accepted publickey for core from 139.178.89.65 port 45394 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:44.942102 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:44.945337 systemd-logind[1544]: New session 49 of user core.
Feb 9 23:00:44.946045 systemd[1]: Started session-49.scope.
Feb 9 23:00:46.066939 sshd[5665]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:46.073379 systemd[1]: Started sshd@93-147.75.49.127:22-139.178.89.65:45404.service.
Feb 9 23:00:46.073900 systemd[1]: sshd@92-147.75.49.127:22-139.178.89.65:45394.service: Deactivated successfully.
Feb 9 23:00:46.074554 systemd-logind[1544]: Session 49 logged out. Waiting for processes to exit.
Feb 9 23:00:46.074623 systemd[1]: session-49.scope: Deactivated successfully.
Feb 9 23:00:46.075165 systemd-logind[1544]: Removed session 49.
Feb 9 23:00:46.108206 sshd[5690]: Accepted publickey for core from 139.178.89.65 port 45404 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:46.109197 sshd[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:46.112579 systemd-logind[1544]: New session 50 of user core.
Feb 9 23:00:46.113745 systemd[1]: Started session-50.scope.
Feb 9 23:00:46.971227 sshd[5690]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:46.973553 systemd[1]: Started sshd@94-147.75.49.127:22-139.178.89.65:45420.service.
Feb 9 23:00:46.974099 systemd[1]: sshd@93-147.75.49.127:22-139.178.89.65:45404.service: Deactivated successfully.
Feb 9 23:00:46.974717 systemd-logind[1544]: Session 50 logged out. Waiting for processes to exit.
Feb 9 23:00:46.974753 systemd[1]: session-50.scope: Deactivated successfully.
Feb 9 23:00:46.975520 systemd-logind[1544]: Removed session 50.
Feb 9 23:00:47.010123 sshd[5736]: Accepted publickey for core from 139.178.89.65 port 45420 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:47.011093 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:47.014330 systemd-logind[1544]: New session 51 of user core.
Feb 9 23:00:47.014971 systemd[1]: Started session-51.scope.
Feb 9 23:00:47.238125 sshd[5736]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:47.240348 systemd[1]: Started sshd@95-147.75.49.127:22-139.178.89.65:45424.service.
Feb 9 23:00:47.240775 systemd[1]: sshd@94-147.75.49.127:22-139.178.89.65:45420.service: Deactivated successfully.
Feb 9 23:00:47.241600 systemd-logind[1544]: Session 51 logged out. Waiting for processes to exit.
Feb 9 23:00:47.241618 systemd[1]: session-51.scope: Deactivated successfully.
Feb 9 23:00:47.242371 systemd-logind[1544]: Removed session 51.
Feb 9 23:00:47.280946 sshd[5794]: Accepted publickey for core from 139.178.89.65 port 45424 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:47.284219 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:47.295042 systemd-logind[1544]: New session 52 of user core.
Feb 9 23:00:47.297417 systemd[1]: Started session-52.scope.
Feb 9 23:00:47.465545 sshd[5794]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:47.467707 systemd[1]: sshd@95-147.75.49.127:22-139.178.89.65:45424.service: Deactivated successfully.
Feb 9 23:00:47.468728 systemd-logind[1544]: Session 52 logged out. Waiting for processes to exit.
Feb 9 23:00:47.468732 systemd[1]: session-52.scope: Deactivated successfully.
Feb 9 23:00:47.469686 systemd-logind[1544]: Removed session 52.
Feb 9 23:00:52.471969 systemd[1]: Started sshd@96-147.75.49.127:22-139.178.89.65:33590.service.
Feb 9 23:00:52.507002 sshd[5821]: Accepted publickey for core from 139.178.89.65 port 33590 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:52.510431 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:52.521586 systemd-logind[1544]: New session 53 of user core.
Feb 9 23:00:52.524194 systemd[1]: Started session-53.scope.
Feb 9 23:00:52.660369 sshd[5821]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:52.666186 systemd[1]: sshd@96-147.75.49.127:22-139.178.89.65:33590.service: Deactivated successfully.
Feb 9 23:00:52.668935 systemd-logind[1544]: Session 53 logged out. Waiting for processes to exit.
Feb 9 23:00:52.669105 systemd[1]: session-53.scope: Deactivated successfully.
Feb 9 23:00:52.671620 systemd-logind[1544]: Removed session 53.
Feb 9 23:00:57.666693 systemd[1]: Started sshd@97-147.75.49.127:22-139.178.89.65:33592.service.
Feb 9 23:00:57.701303 sshd[5848]: Accepted publickey for core from 139.178.89.65 port 33592 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:00:57.702287 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:00:57.705828 systemd-logind[1544]: New session 54 of user core.
Feb 9 23:00:57.706714 systemd[1]: Started session-54.scope.
Feb 9 23:00:57.793880 sshd[5848]: pam_unix(sshd:session): session closed for user core
Feb 9 23:00:57.795433 systemd[1]: sshd@97-147.75.49.127:22-139.178.89.65:33592.service: Deactivated successfully.
Feb 9 23:00:57.796100 systemd[1]: session-54.scope: Deactivated successfully.
Feb 9 23:00:57.796147 systemd-logind[1544]: Session 54 logged out. Waiting for processes to exit.
Feb 9 23:00:57.796615 systemd-logind[1544]: Removed session 54.
Feb 9 23:01:02.800645 systemd[1]: Started sshd@98-147.75.49.127:22-139.178.89.65:59372.service.
Feb 9 23:01:02.835313 sshd[5873]: Accepted publickey for core from 139.178.89.65 port 59372 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:02.836283 sshd[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:02.840085 systemd-logind[1544]: New session 55 of user core.
Feb 9 23:01:02.840816 systemd[1]: Started session-55.scope.
Feb 9 23:01:02.931746 sshd[5873]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:02.933147 systemd[1]: sshd@98-147.75.49.127:22-139.178.89.65:59372.service: Deactivated successfully.
Feb 9 23:01:02.933757 systemd-logind[1544]: Session 55 logged out. Waiting for processes to exit.
Feb 9 23:01:02.933770 systemd[1]: session-55.scope: Deactivated successfully.
Feb 9 23:01:02.934392 systemd-logind[1544]: Removed session 55.
Feb 9 23:01:07.935007 systemd[1]: Started sshd@99-147.75.49.127:22-139.178.89.65:59376.service.
Feb 9 23:01:07.972018 sshd[5901]: Accepted publickey for core from 139.178.89.65 port 59376 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:07.975155 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:07.985709 systemd-logind[1544]: New session 56 of user core.
Feb 9 23:01:07.988083 systemd[1]: Started session-56.scope.
Feb 9 23:01:08.077770 sshd[5901]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:08.079312 systemd[1]: sshd@99-147.75.49.127:22-139.178.89.65:59376.service: Deactivated successfully.
Feb 9 23:01:08.079872 systemd-logind[1544]: Session 56 logged out. Waiting for processes to exit.
Feb 9 23:01:08.079914 systemd[1]: session-56.scope: Deactivated successfully.
Feb 9 23:01:08.080396 systemd-logind[1544]: Removed session 56.
Feb 9 23:01:13.083817 systemd[1]: Started sshd@100-147.75.49.127:22-139.178.89.65:39176.service.
Feb 9 23:01:13.118400 sshd[5927]: Accepted publickey for core from 139.178.89.65 port 39176 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:13.119374 sshd[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:13.122814 systemd-logind[1544]: New session 57 of user core.
Feb 9 23:01:13.123592 systemd[1]: Started session-57.scope.
Feb 9 23:01:13.210957 sshd[5927]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:13.212461 systemd[1]: sshd@100-147.75.49.127:22-139.178.89.65:39176.service: Deactivated successfully.
Feb 9 23:01:13.213152 systemd[1]: session-57.scope: Deactivated successfully.
Feb 9 23:01:13.213167 systemd-logind[1544]: Session 57 logged out. Waiting for processes to exit.
Feb 9 23:01:13.213759 systemd-logind[1544]: Removed session 57.
Feb 9 23:01:17.557658 systemd[1]: Started sshd@101-147.75.49.127:22-117.102.64.108:39506.service.
Feb 9 23:01:18.219554 systemd[1]: Started sshd@102-147.75.49.127:22-139.178.89.65:39452.service.
Feb 9 23:01:18.258159 sshd[5957]: Accepted publickey for core from 139.178.89.65 port 39452 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:18.261325 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:18.272670 systemd-logind[1544]: New session 58 of user core.
Feb 9 23:01:18.275163 systemd[1]: Started session-58.scope.
Feb 9 23:01:18.375958 sshd[5957]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:18.377421 systemd[1]: sshd@102-147.75.49.127:22-139.178.89.65:39452.service: Deactivated successfully.
Feb 9 23:01:18.378148 systemd[1]: session-58.scope: Deactivated successfully.
Feb 9 23:01:18.378164 systemd-logind[1544]: Session 58 logged out. Waiting for processes to exit.
Feb 9 23:01:18.378741 systemd-logind[1544]: Removed session 58.
Feb 9 23:01:18.484496 systemd[1]: Started sshd@103-147.75.49.127:22-216.10.245.180:37714.service.
Feb 9 23:01:18.743183 sshd[5955]: Invalid user liyq from 117.102.64.108 port 39506
Feb 9 23:01:18.749211 sshd[5955]: pam_faillock(sshd:auth): User unknown
Feb 9 23:01:18.750209 sshd[5955]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:01:18.750300 sshd[5955]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 23:01:18.751227 sshd[5955]: pam_faillock(sshd:auth): User unknown
Feb 9 23:01:18.806742 systemd[1]: Started sshd@104-147.75.49.127:22-157.230.254.228:43394.service.
Feb 9 23:01:19.814797 sshd[5986]: Invalid user mitraaghdam from 157.230.254.228 port 43394
Feb 9 23:01:19.821004 sshd[5986]: pam_faillock(sshd:auth): User unknown
Feb 9 23:01:19.822113 sshd[5986]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:01:19.822203 sshd[5986]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 23:01:19.823252 sshd[5986]: pam_faillock(sshd:auth): User unknown
Feb 9 23:01:19.844661 sshd[5984]: Invalid user ddawson from 216.10.245.180 port 37714
Feb 9 23:01:19.850641 sshd[5984]: pam_faillock(sshd:auth): User unknown
Feb 9 23:01:19.851784 sshd[5984]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:01:19.851904 sshd[5984]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 23:01:19.852812 sshd[5984]: pam_faillock(sshd:auth): User unknown
Feb 9 23:01:20.231096 sshd[5955]: Failed password for invalid user liyq from 117.102.64.108 port 39506 ssh2
Feb 9 23:01:20.845239 sshd[5955]: Received disconnect from 117.102.64.108 port 39506:11: Bye Bye [preauth]
Feb 9 23:01:20.845239 sshd[5955]: Disconnected from invalid user liyq 117.102.64.108 port 39506 [preauth]
Feb 9 23:01:20.847726 systemd[1]: sshd@101-147.75.49.127:22-117.102.64.108:39506.service: Deactivated successfully.
Feb 9 23:01:21.775270 sshd[5986]: Failed password for invalid user mitraaghdam from 157.230.254.228 port 43394 ssh2
Feb 9 23:01:21.804735 sshd[5984]: Failed password for invalid user ddawson from 216.10.245.180 port 37714 ssh2
Feb 9 23:01:23.383152 systemd[1]: Started sshd@105-147.75.49.127:22-139.178.89.65:39464.service.
Feb 9 23:01:23.417432 sshd[5990]: Accepted publickey for core from 139.178.89.65 port 39464 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:23.418454 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:23.422347 systemd-logind[1544]: New session 59 of user core.
Feb 9 23:01:23.423170 systemd[1]: Started session-59.scope.
Feb 9 23:01:23.427689 sshd[5984]: Received disconnect from 216.10.245.180 port 37714:11: Bye Bye [preauth]
Feb 9 23:01:23.427689 sshd[5984]: Disconnected from invalid user ddawson 216.10.245.180 port 37714 [preauth]
Feb 9 23:01:23.428743 systemd[1]: sshd@103-147.75.49.127:22-216.10.245.180:37714.service: Deactivated successfully.
Feb 9 23:01:23.515314 sshd[5990]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:23.516615 systemd[1]: sshd@105-147.75.49.127:22-139.178.89.65:39464.service: Deactivated successfully.
Feb 9 23:01:23.517286 systemd[1]: session-59.scope: Deactivated successfully.
Feb 9 23:01:23.517325 systemd-logind[1544]: Session 59 logged out. Waiting for processes to exit.
Feb 9 23:01:23.517729 systemd-logind[1544]: Removed session 59.
Feb 9 23:01:23.637375 sshd[5986]: Received disconnect from 157.230.254.228 port 43394:11: Bye Bye [preauth]
Feb 9 23:01:23.637375 sshd[5986]: Disconnected from invalid user mitraaghdam 157.230.254.228 port 43394 [preauth]
Feb 9 23:01:23.640180 systemd[1]: sshd@104-147.75.49.127:22-157.230.254.228:43394.service: Deactivated successfully.
Feb 9 23:01:28.522519 systemd[1]: Started sshd@106-147.75.49.127:22-139.178.89.65:44704.service.
Feb 9 23:01:28.557940 sshd[6020]: Accepted publickey for core from 139.178.89.65 port 44704 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:28.561416 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:28.572053 systemd-logind[1544]: New session 60 of user core.
Feb 9 23:01:28.574759 systemd[1]: Started session-60.scope.
Feb 9 23:01:28.694720 sshd[6020]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:28.696808 systemd[1]: sshd@106-147.75.49.127:22-139.178.89.65:44704.service: Deactivated successfully.
Feb 9 23:01:28.697767 systemd-logind[1544]: Session 60 logged out. Waiting for processes to exit.
Feb 9 23:01:28.697774 systemd[1]: session-60.scope: Deactivated successfully.
Feb 9 23:01:28.698662 systemd-logind[1544]: Removed session 60.
Feb 9 23:01:33.697016 systemd[1]: Started sshd@107-147.75.49.127:22-139.178.89.65:44708.service.
Feb 9 23:01:33.732724 sshd[6046]: Accepted publickey for core from 139.178.89.65 port 44708 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:33.736028 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:33.746872 systemd-logind[1544]: New session 61 of user core.
Feb 9 23:01:33.749363 systemd[1]: Started session-61.scope.
Feb 9 23:01:33.839977 sshd[6046]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:33.841501 systemd[1]: sshd@107-147.75.49.127:22-139.178.89.65:44708.service: Deactivated successfully.
Feb 9 23:01:33.842177 systemd[1]: session-61.scope: Deactivated successfully.
Feb 9 23:01:33.842224 systemd-logind[1544]: Session 61 logged out. Waiting for processes to exit.
Feb 9 23:01:33.842718 systemd-logind[1544]: Removed session 61.
Feb 9 23:01:38.846553 systemd[1]: Started sshd@108-147.75.49.127:22-139.178.89.65:52330.service.
Feb 9 23:01:38.881135 sshd[6074]: Accepted publickey for core from 139.178.89.65 port 52330 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:38.882137 sshd[6074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:38.885777 systemd-logind[1544]: New session 62 of user core.
Feb 9 23:01:38.886706 systemd[1]: Started session-62.scope.
Feb 9 23:01:38.976817 sshd[6074]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:38.978351 systemd[1]: sshd@108-147.75.49.127:22-139.178.89.65:52330.service: Deactivated successfully.
Feb 9 23:01:38.978954 systemd[1]: session-62.scope: Deactivated successfully.
Feb 9 23:01:38.979020 systemd-logind[1544]: Session 62 logged out. Waiting for processes to exit.
Feb 9 23:01:38.979521 systemd-logind[1544]: Removed session 62.
Feb 9 23:01:43.983475 systemd[1]: Started sshd@109-147.75.49.127:22-139.178.89.65:52332.service.
Feb 9 23:01:44.019090 sshd[6100]: Accepted publickey for core from 139.178.89.65 port 52332 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:44.022247 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:44.033046 systemd-logind[1544]: New session 63 of user core.
Feb 9 23:01:44.035462 systemd[1]: Started session-63.scope.
Feb 9 23:01:44.150011 sshd[6100]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:44.151603 systemd[1]: sshd@109-147.75.49.127:22-139.178.89.65:52332.service: Deactivated successfully.
Feb 9 23:01:44.152379 systemd[1]: session-63.scope: Deactivated successfully.
Feb 9 23:01:44.152384 systemd-logind[1544]: Session 63 logged out. Waiting for processes to exit.
Feb 9 23:01:44.152929 systemd-logind[1544]: Removed session 63.
Feb 9 23:01:49.155635 systemd[1]: Started sshd@110-147.75.49.127:22-139.178.89.65:53744.service.
Feb 9 23:01:49.190891 sshd[6126]: Accepted publickey for core from 139.178.89.65 port 53744 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:49.194372 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:49.204990 systemd-logind[1544]: New session 64 of user core.
Feb 9 23:01:49.208108 systemd[1]: Started session-64.scope.
Feb 9 23:01:49.299439 sshd[6126]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:49.300702 systemd[1]: sshd@110-147.75.49.127:22-139.178.89.65:53744.service: Deactivated successfully.
Feb 9 23:01:49.301337 systemd[1]: session-64.scope: Deactivated successfully.
Feb 9 23:01:49.301384 systemd-logind[1544]: Session 64 logged out. Waiting for processes to exit.
Feb 9 23:01:49.301766 systemd-logind[1544]: Removed session 64.
Feb 9 23:01:53.723280 systemd[1]: Started sshd@111-147.75.49.127:22-124.220.165.94:44230.service.
Feb 9 23:01:54.307246 systemd[1]: Started sshd@112-147.75.49.127:22-139.178.89.65:53750.service.
Feb 9 23:01:54.342924 sshd[6155]: Accepted publickey for core from 139.178.89.65 port 53750 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:54.346054 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:54.356670 systemd-logind[1544]: New session 65 of user core.
Feb 9 23:01:54.359086 systemd[1]: Started session-65.scope.
Feb 9 23:01:54.503893 sshd[6155]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:54.507954 systemd[1]: sshd@112-147.75.49.127:22-139.178.89.65:53750.service: Deactivated successfully.
Feb 9 23:01:54.509896 systemd-logind[1544]: Session 65 logged out. Waiting for processes to exit.
Feb 9 23:01:54.510007 systemd[1]: session-65.scope: Deactivated successfully.
Feb 9 23:01:54.511835 systemd-logind[1544]: Removed session 65.
Feb 9 23:01:54.561898 sshd[6153]: Invalid user jimy from 124.220.165.94 port 44230
Feb 9 23:01:54.567886 sshd[6153]: pam_faillock(sshd:auth): User unknown
Feb 9 23:01:54.568987 sshd[6153]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:01:54.569074 sshd[6153]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 23:01:54.569991 sshd[6153]: pam_faillock(sshd:auth): User unknown
Feb 9 23:01:56.326543 sshd[6153]: Failed password for invalid user jimy from 124.220.165.94 port 44230 ssh2
Feb 9 23:01:56.701522 sshd[6153]: Received disconnect from 124.220.165.94 port 44230:11: Bye Bye [preauth]
Feb 9 23:01:56.701522 sshd[6153]: Disconnected from invalid user jimy 124.220.165.94 port 44230 [preauth]
Feb 9 23:01:56.704117 systemd[1]: sshd@111-147.75.49.127:22-124.220.165.94:44230.service: Deactivated successfully.
Feb 9 23:01:59.509964 systemd[1]: Started sshd@113-147.75.49.127:22-139.178.89.65:56552.service.
Feb 9 23:01:59.544658 sshd[6183]: Accepted publickey for core from 139.178.89.65 port 56552 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:01:59.545699 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:01:59.549497 systemd-logind[1544]: New session 66 of user core.
Feb 9 23:01:59.550359 systemd[1]: Started session-66.scope.
Feb 9 23:01:59.643395 sshd[6183]: pam_unix(sshd:session): session closed for user core
Feb 9 23:01:59.645119 systemd[1]: sshd@113-147.75.49.127:22-139.178.89.65:56552.service: Deactivated successfully.
Feb 9 23:01:59.645814 systemd-logind[1544]: Session 66 logged out. Waiting for processes to exit.
Feb 9 23:01:59.645866 systemd[1]: session-66.scope: Deactivated successfully.
Feb 9 23:01:59.646616 systemd-logind[1544]: Removed session 66.
Feb 9 23:02:04.650223 systemd[1]: Started sshd@114-147.75.49.127:22-139.178.89.65:56556.service.
Feb 9 23:02:04.684281 sshd[6208]: Accepted publickey for core from 139.178.89.65 port 56556 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:04.685249 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:04.688648 systemd-logind[1544]: New session 67 of user core.
Feb 9 23:02:04.689374 systemd[1]: Started session-67.scope.
Feb 9 23:02:04.777974 sshd[6208]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:04.779538 systemd[1]: sshd@114-147.75.49.127:22-139.178.89.65:56556.service: Deactivated successfully.
Feb 9 23:02:04.780279 systemd[1]: session-67.scope: Deactivated successfully.
Feb 9 23:02:04.780318 systemd-logind[1544]: Session 67 logged out. Waiting for processes to exit.
Feb 9 23:02:04.780811 systemd-logind[1544]: Removed session 67.
Feb 9 23:02:09.785558 systemd[1]: Started sshd@115-147.75.49.127:22-139.178.89.65:53556.service.
Feb 9 23:02:09.820191 sshd[6234]: Accepted publickey for core from 139.178.89.65 port 53556 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:09.821221 sshd[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:09.824935 systemd-logind[1544]: New session 68 of user core.
Feb 9 23:02:09.825751 systemd[1]: Started session-68.scope.
Feb 9 23:02:09.912391 sshd[6234]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:09.913945 systemd[1]: sshd@115-147.75.49.127:22-139.178.89.65:53556.service: Deactivated successfully.
Feb 9 23:02:09.914582 systemd[1]: session-68.scope: Deactivated successfully.
Feb 9 23:02:09.914616 systemd-logind[1544]: Session 68 logged out. Waiting for processes to exit.
Feb 9 23:02:09.915183 systemd-logind[1544]: Removed session 68.
Feb 9 23:02:11.609682 systemd[1]: Started sshd@116-147.75.49.127:22-216.10.245.180:55068.service.
Feb 9 23:02:12.938260 sshd[6260]: Invalid user waldeyr from 216.10.245.180 port 55068
Feb 9 23:02:12.944377 sshd[6260]: pam_faillock(sshd:auth): User unknown
Feb 9 23:02:12.945348 sshd[6260]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:02:12.945438 sshd[6260]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 23:02:12.946339 sshd[6260]: pam_faillock(sshd:auth): User unknown
Feb 9 23:02:14.507058 sshd[6260]: Failed password for invalid user waldeyr from 216.10.245.180 port 55068 ssh2
Feb 9 23:02:14.918732 systemd[1]: Started sshd@117-147.75.49.127:22-139.178.89.65:53566.service.
Feb 9 23:02:14.953263 sshd[6262]: Accepted publickey for core from 139.178.89.65 port 53566 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:14.954128 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:14.957292 systemd-logind[1544]: New session 69 of user core.
Feb 9 23:02:14.957959 systemd[1]: Started session-69.scope.
Feb 9 23:02:15.041053 sshd[6262]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:15.042515 systemd[1]: sshd@117-147.75.49.127:22-139.178.89.65:53566.service: Deactivated successfully.
Feb 9 23:02:15.043132 systemd[1]: session-69.scope: Deactivated successfully.
Feb 9 23:02:15.043157 systemd-logind[1544]: Session 69 logged out. Waiting for processes to exit.
Feb 9 23:02:15.043690 systemd-logind[1544]: Removed session 69.
Feb 9 23:02:15.986810 sshd[6260]: Received disconnect from 216.10.245.180 port 55068:11: Bye Bye [preauth]
Feb 9 23:02:15.986810 sshd[6260]: Disconnected from invalid user waldeyr 216.10.245.180 port 55068 [preauth]
Feb 9 23:02:15.989356 systemd[1]: sshd@116-147.75.49.127:22-216.10.245.180:55068.service: Deactivated successfully.
Feb 9 23:02:20.047175 systemd[1]: Started sshd@118-147.75.49.127:22-139.178.89.65:38506.service.
Feb 9 23:02:20.082732 sshd[6291]: Accepted publickey for core from 139.178.89.65 port 38506 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:20.085972 sshd[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:20.096541 systemd-logind[1544]: New session 70 of user core.
Feb 9 23:02:20.098809 systemd[1]: Started session-70.scope.
Feb 9 23:02:20.191077 sshd[6291]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:20.192488 systemd[1]: sshd@118-147.75.49.127:22-139.178.89.65:38506.service: Deactivated successfully.
Feb 9 23:02:20.193151 systemd[1]: session-70.scope: Deactivated successfully.
Feb 9 23:02:20.193192 systemd-logind[1544]: Session 70 logged out. Waiting for processes to exit.
Feb 9 23:02:20.193697 systemd-logind[1544]: Removed session 70.
Feb 9 23:02:21.888246 systemd[1]: Started sshd@119-147.75.49.127:22-157.230.254.228:33798.service.
Feb 9 23:02:23.288883 sshd[6317]: Invalid user inspien from 157.230.254.228 port 33798
Feb 9 23:02:23.294957 sshd[6317]: pam_faillock(sshd:auth): User unknown
Feb 9 23:02:23.296120 sshd[6317]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:02:23.296210 sshd[6317]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 23:02:23.297210 sshd[6317]: pam_faillock(sshd:auth): User unknown
Feb 9 23:02:24.702156 sshd[6317]: Failed password for invalid user inspien from 157.230.254.228 port 33798 ssh2
Feb 9 23:02:25.194419 systemd[1]: Started sshd@120-147.75.49.127:22-139.178.89.65:38508.service.
Feb 9 23:02:25.231343 sshd[6319]: Accepted publickey for core from 139.178.89.65 port 38508 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:25.232476 sshd[6319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:25.235804 systemd-logind[1544]: New session 71 of user core.
Feb 9 23:02:25.236525 systemd[1]: Started session-71.scope.
Feb 9 23:02:25.325385 sshd[6319]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:25.326656 systemd[1]: sshd@120-147.75.49.127:22-139.178.89.65:38508.service: Deactivated successfully.
Feb 9 23:02:25.327301 systemd[1]: session-71.scope: Deactivated successfully.
Feb 9 23:02:25.327359 systemd-logind[1544]: Session 71 logged out. Waiting for processes to exit.
Feb 9 23:02:25.327786 systemd-logind[1544]: Removed session 71.
Feb 9 23:02:25.500165 sshd[6317]: Received disconnect from 157.230.254.228 port 33798:11: Bye Bye [preauth]
Feb 9 23:02:25.500165 sshd[6317]: Disconnected from invalid user inspien 157.230.254.228 port 33798 [preauth]
Feb 9 23:02:25.502921 systemd[1]: sshd@119-147.75.49.127:22-157.230.254.228:33798.service: Deactivated successfully.
Feb 9 23:02:26.472636 systemd[1]: Started sshd@121-147.75.49.127:22-117.102.64.108:59152.service.
Feb 9 23:02:27.595035 sshd[6346]: Invalid user wumei from 117.102.64.108 port 59152
Feb 9 23:02:27.601172 sshd[6346]: pam_faillock(sshd:auth): User unknown
Feb 9 23:02:27.602158 sshd[6346]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:02:27.602248 sshd[6346]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 23:02:27.603178 sshd[6346]: pam_faillock(sshd:auth): User unknown
Feb 9 23:02:29.555626 sshd[6346]: Failed password for invalid user wumei from 117.102.64.108 port 59152 ssh2
Feb 9 23:02:29.806001 systemd[1]: Started sshd@122-147.75.49.127:22-49.0.116.196:22744.service.
Feb 9 23:02:30.333070 systemd[1]: Started sshd@123-147.75.49.127:22-139.178.89.65:52012.service.
Feb 9 23:02:30.367634 sshd[6350]: Accepted publickey for core from 139.178.89.65 port 52012 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:30.368626 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:30.372378 systemd-logind[1544]: New session 72 of user core.
Feb 9 23:02:30.373157 systemd[1]: Started session-72.scope.
Feb 9 23:02:30.509169 sshd[6350]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:30.515212 systemd[1]: sshd@123-147.75.49.127:22-139.178.89.65:52012.service: Deactivated successfully.
Feb 9 23:02:30.517920 systemd-logind[1544]: Session 72 logged out. Waiting for processes to exit.
Feb 9 23:02:30.517989 systemd[1]: session-72.scope: Deactivated successfully.
Feb 9 23:02:30.520656 systemd-logind[1544]: Removed session 72.
Feb 9 23:02:31.078280 sshd[6346]: Received disconnect from 117.102.64.108 port 59152:11: Bye Bye [preauth]
Feb 9 23:02:31.078280 sshd[6346]: Disconnected from invalid user wumei 117.102.64.108 port 59152 [preauth]
Feb 9 23:02:31.080770 systemd[1]: sshd@121-147.75.49.127:22-117.102.64.108:59152.service: Deactivated successfully.
Feb 9 23:02:31.774571 sshd[6348]: Invalid user shanti from 49.0.116.196 port 22744
Feb 9 23:02:31.780485 sshd[6348]: pam_faillock(sshd:auth): User unknown
Feb 9 23:02:31.781600 sshd[6348]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:02:31.781687 sshd[6348]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=49.0.116.196
Feb 9 23:02:31.782619 sshd[6348]: pam_faillock(sshd:auth): User unknown
Feb 9 23:02:33.283474 sshd[6348]: Failed password for invalid user shanti from 49.0.116.196 port 22744 ssh2
Feb 9 23:02:33.701713 sshd[6348]: Received disconnect from 49.0.116.196 port 22744:11: Bye Bye [preauth]
Feb 9 23:02:33.701713 sshd[6348]: Disconnected from invalid user shanti 49.0.116.196 port 22744 [preauth]
Feb 9 23:02:33.704226 systemd[1]: sshd@122-147.75.49.127:22-49.0.116.196:22744.service: Deactivated successfully.
Feb 9 23:02:35.515815 systemd[1]: Started sshd@124-147.75.49.127:22-139.178.89.65:52022.service.
Feb 9 23:02:35.550815 sshd[6380]: Accepted publickey for core from 139.178.89.65 port 52022 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:35.552034 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:35.555813 systemd-logind[1544]: New session 73 of user core.
Feb 9 23:02:35.556773 systemd[1]: Started session-73.scope.
Feb 9 23:02:35.647639 sshd[6380]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:35.649094 systemd[1]: sshd@124-147.75.49.127:22-139.178.89.65:52022.service: Deactivated successfully.
Feb 9 23:02:35.649674 systemd-logind[1544]: Session 73 logged out. Waiting for processes to exit.
Feb 9 23:02:35.649712 systemd[1]: session-73.scope: Deactivated successfully.
Feb 9 23:02:35.650252 systemd-logind[1544]: Removed session 73.
Feb 9 23:02:40.655038 systemd[1]: Started sshd@125-147.75.49.127:22-139.178.89.65:50764.service.
Feb 9 23:02:40.693248 sshd[6408]: Accepted publickey for core from 139.178.89.65 port 50764 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:40.694205 sshd[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:40.697690 systemd-logind[1544]: New session 74 of user core.
Feb 9 23:02:40.698416 systemd[1]: Started session-74.scope.
Feb 9 23:02:40.787333 sshd[6408]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:40.788726 systemd[1]: sshd@125-147.75.49.127:22-139.178.89.65:50764.service: Deactivated successfully.
Feb 9 23:02:40.789347 systemd[1]: session-74.scope: Deactivated successfully.
Feb 9 23:02:40.789390 systemd-logind[1544]: Session 74 logged out. Waiting for processes to exit.
Feb 9 23:02:40.789821 systemd-logind[1544]: Removed session 74.
Feb 9 23:02:45.793953 systemd[1]: Started sshd@126-147.75.49.127:22-139.178.89.65:50768.service.
Feb 9 23:02:45.828779 sshd[6434]: Accepted publickey for core from 139.178.89.65 port 50768 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:45.829801 sshd[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:45.833324 systemd-logind[1544]: New session 75 of user core.
Feb 9 23:02:45.834099 systemd[1]: Started session-75.scope.
Feb 9 23:02:45.920138 sshd[6434]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:45.921552 systemd[1]: sshd@126-147.75.49.127:22-139.178.89.65:50768.service: Deactivated successfully.
Feb 9 23:02:45.922192 systemd[1]: session-75.scope: Deactivated successfully.
Feb 9 23:02:45.922234 systemd-logind[1544]: Session 75 logged out. Waiting for processes to exit.
Feb 9 23:02:45.922710 systemd-logind[1544]: Removed session 75.
Feb 9 23:02:50.926179 systemd[1]: Started sshd@127-147.75.49.127:22-139.178.89.65:43544.service.
Feb 9 23:02:50.961184 sshd[6460]: Accepted publickey for core from 139.178.89.65 port 43544 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:50.962020 sshd[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:50.965075 systemd-logind[1544]: New session 76 of user core.
Feb 9 23:02:50.965745 systemd[1]: Started session-76.scope.
Feb 9 23:02:51.050541 sshd[6460]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:51.051925 systemd[1]: sshd@127-147.75.49.127:22-139.178.89.65:43544.service: Deactivated successfully.
Feb 9 23:02:51.052551 systemd[1]: session-76.scope: Deactivated successfully.
Feb 9 23:02:51.052578 systemd-logind[1544]: Session 76 logged out. Waiting for processes to exit.
Feb 9 23:02:51.053043 systemd-logind[1544]: Removed session 76.
Feb 9 23:02:56.056986 systemd[1]: Started sshd@128-147.75.49.127:22-139.178.89.65:43546.service.
Feb 9 23:02:56.091396 sshd[6489]: Accepted publickey for core from 139.178.89.65 port 43546 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:02:56.092395 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:02:56.095821 systemd-logind[1544]: New session 77 of user core.
Feb 9 23:02:56.096607 systemd[1]: Started session-77.scope.
Feb 9 23:02:56.182827 sshd[6489]: pam_unix(sshd:session): session closed for user core
Feb 9 23:02:56.184363 systemd[1]: sshd@128-147.75.49.127:22-139.178.89.65:43546.service: Deactivated successfully.
Feb 9 23:02:56.184945 systemd[1]: session-77.scope: Deactivated successfully.
Feb 9 23:02:56.184995 systemd-logind[1544]: Session 77 logged out. Waiting for processes to exit.
Feb 9 23:02:56.185621 systemd-logind[1544]: Removed session 77.
Feb 9 23:03:01.185699 systemd[1]: Started sshd@129-147.75.49.127:22-139.178.89.65:51314.service.
Feb 9 23:03:01.222040 sshd[6515]: Accepted publickey for core from 139.178.89.65 port 51314 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:01.222982 sshd[6515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:01.226521 systemd-logind[1544]: New session 78 of user core.
Feb 9 23:03:01.227322 systemd[1]: Started session-78.scope.
Feb 9 23:03:01.313554 sshd[6515]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:01.315009 systemd[1]: sshd@129-147.75.49.127:22-139.178.89.65:51314.service: Deactivated successfully.
Feb 9 23:03:01.315667 systemd-logind[1544]: Session 78 logged out. Waiting for processes to exit.
Feb 9 23:03:01.315681 systemd[1]: session-78.scope: Deactivated successfully.
Feb 9 23:03:01.316334 systemd-logind[1544]: Removed session 78.
Feb 9 23:03:05.686389 systemd[1]: Started sshd@130-147.75.49.127:22-216.10.245.180:44192.service.
Feb 9 23:03:06.321885 systemd[1]: Started sshd@131-147.75.49.127:22-139.178.89.65:51320.service.
Feb 9 23:03:06.362012 sshd[6544]: Accepted publickey for core from 139.178.89.65 port 51320 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:06.365275 sshd[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:06.376047 systemd-logind[1544]: New session 79 of user core.
Feb 9 23:03:06.378758 systemd[1]: Started session-79.scope.
Feb 9 23:03:06.466192 sshd[6544]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:06.467611 systemd[1]: sshd@131-147.75.49.127:22-139.178.89.65:51320.service: Deactivated successfully.
Feb 9 23:03:06.468263 systemd-logind[1544]: Session 79 logged out. Waiting for processes to exit.
Feb 9 23:03:06.468272 systemd[1]: session-79.scope: Deactivated successfully.
Feb 9 23:03:06.468753 systemd-logind[1544]: Removed session 79.
Feb 9 23:03:07.050060 sshd[6542]: Invalid user dgnpc from 216.10.245.180 port 44192
Feb 9 23:03:07.056273 sshd[6542]: pam_faillock(sshd:auth): User unknown
Feb 9 23:03:07.057438 sshd[6542]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:03:07.057526 sshd[6542]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 23:03:07.058470 sshd[6542]: pam_faillock(sshd:auth): User unknown
Feb 9 23:03:09.170106 sshd[6542]: Failed password for invalid user dgnpc from 216.10.245.180 port 44192 ssh2
Feb 9 23:03:09.407488 systemd[1]: Started sshd@132-147.75.49.127:22-124.220.165.94:33790.service.
Feb 9 23:03:09.506221 sshd[6542]: Received disconnect from 216.10.245.180 port 44192:11: Bye Bye [preauth]
Feb 9 23:03:09.506221 sshd[6542]: Disconnected from invalid user dgnpc 216.10.245.180 port 44192 [preauth]
Feb 9 23:03:09.508698 systemd[1]: sshd@130-147.75.49.127:22-216.10.245.180:44192.service: Deactivated successfully.
Feb 9 23:03:10.278538 sshd[6571]: Invalid user yshioki from 124.220.165.94 port 33790
Feb 9 23:03:10.284611 sshd[6571]: pam_faillock(sshd:auth): User unknown
Feb 9 23:03:10.285752 sshd[6571]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:03:10.285845 sshd[6571]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 23:03:10.286791 sshd[6571]: pam_faillock(sshd:auth): User unknown
Feb 9 23:03:11.472949 systemd[1]: Started sshd@133-147.75.49.127:22-139.178.89.65:38958.service.
Feb 9 23:03:11.507779 sshd[6575]: Accepted publickey for core from 139.178.89.65 port 38958 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:11.508774 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:11.512067 systemd-logind[1544]: New session 80 of user core.
Feb 9 23:03:11.513001 systemd[1]: Started session-80.scope.
Feb 9 23:03:11.599951 sshd[6575]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:11.601572 systemd[1]: sshd@133-147.75.49.127:22-139.178.89.65:38958.service: Deactivated successfully.
Feb 9 23:03:11.602291 systemd[1]: session-80.scope: Deactivated successfully.
Feb 9 23:03:11.602333 systemd-logind[1544]: Session 80 logged out. Waiting for processes to exit.
Feb 9 23:03:11.602833 systemd-logind[1544]: Removed session 80.
Feb 9 23:03:12.810927 sshd[6571]: Failed password for invalid user yshioki from 124.220.165.94 port 33790 ssh2
Feb 9 23:03:14.817507 sshd[6571]: Received disconnect from 124.220.165.94 port 33790:11: Bye Bye [preauth]
Feb 9 23:03:14.817507 sshd[6571]: Disconnected from invalid user yshioki 124.220.165.94 port 33790 [preauth]
Feb 9 23:03:14.820085 systemd[1]: sshd@132-147.75.49.127:22-124.220.165.94:33790.service: Deactivated successfully.
Feb 9 23:03:15.233238 systemd[1]: Started sshd@134-147.75.49.127:22-218.92.0.40:22386.service.
Feb 9 23:03:15.383058 sshd[6603]: Unable to negotiate with 218.92.0.40 port 22386: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Feb 9 23:03:15.384986 systemd[1]: sshd@134-147.75.49.127:22-218.92.0.40:22386.service: Deactivated successfully.
Feb 9 23:03:16.606840 systemd[1]: Started sshd@135-147.75.49.127:22-139.178.89.65:38968.service.
Feb 9 23:03:16.641407 sshd[6607]: Accepted publickey for core from 139.178.89.65 port 38968 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:16.642378 sshd[6607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:16.645808 systemd-logind[1544]: New session 81 of user core.
Feb 9 23:03:16.646511 systemd[1]: Started session-81.scope.
Feb 9 23:03:16.736693 sshd[6607]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:16.738173 systemd[1]: sshd@135-147.75.49.127:22-139.178.89.65:38968.service: Deactivated successfully.
Feb 9 23:03:16.738799 systemd-logind[1544]: Session 81 logged out. Waiting for processes to exit.
Feb 9 23:03:16.738808 systemd[1]: session-81.scope: Deactivated successfully.
Feb 9 23:03:16.739437 systemd-logind[1544]: Removed session 81.
Feb 9 23:03:21.743742 systemd[1]: Started sshd@136-147.75.49.127:22-139.178.89.65:43860.service.
Feb 9 23:03:21.778315 sshd[6632]: Accepted publickey for core from 139.178.89.65 port 43860 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:21.779350 sshd[6632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:21.782872 systemd-logind[1544]: New session 82 of user core.
Feb 9 23:03:21.783655 systemd[1]: Started session-82.scope.
Feb 9 23:03:21.868710 sshd[6632]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:21.870409 systemd[1]: sshd@136-147.75.49.127:22-139.178.89.65:43860.service: Deactivated successfully.
Feb 9 23:03:21.871141 systemd[1]: session-82.scope: Deactivated successfully.
Feb 9 23:03:21.871181 systemd-logind[1544]: Session 82 logged out. Waiting for processes to exit.
Feb 9 23:03:21.871684 systemd-logind[1544]: Removed session 82.
Feb 9 23:03:24.592648 systemd[1]: Started sshd@137-147.75.49.127:22-157.230.254.228:52424.service.
Feb 9 23:03:25.596310 sshd[6658]: Invalid user hjahangir from 157.230.254.228 port 52424
Feb 9 23:03:25.602396 sshd[6658]: pam_faillock(sshd:auth): User unknown
Feb 9 23:03:25.603619 sshd[6658]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:03:25.603639 sshd[6658]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 23:03:25.603839 sshd[6658]: pam_faillock(sshd:auth): User unknown
Feb 9 23:03:26.875601 systemd[1]: Started sshd@138-147.75.49.127:22-139.178.89.65:43868.service.
Feb 9 23:03:26.910150 sshd[6662]: Accepted publickey for core from 139.178.89.65 port 43868 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:26.911138 sshd[6662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:26.914773 systemd-logind[1544]: New session 83 of user core.
Feb 9 23:03:26.915515 systemd[1]: Started session-83.scope.
Feb 9 23:03:27.006022 sshd[6662]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:27.007545 systemd[1]: sshd@138-147.75.49.127:22-139.178.89.65:43868.service: Deactivated successfully.
Feb 9 23:03:27.008203 systemd[1]: session-83.scope: Deactivated successfully.
Feb 9 23:03:27.008234 systemd-logind[1544]: Session 83 logged out. Waiting for processes to exit.
Feb 9 23:03:27.008765 systemd-logind[1544]: Removed session 83.
Feb 9 23:03:27.184316 sshd[6658]: Failed password for invalid user hjahangir from 157.230.254.228 port 52424 ssh2
Feb 9 23:03:27.660195 sshd[6658]: Received disconnect from 157.230.254.228 port 52424:11: Bye Bye [preauth]
Feb 9 23:03:27.660195 sshd[6658]: Disconnected from invalid user hjahangir 157.230.254.228 port 52424 [preauth]
Feb 9 23:03:27.663073 systemd[1]: sshd@137-147.75.49.127:22-157.230.254.228:52424.service: Deactivated successfully.
Feb 9 23:03:31.944529 systemd[1]: Started sshd@139-147.75.49.127:22-117.102.64.108:50420.service.
Feb 9 23:03:32.013713 systemd[1]: Started sshd@140-147.75.49.127:22-139.178.89.65:46504.service.
Feb 9 23:03:32.051631 sshd[6694]: Accepted publickey for core from 139.178.89.65 port 46504 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:32.052553 sshd[6694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:32.055700 systemd-logind[1544]: New session 84 of user core.
Feb 9 23:03:32.056438 systemd[1]: Started session-84.scope.
Feb 9 23:03:32.142355 sshd[6694]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:32.143808 systemd[1]: sshd@140-147.75.49.127:22-139.178.89.65:46504.service: Deactivated successfully.
Feb 9 23:03:32.144421 systemd[1]: session-84.scope: Deactivated successfully.
Feb 9 23:03:32.144456 systemd-logind[1544]: Session 84 logged out. Waiting for processes to exit.
Feb 9 23:03:32.144873 systemd-logind[1544]: Removed session 84.
Feb 9 23:03:33.036230 sshd[6692]: Invalid user swachhta from 117.102.64.108 port 50420
Feb 9 23:03:33.042707 sshd[6692]: pam_faillock(sshd:auth): User unknown
Feb 9 23:03:33.043769 sshd[6692]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:03:33.043887 sshd[6692]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 23:03:33.044843 sshd[6692]: pam_faillock(sshd:auth): User unknown
Feb 9 23:03:35.057553 sshd[6692]: Failed password for invalid user swachhta from 117.102.64.108 port 50420 ssh2
Feb 9 23:03:36.319793 sshd[6692]: Received disconnect from 117.102.64.108 port 50420:11: Bye Bye [preauth]
Feb 9 23:03:36.319793 sshd[6692]: Disconnected from invalid user swachhta 117.102.64.108 port 50420 [preauth]
Feb 9 23:03:36.322471 systemd[1]: sshd@139-147.75.49.127:22-117.102.64.108:50420.service: Deactivated successfully.
Feb 9 23:03:37.149579 systemd[1]: Started sshd@141-147.75.49.127:22-139.178.89.65:46518.service.
Feb 9 23:03:37.184613 sshd[6724]: Accepted publickey for core from 139.178.89.65 port 46518 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:37.185650 sshd[6724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:37.189122 systemd-logind[1544]: New session 85 of user core.
Feb 9 23:03:37.189915 systemd[1]: Started session-85.scope.
Feb 9 23:03:37.277069 sshd[6724]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:37.278579 systemd[1]: sshd@141-147.75.49.127:22-139.178.89.65:46518.service: Deactivated successfully.
Feb 9 23:03:37.279222 systemd[1]: session-85.scope: Deactivated successfully.
Feb 9 23:03:37.279262 systemd-logind[1544]: Session 85 logged out. Waiting for processes to exit.
Feb 9 23:03:37.279791 systemd-logind[1544]: Removed session 85.
Feb 9 23:03:39.982967 systemd[1]: Started sshd@142-147.75.49.127:22-218.92.0.22:28414.service.
Feb 9 23:03:40.889094 sshd[6750]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root
Feb 9 23:03:42.284679 systemd[1]: Started sshd@143-147.75.49.127:22-139.178.89.65:50610.service.
Feb 9 23:03:42.318893 sshd[6752]: Accepted publickey for core from 139.178.89.65 port 50610 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:42.319910 sshd[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:42.323425 systemd-logind[1544]: New session 86 of user core.
Feb 9 23:03:42.324326 systemd[1]: Started session-86.scope.
Feb 9 23:03:42.415110 sshd[6752]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:42.416591 systemd[1]: sshd@143-147.75.49.127:22-139.178.89.65:50610.service: Deactivated successfully.
Feb 9 23:03:42.417236 systemd[1]: session-86.scope: Deactivated successfully.
Feb 9 23:03:42.417279 systemd-logind[1544]: Session 86 logged out. Waiting for processes to exit.
Feb 9 23:03:42.417799 systemd-logind[1544]: Removed session 86.
Feb 9 23:03:43.197438 sshd[6750]: Failed password for root from 218.92.0.22 port 28414 ssh2
Feb 9 23:03:44.722681 systemd[1]: Started sshd@144-147.75.49.127:22-218.92.0.29:44219.service.
Feb 9 23:03:45.746238 sshd[6778]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.29 user=root
Feb 9 23:03:47.421790 systemd[1]: Started sshd@145-147.75.49.127:22-139.178.89.65:50614.service.
Feb 9 23:03:47.456877 sshd[6780]: Accepted publickey for core from 139.178.89.65 port 50614 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:47.457878 sshd[6780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:47.461589 systemd-logind[1544]: New session 87 of user core.
Feb 9 23:03:47.462400 systemd[1]: Started session-87.scope.
Feb 9 23:03:47.553659 sshd[6780]: pam_unix(sshd:session): session closed for user core
Feb 9 23:03:47.555452 systemd[1]: Started sshd@146-147.75.49.127:22-139.178.89.65:50618.service.
Feb 9 23:03:47.555782 systemd[1]: sshd@145-147.75.49.127:22-139.178.89.65:50614.service: Deactivated successfully.
Feb 9 23:03:47.556406 systemd[1]: session-87.scope: Deactivated successfully.
Feb 9 23:03:47.556407 systemd-logind[1544]: Session 87 logged out. Waiting for processes to exit.
Feb 9 23:03:47.556863 systemd-logind[1544]: Removed session 87.
Feb 9 23:03:47.579996 sshd[6750]: Failed password for root from 218.92.0.22 port 28414 ssh2
Feb 9 23:03:47.590838 sshd[6804]: Accepted publickey for core from 139.178.89.65 port 50618 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs
Feb 9 23:03:47.591666 sshd[6804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 23:03:47.594645 systemd-logind[1544]: New session 88 of user core.
Feb 9 23:03:47.595283 systemd[1]: Started session-88.scope.
Feb 9 23:03:48.074128 sshd[6778]: Failed password for root from 218.92.0.29 port 44219 ssh2
Feb 9 23:03:48.926584 env[1558]: time="2024-02-09T23:03:48.926558910Z" level=info msg="StopContainer for \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\" with timeout 30 (s)"
Feb 9 23:03:48.926843 env[1558]: time="2024-02-09T23:03:48.926774135Z" level=info msg="Stop container \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\" with signal terminated"
Feb 9 23:03:48.950454 env[1558]: time="2024-02-09T23:03:48.950418956Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 23:03:48.953217 env[1558]: time="2024-02-09T23:03:48.953199890Z" level=info msg="StopContainer for \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\" with timeout 1 (s)"
Feb 9 23:03:48.953307 env[1558]: time="2024-02-09T23:03:48.953294626Z" level=info msg="Stop container \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\" with signal terminated"
Feb 9 23:03:48.956735 systemd-networkd[1409]: lxc_health: Link DOWN
Feb 9 23:03:48.956738 systemd-networkd[1409]: lxc_health: Lost carrier
Feb 9 23:03:48.963757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725-rootfs.mount: Deactivated successfully.
Feb 9 23:03:48.967736 env[1558]: time="2024-02-09T23:03:48.967678211Z" level=info msg="shim disconnected" id=32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725
Feb 9 23:03:48.967736 env[1558]: time="2024-02-09T23:03:48.967710450Z" level=warning msg="cleaning up after shim disconnected" id=32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725 namespace=k8s.io
Feb 9 23:03:48.967736 env[1558]: time="2024-02-09T23:03:48.967718520Z" level=info msg="cleaning up dead shim"
Feb 9 23:03:48.985122 env[1558]: time="2024-02-09T23:03:48.985067547Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6872 runtime=io.containerd.runc.v2\n"
Feb 9 23:03:48.986045 env[1558]: time="2024-02-09T23:03:48.985993638Z" level=info msg="StopContainer for \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\" returns successfully"
Feb 9 23:03:48.986510 env[1558]: time="2024-02-09T23:03:48.986458008Z" level=info msg="StopPodSandbox for \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\""
Feb 9 23:03:48.986577 env[1558]: time="2024-02-09T23:03:48.986519132Z" level=info msg="Container to stop \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 23:03:48.988645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288-shm.mount: Deactivated successfully.
Feb 9 23:03:49.041714 env[1558]: time="2024-02-09T23:03:49.041612003Z" level=info msg="shim disconnected" id=595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57
Feb 9 23:03:49.042002 env[1558]: time="2024-02-09T23:03:49.041717464Z" level=warning msg="cleaning up after shim disconnected" id=595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57 namespace=k8s.io
Feb 9 23:03:49.042002 env[1558]: time="2024-02-09T23:03:49.041747147Z" level=info msg="cleaning up dead shim"
Feb 9 23:03:49.043042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57-rootfs.mount: Deactivated successfully.
Feb 9 23:03:49.045888 env[1558]: time="2024-02-09T23:03:49.045752409Z" level=info msg="shim disconnected" id=391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288
Feb 9 23:03:49.045888 env[1558]: time="2024-02-09T23:03:49.045840961Z" level=warning msg="cleaning up after shim disconnected" id=391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288 namespace=k8s.io
Feb 9 23:03:49.045888 env[1558]: time="2024-02-09T23:03:49.045884033Z" level=info msg="cleaning up dead shim"
Feb 9 23:03:49.048621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288-rootfs.mount: Deactivated successfully.
Feb 9 23:03:49.053531 env[1558]: time="2024-02-09T23:03:49.053455298Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6923 runtime=io.containerd.runc.v2\n"
Feb 9 23:03:49.054707 env[1558]: time="2024-02-09T23:03:49.054667423Z" level=info msg="StopContainer for \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\" returns successfully"
Feb 9 23:03:49.055254 env[1558]: time="2024-02-09T23:03:49.055185311Z" level=info msg="StopPodSandbox for \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\""
Feb 9 23:03:49.055375 env[1558]: time="2024-02-09T23:03:49.055273256Z" level=info msg="Container to stop \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 23:03:49.055375 env[1558]: time="2024-02-09T23:03:49.055306061Z" level=info msg="Container to stop \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 23:03:49.055375 env[1558]: time="2024-02-09T23:03:49.055322994Z" level=info msg="Container to stop \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 23:03:49.055375 env[1558]: time="2024-02-09T23:03:49.055339479Z" level=info msg="Container to stop \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 23:03:49.055375 env[1558]: time="2024-02-09T23:03:49.055354278Z" level=info msg="Container to stop \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 23:03:49.055646 env[1558]: time="2024-02-09T23:03:49.055615200Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6930 runtime=io.containerd.runc.v2\n"
Feb 9 23:03:49.056014 env[1558]: time="2024-02-09T23:03:49.055952390Z" level=info msg="TearDown network for sandbox \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\" successfully"
Feb 9 23:03:49.056014 env[1558]: time="2024-02-09T23:03:49.055981041Z" level=info msg="StopPodSandbox for \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\" returns successfully"
Feb 9 23:03:49.076635 env[1558]: time="2024-02-09T23:03:49.076577532Z" level=info msg="shim disconnected" id=8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299
Feb 9 23:03:49.076877 env[1558]: time="2024-02-09T23:03:49.076638468Z" level=warning msg="cleaning up after shim disconnected" id=8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299 namespace=k8s.io
Feb 9 23:03:49.076877 env[1558]: time="2024-02-09T23:03:49.076657183Z" level=info msg="cleaning up dead shim"
Feb 9 23:03:49.084048 env[1558]: time="2024-02-09T23:03:49.083983275Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6967 runtime=io.containerd.runc.v2\n"
Feb 9 23:03:49.084345 env[1558]: time="2024-02-09T23:03:49.084284454Z" level=info msg="TearDown network for sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" successfully"
Feb 9 23:03:49.084345 env[1558]: time="2024-02-09T23:03:49.084316807Z" level=info msg="StopPodSandbox for \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" returns successfully"
Feb 9 23:03:49.240339 kubelet[2697]: I0209 23:03:49.240108 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-hostproc\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") "
Feb 9 23:03:49.240339 kubelet[2697]: I0209 23:03:49.240206 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-xtables-lock\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") "
Feb 9 23:03:49.240339 kubelet[2697]: I0209 23:03:49.240232 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-hostproc" (OuterVolumeSpecName: "hostproc") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 23:03:49.240339 kubelet[2697]: I0209 23:03:49.240277 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e90a13e0-74e9-47f4-857d-13c07766c9ca-hubble-tls\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") "
Feb 9 23:03:49.240339 kubelet[2697]: I0209 23:03:49.240347 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-host-proc-sys-kernel\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") "
Feb 9 23:03:49.241778 kubelet[2697]: I0209 23:03:49.240322 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 23:03:49.241778 kubelet[2697]: I0209 23:03:49.240410 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-run\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") "
Feb 9 23:03:49.241778 kubelet[2697]: I0209 23:03:49.240471 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-host-proc-sys-net\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") "
Feb 9 23:03:49.241778 kubelet[2697]: I0209 23:03:49.240453 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 23:03:49.241778 kubelet[2697]: I0209 23:03:49.240496 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 23:03:49.242474 kubelet[2697]: I0209 23:03:49.240521 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca").
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:49.242474 kubelet[2697]: I0209 23:03:49.240556 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-lib-modules\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " Feb 9 23:03:49.242474 kubelet[2697]: I0209 23:03:49.240643 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cni-path\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " Feb 9 23:03:49.242474 kubelet[2697]: I0209 23:03:49.240695 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cni-path" (OuterVolumeSpecName: "cni-path") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:49.242474 kubelet[2697]: I0209 23:03:49.240754 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-etc-cni-netd\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " Feb 9 23:03:49.243075 kubelet[2697]: I0209 23:03:49.240664 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:49.243075 kubelet[2697]: I0209 23:03:49.240794 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:49.243075 kubelet[2697]: I0209 23:03:49.240854 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db5af86e-ef00-49c9-bff2-d87e0468d58f-cilium-config-path\") pod \"db5af86e-ef00-49c9-bff2-d87e0468d58f\" (UID: \"db5af86e-ef00-49c9-bff2-d87e0468d58f\") " Feb 9 23:03:49.243075 kubelet[2697]: I0209 23:03:49.240982 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxhb4\" (UniqueName: \"kubernetes.io/projected/e90a13e0-74e9-47f4-857d-13c07766c9ca-kube-api-access-nxhb4\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " Feb 9 23:03:49.243075 kubelet[2697]: I0209 23:03:49.241049 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e90a13e0-74e9-47f4-857d-13c07766c9ca-clustermesh-secrets\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " Feb 9 23:03:49.243597 kubelet[2697]: I0209 23:03:49.241119 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-cgroup\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " Feb 9 23:03:49.243597 kubelet[2697]: I0209 23:03:49.241227 2697 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-994b4\" (UniqueName: \"kubernetes.io/projected/db5af86e-ef00-49c9-bff2-d87e0468d58f-kube-api-access-994b4\") pod \"db5af86e-ef00-49c9-bff2-d87e0468d58f\" (UID: \"db5af86e-ef00-49c9-bff2-d87e0468d58f\") " Feb 9 23:03:49.243597 kubelet[2697]: W0209 23:03:49.241270 2697 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/db5af86e-ef00-49c9-bff2-d87e0468d58f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 23:03:49.243597 kubelet[2697]: I0209 23:03:49.241294 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:49.243597 kubelet[2697]: I0209 23:03:49.241342 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-config-path\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " Feb 9 23:03:49.243597 kubelet[2697]: I0209 23:03:49.241441 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-bpf-maps\") pod \"e90a13e0-74e9-47f4-857d-13c07766c9ca\" (UID: \"e90a13e0-74e9-47f4-857d-13c07766c9ca\") " Feb 9 23:03:49.244255 kubelet[2697]: I0209 23:03:49.241574 2697 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-hostproc\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.244255 
kubelet[2697]: I0209 23:03:49.241604 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:49.244255 kubelet[2697]: I0209 23:03:49.241644 2697 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-xtables-lock\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.244255 kubelet[2697]: I0209 23:03:49.241822 2697 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.244255 kubelet[2697]: I0209 23:03:49.241918 2697 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-run\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.244255 kubelet[2697]: I0209 23:03:49.241980 2697 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-host-proc-sys-net\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.244255 kubelet[2697]: W0209 23:03:49.242029 2697 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e90a13e0-74e9-47f4-857d-13c07766c9ca/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 23:03:49.245021 kubelet[2697]: I0209 23:03:49.242067 2697 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-lib-modules\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.245021 kubelet[2697]: I0209 23:03:49.242136 2697 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cni-path\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.245021 kubelet[2697]: I0209 23:03:49.242196 2697 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-etc-cni-netd\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.245021 kubelet[2697]: I0209 23:03:49.242231 2697 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-cgroup\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.247098 kubelet[2697]: I0209 23:03:49.246996 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90a13e0-74e9-47f4-857d-13c07766c9ca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 23:03:49.247430 kubelet[2697]: I0209 23:03:49.247330 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90a13e0-74e9-47f4-857d-13c07766c9ca-kube-api-access-nxhb4" (OuterVolumeSpecName: "kube-api-access-nxhb4") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "kube-api-access-nxhb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 23:03:49.247731 kubelet[2697]: I0209 23:03:49.247653 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e90a13e0-74e9-47f4-857d-13c07766c9ca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 23:03:49.248751 kubelet[2697]: I0209 23:03:49.248647 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5af86e-ef00-49c9-bff2-d87e0468d58f-kube-api-access-994b4" (OuterVolumeSpecName: "kube-api-access-994b4") pod "db5af86e-ef00-49c9-bff2-d87e0468d58f" (UID: "db5af86e-ef00-49c9-bff2-d87e0468d58f"). InnerVolumeSpecName "kube-api-access-994b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 23:03:49.249879 kubelet[2697]: I0209 23:03:49.249753 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5af86e-ef00-49c9-bff2-d87e0468d58f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "db5af86e-ef00-49c9-bff2-d87e0468d58f" (UID: "db5af86e-ef00-49c9-bff2-d87e0468d58f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 23:03:49.250341 kubelet[2697]: I0209 23:03:49.250240 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e90a13e0-74e9-47f4-857d-13c07766c9ca" (UID: "e90a13e0-74e9-47f4-857d-13c07766c9ca"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 23:03:49.343071 kubelet[2697]: I0209 23:03:49.342968 2697 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e90a13e0-74e9-47f4-857d-13c07766c9ca-hubble-tls\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.343071 kubelet[2697]: I0209 23:03:49.343069 2697 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db5af86e-ef00-49c9-bff2-d87e0468d58f-cilium-config-path\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.343502 kubelet[2697]: I0209 23:03:49.343111 2697 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-994b4\" (UniqueName: \"kubernetes.io/projected/db5af86e-ef00-49c9-bff2-d87e0468d58f-kube-api-access-994b4\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.343502 kubelet[2697]: I0209 23:03:49.343145 2697 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-nxhb4\" (UniqueName: \"kubernetes.io/projected/e90a13e0-74e9-47f4-857d-13c07766c9ca-kube-api-access-nxhb4\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.343502 kubelet[2697]: I0209 23:03:49.343177 2697 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e90a13e0-74e9-47f4-857d-13c07766c9ca-clustermesh-secrets\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.343502 kubelet[2697]: I0209 23:03:49.343209 2697 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e90a13e0-74e9-47f4-857d-13c07766c9ca-bpf-maps\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.343502 kubelet[2697]: I0209 23:03:49.343242 2697 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e90a13e0-74e9-47f4-857d-13c07766c9ca-cilium-config-path\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:49.726797 kubelet[2697]: I0209 23:03:49.726699 2697 scope.go:115] "RemoveContainer" containerID="32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725" Feb 9 23:03:49.729662 env[1558]: time="2024-02-09T23:03:49.729543320Z" level=info msg="RemoveContainer for \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\"" Feb 9 23:03:49.733317 env[1558]: time="2024-02-09T23:03:49.733301839Z" level=info msg="RemoveContainer for \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\" returns successfully" Feb 9 23:03:49.733383 kubelet[2697]: I0209 23:03:49.733377 2697 scope.go:115] "RemoveContainer" containerID="32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725" Feb 9 23:03:49.733516 env[1558]: time="2024-02-09T23:03:49.733466591Z" level=error msg="ContainerStatus for \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\": not found" Feb 9 23:03:49.733610 kubelet[2697]: E0209 23:03:49.733603 2697 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\": not found" containerID="32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725" Feb 9 23:03:49.733636 kubelet[2697]: I0209 23:03:49.733621 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725} err="failed to get container status \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"32673239c219f95f107a4ed4d8989f31b905e0b7a54251473faca1aa3ca20725\": not found" Feb 9 23:03:49.733636 kubelet[2697]: I0209 23:03:49.733628 2697 scope.go:115] "RemoveContainer" containerID="595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57" Feb 9 23:03:49.734041 env[1558]: time="2024-02-09T23:03:49.734028831Z" level=info msg="RemoveContainer for \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\"" Feb 9 23:03:49.735068 env[1558]: time="2024-02-09T23:03:49.735054704Z" level=info msg="RemoveContainer for \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\" returns successfully" Feb 9 23:03:49.735116 kubelet[2697]: I0209 23:03:49.735110 2697 scope.go:115] "RemoveContainer" containerID="ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de" Feb 9 23:03:49.735593 env[1558]: time="2024-02-09T23:03:49.735582209Z" level=info msg="RemoveContainer for \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\"" Feb 9 23:03:49.736730 env[1558]: time="2024-02-09T23:03:49.736716626Z" level=info msg="RemoveContainer for \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\" returns successfully" Feb 9 23:03:49.736778 kubelet[2697]: I0209 23:03:49.736773 2697 scope.go:115] "RemoveContainer" containerID="11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940" Feb 9 23:03:49.737423 env[1558]: time="2024-02-09T23:03:49.737411549Z" level=info msg="RemoveContainer for \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\"" Feb 9 23:03:49.738443 env[1558]: time="2024-02-09T23:03:49.738406458Z" level=info msg="RemoveContainer for \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\" returns successfully" Feb 9 23:03:49.738526 kubelet[2697]: I0209 23:03:49.738490 2697 scope.go:115] "RemoveContainer" containerID="5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1" Feb 9 23:03:49.738860 env[1558]: time="2024-02-09T23:03:49.738848113Z" level=info 
msg="RemoveContainer for \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\"" Feb 9 23:03:49.740161 env[1558]: time="2024-02-09T23:03:49.740148590Z" level=info msg="RemoveContainer for \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\" returns successfully" Feb 9 23:03:49.740240 kubelet[2697]: I0209 23:03:49.740232 2697 scope.go:115] "RemoveContainer" containerID="486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2" Feb 9 23:03:49.740684 env[1558]: time="2024-02-09T23:03:49.740672031Z" level=info msg="RemoveContainer for \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\"" Feb 9 23:03:49.741820 env[1558]: time="2024-02-09T23:03:49.741807051Z" level=info msg="RemoveContainer for \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\" returns successfully" Feb 9 23:03:49.741920 kubelet[2697]: I0209 23:03:49.741874 2697 scope.go:115] "RemoveContainer" containerID="595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57" Feb 9 23:03:49.742103 env[1558]: time="2024-02-09T23:03:49.742008268Z" level=error msg="ContainerStatus for \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\": not found" Feb 9 23:03:49.742183 kubelet[2697]: E0209 23:03:49.742175 2697 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\": not found" containerID="595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57" Feb 9 23:03:49.742225 kubelet[2697]: I0209 23:03:49.742198 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57} err="failed to get container 
status \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\": rpc error: code = NotFound desc = an error occurred when try to find container \"595541aa6fbd4dfe8f57e49beb58057aa01603e10df04306d579074b6120ce57\": not found" Feb 9 23:03:49.742225 kubelet[2697]: I0209 23:03:49.742211 2697 scope.go:115] "RemoveContainer" containerID="ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de" Feb 9 23:03:49.742357 env[1558]: time="2024-02-09T23:03:49.742332749Z" level=error msg="ContainerStatus for \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\": not found" Feb 9 23:03:49.742409 kubelet[2697]: E0209 23:03:49.742403 2697 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\": not found" containerID="ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de" Feb 9 23:03:49.742435 kubelet[2697]: I0209 23:03:49.742418 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de} err="failed to get container status \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca83c359e43467490947461c8c844ecc96e909fc8275f8ed08626d3d8d11e4de\": not found" Feb 9 23:03:49.742435 kubelet[2697]: I0209 23:03:49.742423 2697 scope.go:115] "RemoveContainer" containerID="11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940" Feb 9 23:03:49.742525 env[1558]: time="2024-02-09T23:03:49.742495294Z" level=error msg="ContainerStatus for \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\": not found" Feb 9 23:03:49.742575 kubelet[2697]: E0209 23:03:49.742568 2697 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\": not found" containerID="11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940" Feb 9 23:03:49.742599 kubelet[2697]: I0209 23:03:49.742582 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940} err="failed to get container status \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\": rpc error: code = NotFound desc = an error occurred when try to find container \"11623bdc3ba45da0a53579da6f6f8bb1ed88f6e6b7959682ba06e3d436f13940\": not found" Feb 9 23:03:49.742599 kubelet[2697]: I0209 23:03:49.742587 2697 scope.go:115] "RemoveContainer" containerID="5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1" Feb 9 23:03:49.742676 env[1558]: time="2024-02-09T23:03:49.742655596Z" level=error msg="ContainerStatus for \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\": not found" Feb 9 23:03:49.742721 kubelet[2697]: E0209 23:03:49.742717 2697 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\": not found" containerID="5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1" Feb 9 23:03:49.742749 kubelet[2697]: I0209 23:03:49.742727 
2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1} err="failed to get container status \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\": rpc error: code = NotFound desc = an error occurred when try to find container \"5293c1ad0714aa593a64c36044443e1ca97c7a4e8d57a646bfba77b37f450ca1\": not found" Feb 9 23:03:49.742749 kubelet[2697]: I0209 23:03:49.742731 2697 scope.go:115] "RemoveContainer" containerID="486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2" Feb 9 23:03:49.742807 env[1558]: time="2024-02-09T23:03:49.742787849Z" level=error msg="ContainerStatus for \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\": not found" Feb 9 23:03:49.742854 kubelet[2697]: E0209 23:03:49.742847 2697 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\": not found" containerID="486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2" Feb 9 23:03:49.742891 kubelet[2697]: I0209 23:03:49.742874 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2} err="failed to get container status \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"486448eb0f056720e858f9074b4c8671f8593376a22cf433cfd7dd0b8731cdd2\": not found" Feb 9 23:03:49.945458 systemd[1]: 
var-lib-kubelet-pods-db5af86e\x2def00\x2d49c9\x2dbff2\x2dd87e0468d58f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d994b4.mount: Deactivated successfully. Feb 9 23:03:49.945691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299-rootfs.mount: Deactivated successfully. Feb 9 23:03:49.945886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299-shm.mount: Deactivated successfully. Feb 9 23:03:49.946081 systemd[1]: var-lib-kubelet-pods-e90a13e0\x2d74e9\x2d47f4\x2d857d\x2d13c07766c9ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnxhb4.mount: Deactivated successfully. Feb 9 23:03:49.946286 systemd[1]: var-lib-kubelet-pods-e90a13e0\x2d74e9\x2d47f4\x2d857d\x2d13c07766c9ca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 23:03:49.946399 systemd[1]: var-lib-kubelet-pods-e90a13e0\x2d74e9\x2d47f4\x2d857d\x2d13c07766c9ca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 23:03:50.517911 kubelet[2697]: I0209 23:03:50.517802 2697 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=db5af86e-ef00-49c9-bff2-d87e0468d58f path="/var/lib/kubelet/pods/db5af86e-ef00-49c9-bff2-d87e0468d58f/volumes" Feb 9 23:03:50.518959 kubelet[2697]: I0209 23:03:50.518887 2697 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e90a13e0-74e9-47f4-857d-13c07766c9ca path="/var/lib/kubelet/pods/e90a13e0-74e9-47f4-857d-13c07766c9ca/volumes" Feb 9 23:03:50.529949 sshd[6778]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 9 23:03:50.878718 sshd[6804]: pam_unix(sshd:session): session closed for user core Feb 9 23:03:50.880182 systemd[1]: Started sshd@147-147.75.49.127:22-139.178.89.65:32792.service. 
Feb 9 23:03:50.880474 systemd[1]: sshd@146-147.75.49.127:22-139.178.89.65:50618.service: Deactivated successfully. Feb 9 23:03:50.881055 systemd[1]: session-88.scope: Deactivated successfully. Feb 9 23:03:50.881103 systemd-logind[1544]: Session 88 logged out. Waiting for processes to exit. Feb 9 23:03:50.881564 systemd-logind[1544]: Removed session 88. Feb 9 23:03:50.914654 sshd[6983]: Accepted publickey for core from 139.178.89.65 port 32792 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs Feb 9 23:03:50.915586 sshd[6983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 23:03:50.918742 systemd-logind[1544]: New session 89 of user core. Feb 9 23:03:50.919435 systemd[1]: Started session-89.scope. Feb 9 23:03:51.167099 sshd[6750]: Failed password for root from 218.92.0.22 port 28414 ssh2 Feb 9 23:03:51.390521 sshd[6983]: pam_unix(sshd:session): session closed for user core Feb 9 23:03:51.392396 systemd[1]: Started sshd@148-147.75.49.127:22-139.178.89.65:32796.service. Feb 9 23:03:51.392733 systemd[1]: sshd@147-147.75.49.127:22-139.178.89.65:32792.service: Deactivated successfully. Feb 9 23:03:51.393270 systemd-logind[1544]: Session 89 logged out. Waiting for processes to exit. Feb 9 23:03:51.393270 systemd[1]: session-89.scope: Deactivated successfully. Feb 9 23:03:51.393731 systemd-logind[1544]: Removed session 89. 
Feb 9 23:03:51.397300 kubelet[2697]: I0209 23:03:51.397275 2697 topology_manager.go:210] "Topology Admit Handler" Feb 9 23:03:51.397408 kubelet[2697]: E0209 23:03:51.397324 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e90a13e0-74e9-47f4-857d-13c07766c9ca" containerName="apply-sysctl-overwrites" Feb 9 23:03:51.397408 kubelet[2697]: E0209 23:03:51.397336 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db5af86e-ef00-49c9-bff2-d87e0468d58f" containerName="cilium-operator" Feb 9 23:03:51.397408 kubelet[2697]: E0209 23:03:51.397342 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e90a13e0-74e9-47f4-857d-13c07766c9ca" containerName="cilium-agent" Feb 9 23:03:51.397408 kubelet[2697]: E0209 23:03:51.397348 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e90a13e0-74e9-47f4-857d-13c07766c9ca" containerName="clean-cilium-state" Feb 9 23:03:51.397408 kubelet[2697]: E0209 23:03:51.397356 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e90a13e0-74e9-47f4-857d-13c07766c9ca" containerName="mount-cgroup" Feb 9 23:03:51.397408 kubelet[2697]: E0209 23:03:51.397366 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e90a13e0-74e9-47f4-857d-13c07766c9ca" containerName="mount-bpf-fs" Feb 9 23:03:51.397408 kubelet[2697]: I0209 23:03:51.397389 2697 memory_manager.go:346] "RemoveStaleState removing state" podUID="db5af86e-ef00-49c9-bff2-d87e0468d58f" containerName="cilium-operator" Feb 9 23:03:51.397408 kubelet[2697]: I0209 23:03:51.397397 2697 memory_manager.go:346] "RemoveStaleState removing state" podUID="e90a13e0-74e9-47f4-857d-13c07766c9ca" containerName="cilium-agent" Feb 9 23:03:51.427473 sshd[7008]: Accepted publickey for core from 139.178.89.65 port 32796 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs Feb 9 23:03:51.428312 sshd[7008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 23:03:51.430841 
systemd-logind[1544]: New session 90 of user core. Feb 9 23:03:51.431418 systemd[1]: Started session-90.scope. Feb 9 23:03:51.457808 kubelet[2697]: I0209 23:03:51.457788 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/477b4641-060e-4079-8dd1-8432c3988527-clustermesh-secrets\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.457928 kubelet[2697]: I0209 23:03:51.457838 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-host-proc-sys-net\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.457928 kubelet[2697]: I0209 23:03:51.457892 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-xtables-lock\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458023 kubelet[2697]: I0209 23:03:51.457931 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cilium-run\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458023 kubelet[2697]: I0209 23:03:51.457978 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-bpf-maps\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458023 
kubelet[2697]: I0209 23:03:51.458020 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-lib-modules\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458155 kubelet[2697]: I0209 23:03:51.458075 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cni-path\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458155 kubelet[2697]: I0209 23:03:51.458111 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/477b4641-060e-4079-8dd1-8432c3988527-cilium-ipsec-secrets\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458277 kubelet[2697]: I0209 23:03:51.458164 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-etc-cni-netd\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458277 kubelet[2697]: I0209 23:03:51.458226 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/477b4641-060e-4079-8dd1-8432c3988527-cilium-config-path\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458393 kubelet[2697]: I0209 23:03:51.458292 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cilium-cgroup\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458393 kubelet[2697]: I0209 23:03:51.458361 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/477b4641-060e-4079-8dd1-8432c3988527-hubble-tls\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458520 kubelet[2697]: I0209 23:03:51.458403 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-host-proc-sys-kernel\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458520 kubelet[2697]: I0209 23:03:51.458443 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8w9j\" (UniqueName: \"kubernetes.io/projected/477b4641-060e-4079-8dd1-8432c3988527-kube-api-access-s8w9j\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.458520 kubelet[2697]: I0209 23:03:51.458481 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-hostproc\") pod \"cilium-n7bxf\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " pod="kube-system/cilium-n7bxf" Feb 9 23:03:51.581438 sshd[7008]: pam_unix(sshd:session): session closed for user core Feb 9 23:03:51.583404 systemd[1]: Started sshd@149-147.75.49.127:22-139.178.89.65:32810.service. 
Feb 9 23:03:51.583718 systemd[1]: sshd@148-147.75.49.127:22-139.178.89.65:32796.service: Deactivated successfully. Feb 9 23:03:51.584437 systemd[1]: session-90.scope: Deactivated successfully. Feb 9 23:03:51.584446 systemd-logind[1544]: Session 90 logged out. Waiting for processes to exit. Feb 9 23:03:51.584917 systemd-logind[1544]: Removed session 90. Feb 9 23:03:51.620435 sshd[7038]: Accepted publickey for core from 139.178.89.65 port 32810 ssh2: RSA SHA256:ya3CuIx5HRXQ7ikfrirbGy0PeU2mVoIERJKJ2pM2LHs Feb 9 23:03:51.621185 sshd[7038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 23:03:51.623487 systemd-logind[1544]: New session 91 of user core. Feb 9 23:03:51.623996 systemd[1]: Started session-91.scope. Feb 9 23:03:51.701058 env[1558]: time="2024-02-09T23:03:51.700821974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n7bxf,Uid:477b4641-060e-4079-8dd1-8432c3988527,Namespace:kube-system,Attempt:0,}" Feb 9 23:03:51.715244 env[1558]: time="2024-02-09T23:03:51.715153218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 23:03:51.715244 env[1558]: time="2024-02-09T23:03:51.715196483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 23:03:51.715244 env[1558]: time="2024-02-09T23:03:51.715216179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 23:03:51.715481 env[1558]: time="2024-02-09T23:03:51.715376055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c pid=7066 runtime=io.containerd.runc.v2 Feb 9 23:03:51.749853 env[1558]: time="2024-02-09T23:03:51.749822719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n7bxf,Uid:477b4641-060e-4079-8dd1-8432c3988527,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\"" Feb 9 23:03:51.751829 env[1558]: time="2024-02-09T23:03:51.751240545Z" level=info msg="CreateContainer within sandbox \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 23:03:51.756385 env[1558]: time="2024-02-09T23:03:51.756337957Z" level=info msg="CreateContainer within sandbox \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9450954a6181d3a4130ab398bc378c4f0eec43028aa9912e01c14ae59a4f2286\"" Feb 9 23:03:51.756573 env[1558]: time="2024-02-09T23:03:51.756530317Z" level=info msg="StartContainer for \"9450954a6181d3a4130ab398bc378c4f0eec43028aa9912e01c14ae59a4f2286\"" Feb 9 23:03:51.781636 env[1558]: time="2024-02-09T23:03:51.781608123Z" level=info msg="StartContainer for \"9450954a6181d3a4130ab398bc378c4f0eec43028aa9912e01c14ae59a4f2286\" returns successfully" Feb 9 23:03:51.800648 env[1558]: time="2024-02-09T23:03:51.800618221Z" level=info msg="shim disconnected" id=9450954a6181d3a4130ab398bc378c4f0eec43028aa9912e01c14ae59a4f2286 Feb 9 23:03:51.800648 env[1558]: time="2024-02-09T23:03:51.800648900Z" level=warning msg="cleaning up after shim disconnected" id=9450954a6181d3a4130ab398bc378c4f0eec43028aa9912e01c14ae59a4f2286 namespace=k8s.io Feb 9 
23:03:51.800792 env[1558]: time="2024-02-09T23:03:51.800655371Z" level=info msg="cleaning up dead shim" Feb 9 23:03:51.805100 env[1558]: time="2024-02-09T23:03:51.805035850Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7151 runtime=io.containerd.runc.v2\n" Feb 9 23:03:51.876118 sshd[6750]: Received disconnect from 218.92.0.22 port 28414:11: [preauth] Feb 9 23:03:51.876118 sshd[6750]: Disconnected from authenticating user root 218.92.0.22 port 28414 [preauth] Feb 9 23:03:51.876696 sshd[6750]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 9 23:03:51.878706 systemd[1]: sshd@142-147.75.49.127:22-218.92.0.22:28414.service: Deactivated successfully. Feb 9 23:03:52.523817 env[1558]: time="2024-02-09T23:03:52.523787965Z" level=info msg="StopPodSandbox for \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\"" Feb 9 23:03:52.523937 env[1558]: time="2024-02-09T23:03:52.523854622Z" level=info msg="TearDown network for sandbox \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\" successfully" Feb 9 23:03:52.523937 env[1558]: time="2024-02-09T23:03:52.523892938Z" level=info msg="StopPodSandbox for \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\" returns successfully" Feb 9 23:03:52.524191 env[1558]: time="2024-02-09T23:03:52.524148488Z" level=info msg="RemovePodSandbox for \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\"" Feb 9 23:03:52.524191 env[1558]: time="2024-02-09T23:03:52.524171010Z" level=info msg="Forcibly stopping sandbox \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\"" Feb 9 23:03:52.524282 env[1558]: time="2024-02-09T23:03:52.524224116Z" level=info msg="TearDown network for sandbox \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\" successfully" Feb 9 23:03:52.525690 env[1558]: time="2024-02-09T23:03:52.525646778Z" 
level=info msg="RemovePodSandbox \"391f03b47c7e4de0ea3a0c80e29eb4ee49872abcd5928ac132f3d04d299b2288\" returns successfully" Feb 9 23:03:52.525931 env[1558]: time="2024-02-09T23:03:52.525882716Z" level=info msg="StopPodSandbox for \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\"" Feb 9 23:03:52.525989 env[1558]: time="2024-02-09T23:03:52.525930779Z" level=info msg="TearDown network for sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" successfully" Feb 9 23:03:52.525989 env[1558]: time="2024-02-09T23:03:52.525954383Z" level=info msg="StopPodSandbox for \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" returns successfully" Feb 9 23:03:52.526165 env[1558]: time="2024-02-09T23:03:52.526110583Z" level=info msg="RemovePodSandbox for \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\"" Feb 9 23:03:52.526165 env[1558]: time="2024-02-09T23:03:52.526132529Z" level=info msg="Forcibly stopping sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\"" Feb 9 23:03:52.526254 env[1558]: time="2024-02-09T23:03:52.526192193Z" level=info msg="TearDown network for sandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" successfully" Feb 9 23:03:52.527563 env[1558]: time="2024-02-09T23:03:52.527517053Z" level=info msg="RemovePodSandbox \"8e892d841a174b6c2065db438e6b0df525c2fda31a60d9315a3b20e344e69299\" returns successfully" Feb 9 23:03:52.542109 sshd[6778]: Failed password for root from 218.92.0.29 port 44219 ssh2 Feb 9 23:03:52.747489 env[1558]: time="2024-02-09T23:03:52.747393233Z" level=info msg="StopPodSandbox for \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\"" Feb 9 23:03:52.748480 env[1558]: time="2024-02-09T23:03:52.747533378Z" level=info msg="Container to stop \"9450954a6181d3a4130ab398bc378c4f0eec43028aa9912e01c14ae59a4f2286\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 23:03:52.751522 
kubelet[2697]: E0209 23:03:52.751467 2697 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 23:03:52.755107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c-shm.mount: Deactivated successfully. Feb 9 23:03:52.776847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c-rootfs.mount: Deactivated successfully. Feb 9 23:03:52.778190 env[1558]: time="2024-02-09T23:03:52.778158515Z" level=info msg="shim disconnected" id=a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c Feb 9 23:03:52.778265 env[1558]: time="2024-02-09T23:03:52.778193495Z" level=warning msg="cleaning up after shim disconnected" id=a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c namespace=k8s.io Feb 9 23:03:52.778265 env[1558]: time="2024-02-09T23:03:52.778204735Z" level=info msg="cleaning up dead shim" Feb 9 23:03:52.782712 env[1558]: time="2024-02-09T23:03:52.782656588Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7189 runtime=io.containerd.runc.v2\n" Feb 9 23:03:52.782861 env[1558]: time="2024-02-09T23:03:52.782841433Z" level=info msg="TearDown network for sandbox \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\" successfully" Feb 9 23:03:52.782899 env[1558]: time="2024-02-09T23:03:52.782863111Z" level=info msg="StopPodSandbox for \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\" returns successfully" Feb 9 23:03:52.867212 kubelet[2697]: I0209 23:03:52.867113 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-xtables-lock\") pod 
\"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.867212 kubelet[2697]: I0209 23:03:52.867206 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-lib-modules\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.867849 kubelet[2697]: I0209 23:03:52.867269 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-bpf-maps\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.867849 kubelet[2697]: I0209 23:03:52.867294 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.867849 kubelet[2697]: I0209 23:03:52.867327 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cilium-cgroup\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.867849 kubelet[2697]: I0209 23:03:52.867376 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.867849 kubelet[2697]: I0209 23:03:52.867393 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.868708 kubelet[2697]: I0209 23:03:52.867468 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-host-proc-sys-net\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.868708 kubelet[2697]: I0209 23:03:52.867454 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.868708 kubelet[2697]: I0209 23:03:52.867524 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.868708 kubelet[2697]: I0209 23:03:52.867566 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/477b4641-060e-4079-8dd1-8432c3988527-clustermesh-secrets\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.868708 kubelet[2697]: I0209 23:03:52.867629 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cni-path\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.868708 kubelet[2697]: I0209 23:03:52.867698 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-hostproc\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.869416 kubelet[2697]: I0209 23:03:52.867703 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cni-path" (OuterVolumeSpecName: "cni-path") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.869416 kubelet[2697]: I0209 23:03:52.867767 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-etc-cni-netd\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.869416 kubelet[2697]: I0209 23:03:52.867762 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-hostproc" (OuterVolumeSpecName: "hostproc") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.869416 kubelet[2697]: I0209 23:03:52.867850 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/477b4641-060e-4079-8dd1-8432c3988527-hubble-tls\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.869416 kubelet[2697]: I0209 23:03:52.867973 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cilium-run\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.869988 kubelet[2697]: I0209 23:03:52.867925 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.869988 kubelet[2697]: I0209 23:03:52.868060 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.869988 kubelet[2697]: I0209 23:03:52.868090 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/477b4641-060e-4079-8dd1-8432c3988527-cilium-config-path\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.869988 kubelet[2697]: I0209 23:03:52.868208 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-host-proc-sys-kernel\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.869988 kubelet[2697]: I0209 23:03:52.868279 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8w9j\" (UniqueName: \"kubernetes.io/projected/477b4641-060e-4079-8dd1-8432c3988527-kube-api-access-s8w9j\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.870505 kubelet[2697]: I0209 23:03:52.868347 2697 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/477b4641-060e-4079-8dd1-8432c3988527-cilium-ipsec-secrets\") pod \"477b4641-060e-4079-8dd1-8432c3988527\" (UID: \"477b4641-060e-4079-8dd1-8432c3988527\") " Feb 9 23:03:52.870505 kubelet[2697]: I0209 23:03:52.868340 2697 
operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 23:03:52.870505 kubelet[2697]: I0209 23:03:52.868461 2697 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cilium-run\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:52.870505 kubelet[2697]: W0209 23:03:52.868453 2697 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/477b4641-060e-4079-8dd1-8432c3988527/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 23:03:52.870505 kubelet[2697]: I0209 23:03:52.868500 2697 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-xtables-lock\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:52.870505 kubelet[2697]: I0209 23:03:52.868545 2697 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-lib-modules\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:52.870505 kubelet[2697]: I0209 23:03:52.868578 2697 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-bpf-maps\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\"" Feb 9 23:03:52.871384 kubelet[2697]: I0209 23:03:52.868608 2697 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cilium-cgroup\") on 
node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.871384 kubelet[2697]: I0209 23:03:52.868641 2697 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-host-proc-sys-net\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.871384 kubelet[2697]: I0209 23:03:52.868674 2697 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-cni-path\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.871384 kubelet[2697]: I0209 23:03:52.868706 2697 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-etc-cni-netd\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.871384 kubelet[2697]: I0209 23:03:52.868739 2697 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-hostproc\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.873532 kubelet[2697]: I0209 23:03:52.873493 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477b4641-060e-4079-8dd1-8432c3988527-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 23:03:52.873743 kubelet[2697]: I0209 23:03:52.873705 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477b4641-060e-4079-8dd1-8432c3988527-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 23:03:52.873743 kubelet[2697]: I0209 23:03:52.873724 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477b4641-060e-4079-8dd1-8432c3988527-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 23:03:52.873818 kubelet[2697]: I0209 23:03:52.873808 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477b4641-060e-4079-8dd1-8432c3988527-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 23:03:52.873902 kubelet[2697]: I0209 23:03:52.873862 2697 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477b4641-060e-4079-8dd1-8432c3988527-kube-api-access-s8w9j" (OuterVolumeSpecName: "kube-api-access-s8w9j") pod "477b4641-060e-4079-8dd1-8432c3988527" (UID: "477b4641-060e-4079-8dd1-8432c3988527"). InnerVolumeSpecName "kube-api-access-s8w9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 23:03:52.875161 systemd[1]: var-lib-kubelet-pods-477b4641\x2d060e\x2d4079\x2d8dd1\x2d8432c3988527-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds8w9j.mount: Deactivated successfully.
Feb 9 23:03:52.875265 systemd[1]: var-lib-kubelet-pods-477b4641\x2d060e\x2d4079\x2d8dd1\x2d8432c3988527-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 23:03:52.875347 systemd[1]: var-lib-kubelet-pods-477b4641\x2d060e\x2d4079\x2d8dd1\x2d8432c3988527-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 23:03:52.875424 systemd[1]: var-lib-kubelet-pods-477b4641\x2d060e\x2d4079\x2d8dd1\x2d8432c3988527-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 23:03:52.969491 kubelet[2697]: I0209 23:03:52.969393 2697 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/477b4641-060e-4079-8dd1-8432c3988527-hubble-tls\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.969491 kubelet[2697]: I0209 23:03:52.969467 2697 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/477b4641-060e-4079-8dd1-8432c3988527-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.969491 kubelet[2697]: I0209 23:03:52.969504 2697 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/477b4641-060e-4079-8dd1-8432c3988527-cilium-config-path\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.970072 kubelet[2697]: I0209 23:03:52.969540 2697 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/477b4641-060e-4079-8dd1-8432c3988527-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.970072 kubelet[2697]: I0209 23:03:52.969573 2697 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-s8w9j\" (UniqueName: \"kubernetes.io/projected/477b4641-060e-4079-8dd1-8432c3988527-kube-api-access-s8w9j\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:52.970072 kubelet[2697]: I0209 23:03:52.969635 2697 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/477b4641-060e-4079-8dd1-8432c3988527-clustermesh-secrets\") on node \"ci-3510.3.2-a-e9037c933d\" DevicePath \"\""
Feb 9 23:03:53.752764 kubelet[2697]: I0209 23:03:53.752702 2697 scope.go:115] "RemoveContainer" containerID="9450954a6181d3a4130ab398bc378c4f0eec43028aa9912e01c14ae59a4f2286"
Feb 9 23:03:53.755463 env[1558]: time="2024-02-09T23:03:53.755334433Z" level=info msg="RemoveContainer for \"9450954a6181d3a4130ab398bc378c4f0eec43028aa9912e01c14ae59a4f2286\""
Feb 9 23:03:53.758545 env[1558]: time="2024-02-09T23:03:53.758505324Z" level=info msg="RemoveContainer for \"9450954a6181d3a4130ab398bc378c4f0eec43028aa9912e01c14ae59a4f2286\" returns successfully"
Feb 9 23:03:53.772147 kubelet[2697]: I0209 23:03:53.772127 2697 topology_manager.go:210] "Topology Admit Handler"
Feb 9 23:03:53.772272 kubelet[2697]: E0209 23:03:53.772161 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="477b4641-060e-4079-8dd1-8432c3988527" containerName="mount-cgroup"
Feb 9 23:03:53.772272 kubelet[2697]: I0209 23:03:53.772180 2697 memory_manager.go:346] "RemoveStaleState removing state" podUID="477b4641-060e-4079-8dd1-8432c3988527" containerName="mount-cgroup"
Feb 9 23:03:53.883053 kubelet[2697]: I0209 23:03:53.882952 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-hostproc\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.883353 kubelet[2697]: I0209 23:03:53.883116 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-bpf-maps\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.883353 kubelet[2697]: I0209 23:03:53.883306 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-clustermesh-secrets\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.883586 kubelet[2697]: I0209 23:03:53.883407 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-cilium-ipsec-secrets\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.883586 kubelet[2697]: I0209 23:03:53.883559 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-hubble-tls\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.883815 kubelet[2697]: I0209 23:03:53.883669 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-cilium-run\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.883815 kubelet[2697]: I0209 23:03:53.883761 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-lib-modules\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.884077 kubelet[2697]: I0209 23:03:53.883897 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkw8\" (UniqueName: \"kubernetes.io/projected/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-kube-api-access-pdkw8\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.884077 kubelet[2697]: I0209 23:03:53.884066 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-cilium-cgroup\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.884304 kubelet[2697]: I0209 23:03:53.884143 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-xtables-lock\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.884304 kubelet[2697]: I0209 23:03:53.884287 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-cilium-config-path\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.884548 kubelet[2697]: I0209 23:03:53.884406 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-host-proc-sys-kernel\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.884548 kubelet[2697]: I0209 23:03:53.884512 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-cni-path\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.884759 kubelet[2697]: I0209 23:03:53.884665 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-etc-cni-netd\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:53.884881 kubelet[2697]: I0209 23:03:53.884783 2697 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f371c5a4-e6ae-4c8f-95a3-a42744184d0b-host-proc-sys-net\") pod \"cilium-s7mh6\" (UID: \"f371c5a4-e6ae-4c8f-95a3-a42744184d0b\") " pod="kube-system/cilium-s7mh6"
Feb 9 23:03:54.047796 systemd[1]: Started sshd@150-147.75.49.127:22-218.92.0.22:10324.service.
Feb 9 23:03:54.075409 env[1558]: time="2024-02-09T23:03:54.075314829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7mh6,Uid:f371c5a4-e6ae-4c8f-95a3-a42744184d0b,Namespace:kube-system,Attempt:0,}"
Feb 9 23:03:54.091329 env[1558]: time="2024-02-09T23:03:54.091153161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 23:03:54.091329 env[1558]: time="2024-02-09T23:03:54.091250372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 23:03:54.091329 env[1558]: time="2024-02-09T23:03:54.091280178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 23:03:54.091747 env[1558]: time="2024-02-09T23:03:54.091637081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277 pid=7218 runtime=io.containerd.runc.v2
Feb 9 23:03:54.174283 env[1558]: time="2024-02-09T23:03:54.174201913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7mh6,Uid:f371c5a4-e6ae-4c8f-95a3-a42744184d0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\""
Feb 9 23:03:54.178621 env[1558]: time="2024-02-09T23:03:54.178550947Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 23:03:54.188915 env[1558]: time="2024-02-09T23:03:54.188808303Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6889969a12ec6d297d6a21e91a4965a1e208622752d9ca66a19e50acb9f04876\""
Feb 9 23:03:54.189608 env[1558]: time="2024-02-09T23:03:54.189505262Z" level=info msg="StartContainer for \"6889969a12ec6d297d6a21e91a4965a1e208622752d9ca66a19e50acb9f04876\""
Feb 9 23:03:54.306443 env[1558]: time="2024-02-09T23:03:54.306216147Z" level=info msg="StartContainer for \"6889969a12ec6d297d6a21e91a4965a1e208622752d9ca66a19e50acb9f04876\" returns successfully"
Feb 9 23:03:54.395023 env[1558]: time="2024-02-09T23:03:54.394832385Z" level=info msg="shim disconnected" id=6889969a12ec6d297d6a21e91a4965a1e208622752d9ca66a19e50acb9f04876
Feb 9 23:03:54.395023 env[1558]: time="2024-02-09T23:03:54.394981583Z" level=warning msg="cleaning up after shim disconnected" id=6889969a12ec6d297d6a21e91a4965a1e208622752d9ca66a19e50acb9f04876 namespace=k8s.io
Feb 9 23:03:54.395023 env[1558]: time="2024-02-09T23:03:54.395013188Z" level=info msg="cleaning up dead shim"
Feb 9 23:03:54.424612 env[1558]: time="2024-02-09T23:03:54.424406584Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7301 runtime=io.containerd.runc.v2\n"
Feb 9 23:03:54.518260 kubelet[2697]: I0209 23:03:54.518171 2697 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=477b4641-060e-4079-8dd1-8432c3988527 path="/var/lib/kubelet/pods/477b4641-060e-4079-8dd1-8432c3988527/volumes"
Feb 9 23:03:54.765846 env[1558]: time="2024-02-09T23:03:54.765701109Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 23:03:54.775796 env[1558]: time="2024-02-09T23:03:54.775749049Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"83ada5e2eb09f917aa410a0518b44719d0b338cecb1d0b85c35fa957d3708d09\""
Feb 9 23:03:54.776093 env[1558]: time="2024-02-09T23:03:54.776037178Z" level=info msg="StartContainer for \"83ada5e2eb09f917aa410a0518b44719d0b338cecb1d0b85c35fa957d3708d09\""
Feb 9 23:03:54.799361 env[1558]: time="2024-02-09T23:03:54.799334571Z" level=info msg="StartContainer for \"83ada5e2eb09f917aa410a0518b44719d0b338cecb1d0b85c35fa957d3708d09\" returns successfully"
Feb 9 23:03:54.812385 env[1558]: time="2024-02-09T23:03:54.812354799Z" level=info msg="shim disconnected" id=83ada5e2eb09f917aa410a0518b44719d0b338cecb1d0b85c35fa957d3708d09
Feb 9 23:03:54.812385 env[1558]: time="2024-02-09T23:03:54.812386336Z" level=warning msg="cleaning up after shim disconnected" id=83ada5e2eb09f917aa410a0518b44719d0b338cecb1d0b85c35fa957d3708d09 namespace=k8s.io
Feb 9 23:03:54.812539 env[1558]: time="2024-02-09T23:03:54.812393527Z" level=info msg="cleaning up dead shim"
Feb 9 23:03:54.817095 env[1558]: time="2024-02-09T23:03:54.817067096Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7362 runtime=io.containerd.runc.v2\n"
Feb 9 23:03:55.086578 sshd[6778]: Failed password for root from 218.92.0.29 port 44219 ssh2
Feb 9 23:03:55.346351 sshd[7209]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root
Feb 9 23:03:55.772997 env[1558]: time="2024-02-09T23:03:55.772894057Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 23:03:55.782536 env[1558]: time="2024-02-09T23:03:55.782512648Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"739b1148fbeea32ff799f445b753c6153c1b35c2b88531cb48d2db0c7b89bc0f\""
Feb 9 23:03:55.782828 env[1558]: time="2024-02-09T23:03:55.782815408Z" level=info msg="StartContainer for \"739b1148fbeea32ff799f445b753c6153c1b35c2b88531cb48d2db0c7b89bc0f\""
Feb 9 23:03:55.893674 env[1558]: time="2024-02-09T23:03:55.893543997Z" level=info msg="StartContainer for \"739b1148fbeea32ff799f445b753c6153c1b35c2b88531cb48d2db0c7b89bc0f\" returns successfully"
Feb 9 23:03:55.967053 env[1558]: time="2024-02-09T23:03:55.966945592Z" level=info msg="shim disconnected" id=739b1148fbeea32ff799f445b753c6153c1b35c2b88531cb48d2db0c7b89bc0f
Feb 9 23:03:55.967461 env[1558]: time="2024-02-09T23:03:55.967059009Z" level=warning msg="cleaning up after shim disconnected" id=739b1148fbeea32ff799f445b753c6153c1b35c2b88531cb48d2db0c7b89bc0f namespace=k8s.io
Feb 9 23:03:55.967461 env[1558]: time="2024-02-09T23:03:55.967092258Z" level=info msg="cleaning up dead shim"
Feb 9 23:03:55.985804 env[1558]: time="2024-02-09T23:03:55.985687936Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7418 runtime=io.containerd.runc.v2\n"
Feb 9 23:03:56.001168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-739b1148fbeea32ff799f445b753c6153c1b35c2b88531cb48d2db0c7b89bc0f-rootfs.mount: Deactivated successfully.
Feb 9 23:03:56.780233 env[1558]: time="2024-02-09T23:03:56.780138371Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 23:03:56.789068 env[1558]: time="2024-02-09T23:03:56.789023793Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c462cf12665d05c16dad11868e50f550a52acfeeba2b54f55bd8bcf6971e9ef\""
Feb 9 23:03:56.789483 env[1558]: time="2024-02-09T23:03:56.789453388Z" level=info msg="StartContainer for \"7c462cf12665d05c16dad11868e50f550a52acfeeba2b54f55bd8bcf6971e9ef\""
Feb 9 23:03:56.790751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1470971311.mount: Deactivated successfully.
Feb 9 23:03:56.829897 env[1558]: time="2024-02-09T23:03:56.829871979Z" level=info msg="StartContainer for \"7c462cf12665d05c16dad11868e50f550a52acfeeba2b54f55bd8bcf6971e9ef\" returns successfully"
Feb 9 23:03:56.863318 env[1558]: time="2024-02-09T23:03:56.863256074Z" level=info msg="shim disconnected" id=7c462cf12665d05c16dad11868e50f550a52acfeeba2b54f55bd8bcf6971e9ef
Feb 9 23:03:56.863318 env[1558]: time="2024-02-09T23:03:56.863293992Z" level=warning msg="cleaning up after shim disconnected" id=7c462cf12665d05c16dad11868e50f550a52acfeeba2b54f55bd8bcf6971e9ef namespace=k8s.io
Feb 9 23:03:56.863318 env[1558]: time="2024-02-09T23:03:56.863303235Z" level=info msg="cleaning up dead shim"
Feb 9 23:03:56.868700 env[1558]: time="2024-02-09T23:03:56.868650912Z" level=warning msg="cleanup warnings time=\"2024-02-09T23:03:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7472 runtime=io.containerd.runc.v2\n"
Feb 9 23:03:56.997316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c462cf12665d05c16dad11868e50f550a52acfeeba2b54f55bd8bcf6971e9ef-rootfs.mount: Deactivated successfully.
Feb 9 23:03:57.047524 sshd[7209]: Failed password for root from 218.92.0.22 port 10324 ssh2
Feb 9 23:03:57.181652 sshd[6778]: Received disconnect from 218.92.0.29 port 44219:11: [preauth]
Feb 9 23:03:57.181652 sshd[6778]: Disconnected from authenticating user root 218.92.0.29 port 44219 [preauth]
Feb 9 23:03:57.182252 sshd[6778]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.29 user=root
Feb 9 23:03:57.184289 systemd[1]: sshd@144-147.75.49.127:22-218.92.0.29:44219.service: Deactivated successfully.
Feb 9 23:03:57.337975 systemd[1]: Started sshd@151-147.75.49.127:22-218.92.0.29:12735.service.
Feb 9 23:03:57.753246 kubelet[2697]: E0209 23:03:57.753147 2697 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 23:03:57.789111 env[1558]: time="2024-02-09T23:03:57.789020141Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 23:03:57.804479 env[1558]: time="2024-02-09T23:03:57.804320641Z" level=info msg="CreateContainer within sandbox \"a5353563c6b75c4fb3a5e3f3f8b67ae66f9b2124a75940628a3d170923da9277\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"692823f917d063ae5b44d3fa76e670957a2f50163845fa557de398a6942c598d\""
Feb 9 23:03:57.805380 env[1558]: time="2024-02-09T23:03:57.805262254Z" level=info msg="StartContainer for \"692823f917d063ae5b44d3fa76e670957a2f50163845fa557de398a6942c598d\""
Feb 9 23:03:57.891995 env[1558]: time="2024-02-09T23:03:57.891924529Z" level=info msg="StartContainer for \"692823f917d063ae5b44d3fa76e670957a2f50163845fa557de398a6942c598d\" returns successfully"
Feb 9 23:03:58.072865 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 23:03:58.354019 sshd[7486]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.29 user=root
Feb 9 23:03:58.827015 kubelet[2697]: I0209 23:03:58.826799 2697 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-s7mh6" podStartSLOduration=5.826718913 pod.CreationTimestamp="2024-02-09 23:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 23:03:58.826327564 +0000 UTC m=+786.383082821" watchObservedRunningTime="2024-02-09 23:03:58.826718913 +0000 UTC m=+786.383474140"
Feb 9 23:03:59.917085 sshd[7209]: Failed password for root from 218.92.0.22 port 10324 ssh2
Feb 9 23:03:59.983272 systemd[1]: Started sshd@152-147.75.49.127:22-216.10.245.180:33316.service.
Feb 9 23:04:00.050898 kubelet[2697]: I0209 23:04:00.050811 2697 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-e9037c933d" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 23:04:00.050698994 +0000 UTC m=+787.607454262 LastTransitionTime:2024-02-09 23:04:00.050698994 +0000 UTC m=+787.607454262 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 23:04:00.466024 sshd[7486]: Failed password for root from 218.92.0.29 port 12735 ssh2
Feb 9 23:04:00.942853 systemd-networkd[1409]: lxc_health: Link UP
Feb 9 23:04:00.964644 systemd-networkd[1409]: lxc_health: Gained carrier
Feb 9 23:04:00.964863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 23:04:01.371156 sshd[8000]: Invalid user kara from 216.10.245.180 port 33316
Feb 9 23:04:01.372348 sshd[8000]: pam_faillock(sshd:auth): User unknown
Feb 9 23:04:01.372538 sshd[8000]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:04:01.372556 sshd[8000]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=216.10.245.180
Feb 9 23:04:01.372739 sshd[8000]: pam_faillock(sshd:auth): User unknown
Feb 9 23:04:02.005964 sshd[7486]: Failed password for root from 218.92.0.29 port 12735 ssh2
Feb 9 23:04:02.037910 systemd-networkd[1409]: lxc_health: Gained IPv6LL
Feb 9 23:04:03.229220 sshd[8000]: Failed password for invalid user kara from 216.10.245.180 port 33316 ssh2
Feb 9 23:04:03.518105 sshd[8000]: Received disconnect from 216.10.245.180 port 33316:11: Bye Bye [preauth]
Feb 9 23:04:03.518105 sshd[8000]: Disconnected from invalid user kara 216.10.245.180 port 33316 [preauth]
Feb 9 23:04:03.520334 systemd[1]: sshd@152-147.75.49.127:22-216.10.245.180:33316.service: Deactivated successfully.
Feb 9 23:04:04.667812 sshd[7209]: Failed password for root from 218.92.0.22 port 10324 ssh2
Feb 9 23:04:05.215077 sshd[7486]: Failed password for root from 218.92.0.29 port 12735 ssh2
Feb 9 23:04:06.708565 sshd[7209]: Received disconnect from 218.92.0.22 port 10324:11: [preauth]
Feb 9 23:04:06.708565 sshd[7209]: Disconnected from authenticating user root 218.92.0.22 port 10324 [preauth]
Feb 9 23:04:06.709116 sshd[7209]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root
Feb 9 23:04:06.711172 systemd[1]: sshd@150-147.75.49.127:22-218.92.0.22:10324.service: Deactivated successfully.
Feb 9 23:04:06.930752 systemd[1]: Started sshd@153-147.75.49.127:22-218.92.0.22:56919.service.
Feb 9 23:04:07.269162 sshd[7486]: Received disconnect from 218.92.0.29 port 12735:11: [preauth]
Feb 9 23:04:07.269162 sshd[7486]: Disconnected from authenticating user root 218.92.0.29 port 12735 [preauth]
Feb 9 23:04:07.269660 sshd[7486]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.29 user=root
Feb 9 23:04:07.271733 systemd[1]: sshd@151-147.75.49.127:22-218.92.0.29:12735.service: Deactivated successfully.
Feb 9 23:04:07.424727 systemd[1]: Started sshd@154-147.75.49.127:22-218.92.0.29:15948.service.
Feb 9 23:04:08.085102 sshd[8321]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root
Feb 9 23:04:08.430900 sshd[8325]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.29 user=root
Feb 9 23:04:09.902101 sshd[8321]: Failed password for root from 218.92.0.22 port 56919 ssh2
Feb 9 23:04:10.247912 sshd[8325]: Failed password for root from 218.92.0.29 port 15948 ssh2
Feb 9 23:04:12.465904 sshd[8321]: Failed password for root from 218.92.0.22 port 56919 ssh2
Feb 9 23:04:12.789813 sshd[8325]: Failed password for root from 218.92.0.29 port 15948 ssh2
Feb 9 23:04:14.027491 sshd[8321]: Failed password for root from 218.92.0.22 port 56919 ssh2
Feb 9 23:04:14.328514 sshd[8325]: Failed password for root from 218.92.0.29 port 15948 ssh2
Feb 9 23:04:14.406006 systemd[1]: Started sshd@155-147.75.49.127:22-124.222.36.153:44006.service.
Feb 9 23:04:14.966104 sshd[8321]: Received disconnect from 218.92.0.22 port 56919:11: [preauth]
Feb 9 23:04:14.966104 sshd[8321]: Disconnected from authenticating user root 218.92.0.22 port 56919 [preauth]
Feb 9 23:04:14.966666 sshd[8321]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root
Feb 9 23:04:14.968755 systemd[1]: sshd@153-147.75.49.127:22-218.92.0.22:56919.service: Deactivated successfully.
Feb 9 23:04:15.228944 sshd[8325]: Received disconnect from 218.92.0.29 port 15948:11: [preauth]
Feb 9 23:04:15.228944 sshd[8325]: Disconnected from authenticating user root 218.92.0.29 port 15948 [preauth]
Feb 9 23:04:15.229332 sshd[8325]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.29 user=root
Feb 9 23:04:15.231390 systemd[1]: sshd@154-147.75.49.127:22-218.92.0.29:15948.service: Deactivated successfully.
Feb 9 23:04:22.729338 systemd[1]: Started sshd@156-147.75.49.127:22-124.220.165.94:51580.service.
Feb 9 23:04:23.593371 sshd[8533]: Invalid user transadmin from 124.220.165.94 port 51580
Feb 9 23:04:23.599305 sshd[8533]: pam_faillock(sshd:auth): User unknown
Feb 9 23:04:23.600373 sshd[8533]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:04:23.600461 sshd[8533]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.220.165.94
Feb 9 23:04:23.601359 sshd[8533]: pam_faillock(sshd:auth): User unknown
Feb 9 23:04:25.478078 sshd[8533]: Failed password for invalid user transadmin from 124.220.165.94 port 51580 ssh2
Feb 9 23:04:26.099492 sshd[8533]: Received disconnect from 124.220.165.94 port 51580:11: Bye Bye [preauth]
Feb 9 23:04:26.099492 sshd[8533]: Disconnected from invalid user transadmin 124.220.165.94 port 51580 [preauth]
Feb 9 23:04:26.102059 systemd[1]: sshd@156-147.75.49.127:22-124.220.165.94:51580.service: Deactivated successfully.
Feb 9 23:04:26.411663 sshd[8414]: Connection closed by 124.222.36.153 port 44006 [preauth]
Feb 9 23:04:26.413352 systemd[1]: sshd@155-147.75.49.127:22-124.222.36.153:44006.service: Deactivated successfully.
Feb 9 23:04:26.455397 systemd[1]: Started sshd@157-147.75.49.127:22-157.230.254.228:42824.service.
Feb 9 23:04:27.461317 sshd[8595]: Invalid user rezaul from 157.230.254.228 port 42824
Feb 9 23:04:27.467512 sshd[8595]: pam_faillock(sshd:auth): User unknown
Feb 9 23:04:27.468653 sshd[8595]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:04:27.468743 sshd[8595]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=157.230.254.228
Feb 9 23:04:27.469763 sshd[8595]: pam_faillock(sshd:auth): User unknown
Feb 9 23:04:29.562796 sshd[8595]: Failed password for invalid user rezaul from 157.230.254.228 port 42824 ssh2
Feb 9 23:04:31.520787 sshd[8595]: Received disconnect from 157.230.254.228 port 42824:11: Bye Bye [preauth]
Feb 9 23:04:31.520787 sshd[8595]: Disconnected from invalid user rezaul 157.230.254.228 port 42824 [preauth]
Feb 9 23:04:31.523492 systemd[1]: sshd@157-147.75.49.127:22-157.230.254.228:42824.service: Deactivated successfully.
Feb 9 23:04:39.640391 systemd[1]: Started sshd@158-147.75.49.127:22-117.102.64.108:41674.service.
Feb 9 23:04:40.724049 sshd[8804]: Invalid user raquel from 117.102.64.108 port 41674
Feb 9 23:04:40.730344 sshd[8804]: pam_faillock(sshd:auth): User unknown
Feb 9 23:04:40.731686 sshd[8804]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 23:04:40.731805 sshd[8804]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.102.64.108
Feb 9 23:04:40.732966 sshd[8804]: pam_faillock(sshd:auth): User unknown
Feb 9 23:04:43.277532 sshd[8804]: Failed password for invalid user raquel from 117.102.64.108 port 41674 ssh2
Feb 9 23:04:44.120525 sshd[8804]: Received disconnect from 117.102.64.108 port 41674:11: Bye Bye [preauth]
Feb 9 23:04:44.120525 sshd[8804]: Disconnected from invalid user raquel 117.102.64.108 port 41674 [preauth]
Feb 9 23:04:44.123179 systemd[1]: sshd@158-147.75.49.127:22-117.102.64.108:41674.service: Deactivated successfully.
Feb 9 23:04:49.937366 systemd[1]: Starting systemd-tmpfiles-clean.service...
Feb 9 23:04:49.946230 systemd-tmpfiles[8928]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 23:04:49.946464 systemd-tmpfiles[8928]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 23:04:49.947234 systemd-tmpfiles[8928]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 23:04:49.958496 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 9 23:04:49.958819 systemd[1]: Finished systemd-tmpfiles-clean.service.
Feb 9 23:04:49.960322 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 9 23:04:51.972628 sshd[7038]: pam_unix(sshd:session): session closed for user core
Feb 9 23:04:51.974060 systemd[1]: sshd@149-147.75.49.127:22-139.178.89.65:32810.service: Deactivated successfully.
Feb 9 23:04:51.974728 systemd-logind[1544]: Session 91 logged out. Waiting for processes to exit.
Feb 9 23:04:51.974806 systemd[1]: session-91.scope: Deactivated successfully.
Feb 9 23:04:51.975419 systemd-logind[1544]: Removed session 91.
Feb 9 23:04:52.529093 env[1558]: time="2024-02-09T23:04:52.529036653Z" level=info msg="StopPodSandbox for \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\""
Feb 9 23:04:52.529409 env[1558]: time="2024-02-09T23:04:52.529111590Z" level=info msg="TearDown network for sandbox \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\" successfully"
Feb 9 23:04:52.529409 env[1558]: time="2024-02-09T23:04:52.529142997Z" level=info msg="StopPodSandbox for \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\" returns successfully"
Feb 9 23:04:52.529409 env[1558]: time="2024-02-09T23:04:52.529355800Z" level=info msg="RemovePodSandbox for \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\""
Feb 9 23:04:52.529409 env[1558]: time="2024-02-09T23:04:52.529382658Z" level=info msg="Forcibly stopping sandbox \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\""
Feb 9 23:04:52.529543 env[1558]: time="2024-02-09T23:04:52.529442600Z" level=info msg="TearDown network for sandbox \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\" successfully"
Feb 9 23:04:52.531049 env[1558]: time="2024-02-09T23:04:52.531003458Z" level=info msg="RemovePodSandbox \"a4b452bd385e22c78cdc4309dc6e0ab1fd291a125df48e839a48852047f5dd6c\" returns successfully"