Feb 13 05:12:33.552068 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 13 05:12:33.552081 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 05:12:33.552088 kernel: BIOS-provided physical RAM map:
Feb 13 05:12:33.552092 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 05:12:33.552096 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 05:12:33.552100 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 05:12:33.552104 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 05:12:33.552108 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 05:12:33.552112 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819e2fff] usable
Feb 13 05:12:33.552116 kernel: BIOS-e820: [mem 0x00000000819e3000-0x00000000819e3fff] ACPI NVS
Feb 13 05:12:33.552121 kernel: BIOS-e820: [mem 0x00000000819e4000-0x00000000819e4fff] reserved
Feb 13 05:12:33.552125 kernel: BIOS-e820: [mem 0x00000000819e5000-0x000000008afccfff] usable
Feb 13 05:12:33.552128 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 13 05:12:33.552132 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 13 05:12:33.552137 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 13 05:12:33.552143 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 13 05:12:33.552147 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 13 05:12:33.552151 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 13 05:12:33.552155 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 05:12:33.552159 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 05:12:33.552163 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 05:12:33.552167 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 05:12:33.552172 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 05:12:33.552176 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 13 05:12:33.552180 kernel: NX (Execute Disable) protection: active
Feb 13 05:12:33.552184 kernel: SMBIOS 3.2.1 present.
Feb 13 05:12:33.552189 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022
Feb 13 05:12:33.552193 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 05:12:33.552197 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 05:12:33.552202 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 05:12:33.552206 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 05:12:33.552211 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 13 05:12:33.552215 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 05:12:33.552219 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 13 05:12:33.552224 kernel: Using GB pages for direct mapping
Feb 13 05:12:33.552228 kernel: ACPI: Early table checksum verification disabled
Feb 13 05:12:33.552233 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 05:12:33.552237 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 05:12:33.552242 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 13 05:12:33.552246 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 05:12:33.552252 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 13 05:12:33.552257 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 13 05:12:33.552262 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 13 05:12:33.552267 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 05:12:33.552272 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 05:12:33.552276 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 05:12:33.552281 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 05:12:33.552286 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 05:12:33.552290 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 05:12:33.552295 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:12:33.552300 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 05:12:33.552305 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 05:12:33.552310 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:12:33.552314 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:12:33.552319 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 05:12:33.552323 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 05:12:33.552328 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:12:33.552333 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:12:33.552338 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 05:12:33.552343 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 13 05:12:33.552347 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 05:12:33.552352 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 05:12:33.552357 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 05:12:33.552361 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 13 05:12:33.552366 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 05:12:33.552370 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 05:12:33.552375 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 05:12:33.552380 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 05:12:33.552385 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 05:12:33.552390 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 13 05:12:33.552394 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 13 05:12:33.552399 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 13 05:12:33.552404 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 13 05:12:33.552408 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 13 05:12:33.552413 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 13 05:12:33.552418 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 13 05:12:33.552423 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 13 05:12:33.552428 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 13 05:12:33.552432 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 13 05:12:33.552437 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 13 05:12:33.552441 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 13 05:12:33.552446 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 13 05:12:33.552450 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 13 05:12:33.552455 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 13 05:12:33.552460 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 13 05:12:33.552465 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 13 05:12:33.552470 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 13 05:12:33.552474 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 13 05:12:33.552479 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 13 05:12:33.552483 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 13 05:12:33.552488 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 13 05:12:33.552493 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 13 05:12:33.552497 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 13 05:12:33.552503 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 13 05:12:33.552507 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 13 05:12:33.552512 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 13 05:12:33.552516 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 13 05:12:33.552521 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 13 05:12:33.552526 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 13 05:12:33.552530 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 13 05:12:33.552535 kernel: No NUMA configuration found
Feb 13 05:12:33.552540 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 13 05:12:33.552545 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 13 05:12:33.552550 kernel: Zone ranges:
Feb 13 05:12:33.552554 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 05:12:33.552559 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 05:12:33.552564 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 13 05:12:33.552568 kernel: Movable zone start for each node
Feb 13 05:12:33.552573 kernel: Early memory node ranges
Feb 13 05:12:33.552578 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 05:12:33.552584 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 05:12:33.552589 kernel: node 0: [mem 0x0000000040400000-0x00000000819e2fff]
Feb 13 05:12:33.552595 kernel: node 0: [mem 0x00000000819e5000-0x000000008afccfff]
Feb 13 05:12:33.552599 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 13 05:12:33.552604 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 13 05:12:33.552608 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 13 05:12:33.552613 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 13 05:12:33.552618 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 05:12:33.552626 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 05:12:33.552632 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 05:12:33.552650 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 05:12:33.552655 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 13 05:12:33.552661 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 13 05:12:33.552666 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 13 05:12:33.552671 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 13 05:12:33.552676 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 05:12:33.552681 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 05:12:33.552685 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 05:12:33.552690 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 05:12:33.552696 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 05:12:33.552701 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 05:12:33.552705 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 05:12:33.552710 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 05:12:33.552715 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 05:12:33.552720 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 05:12:33.552725 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 05:12:33.552729 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 05:12:33.552734 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 05:12:33.552740 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 05:12:33.552745 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 05:12:33.552750 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 05:12:33.552754 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 05:12:33.552759 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 05:12:33.552764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 05:12:33.552769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 05:12:33.552774 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 05:12:33.552779 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 05:12:33.552784 kernel: TSC deadline timer available
Feb 13 05:12:33.552789 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 05:12:33.552794 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 13 05:12:33.552799 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 05:12:33.552804 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 05:12:33.552809 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 13 05:12:33.552814 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 13 05:12:33.552819 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 13 05:12:33.552824 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 05:12:33.552829 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 05:12:33.552834 kernel: Policy zone: Normal
Feb 13 05:12:33.552839 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 05:12:33.552845 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 05:12:33.552849 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 05:12:33.552854 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 05:12:33.552859 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 05:12:33.552864 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 13 05:12:33.552870 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 05:12:33.552875 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 13 05:12:33.552880 kernel: ftrace: allocated 135 pages with 4 groups
Feb 13 05:12:33.552885 kernel: rcu: Hierarchical RCU implementation.
Feb 13 05:12:33.552890 kernel: rcu: RCU event tracing is enabled.
Feb 13 05:12:33.552895 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 05:12:33.552900 kernel: Rude variant of Tasks RCU enabled.
Feb 13 05:12:33.552905 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 05:12:33.552910 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 05:12:33.552915 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 05:12:33.552920 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 05:12:33.552925 kernel: random: crng init done
Feb 13 05:12:33.552930 kernel: Console: colour dummy device 80x25
Feb 13 05:12:33.552934 kernel: printk: console [tty0] enabled
Feb 13 05:12:33.552939 kernel: printk: console [ttyS1] enabled
Feb 13 05:12:33.552944 kernel: ACPI: Core revision 20210730
Feb 13 05:12:33.552949 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 05:12:33.552954 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 05:12:33.552960 kernel: DMAR: Host address width 39
Feb 13 05:12:33.552965 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 05:12:33.552970 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 05:12:33.552975 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 05:12:33.552980 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 05:12:33.552985 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 05:12:33.552989 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 05:12:33.552994 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 05:12:33.552999 kernel: x2apic enabled
Feb 13 05:12:33.553005 kernel: Switched APIC routing to cluster x2apic.
Feb 13 05:12:33.553010 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 05:12:33.553015 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 05:12:33.553020 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 05:12:33.553025 kernel: process: using mwait in idle threads
Feb 13 05:12:33.553029 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 05:12:33.553034 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 05:12:33.553039 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 05:12:33.553044 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 05:12:33.553050 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 13 05:12:33.553055 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 05:12:33.553059 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 05:12:33.553064 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 05:12:33.553069 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 05:12:33.553074 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 13 05:12:33.553078 kernel: TAA: Mitigation: TSX disabled
Feb 13 05:12:33.553083 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 05:12:33.553088 kernel: SRBDS: Mitigation: Microcode
Feb 13 05:12:33.553093 kernel: GDS: Vulnerable: No microcode
Feb 13 05:12:33.553098 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 05:12:33.553103 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 05:12:33.553108 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 05:12:33.553113 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 05:12:33.553118 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 05:12:33.553123 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 05:12:33.553127 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 05:12:33.553132 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 05:12:33.553137 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 05:12:33.553142 kernel: Freeing SMP alternatives memory: 32K
Feb 13 05:12:33.553147 kernel: pid_max: default: 32768 minimum: 301
Feb 13 05:12:33.553151 kernel: LSM: Security Framework initializing
Feb 13 05:12:33.553156 kernel: SELinux: Initializing.
Feb 13 05:12:33.553162 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 05:12:33.553167 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 05:12:33.553171 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 05:12:33.553176 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 05:12:33.553181 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 05:12:33.553186 kernel: ... version: 4
Feb 13 05:12:33.553191 kernel: ... bit width: 48
Feb 13 05:12:33.553196 kernel: ... generic registers: 4
Feb 13 05:12:33.553201 kernel: ... value mask: 0000ffffffffffff
Feb 13 05:12:33.553206 kernel: ... max period: 00007fffffffffff
Feb 13 05:12:33.553211 kernel: ... fixed-purpose events: 3
Feb 13 05:12:33.553216 kernel: ... event mask: 000000070000000f
Feb 13 05:12:33.553221 kernel: signal: max sigframe size: 2032
Feb 13 05:12:33.553226 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 05:12:33.553230 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 05:12:33.553235 kernel: smp: Bringing up secondary CPUs ...
Feb 13 05:12:33.553240 kernel: x86: Booting SMP configuration:
Feb 13 05:12:33.553245 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 13 05:12:33.553250 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 05:12:33.553256 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 13 05:12:33.553261 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 05:12:33.553265 kernel: smpboot: Max logical packages: 1
Feb 13 05:12:33.553270 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 05:12:33.553275 kernel: devtmpfs: initialized
Feb 13 05:12:33.553280 kernel: x86/mm: Memory block size: 128MB
Feb 13 05:12:33.553285 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819e3000-0x819e3fff] (4096 bytes)
Feb 13 05:12:33.553290 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 13 05:12:33.553295 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 05:12:33.553300 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 05:12:33.553305 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 05:12:33.553310 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 05:12:33.553315 kernel: audit: initializing netlink subsys (disabled)
Feb 13 05:12:33.553320 kernel: audit: type=2000 audit(1707801148.040:1): state=initialized audit_enabled=0 res=1
Feb 13 05:12:33.553325 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 05:12:33.553330 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 05:12:33.553334 kernel: cpuidle: using governor menu
Feb 13 05:12:33.553340 kernel: ACPI: bus type PCI registered
Feb 13 05:12:33.553345 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 05:12:33.553350 kernel: dca service started, version 1.12.1
Feb 13 05:12:33.553355 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 05:12:33.553360 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 13 05:12:33.553365 kernel: PCI: Using configuration type 1 for base access
Feb 13 05:12:33.553369 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 05:12:33.553374 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 05:12:33.553379 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 05:12:33.553385 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 05:12:33.553390 kernel: ACPI: Added _OSI(Module Device)
Feb 13 05:12:33.553395 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 05:12:33.553399 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 05:12:33.553404 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 05:12:33.553409 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 13 05:12:33.553414 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 13 05:12:33.553419 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 13 05:12:33.553424 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 05:12:33.553429 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:12:33.553434 kernel: ACPI: SSDT 0xFFFF9A5200213300 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 05:12:33.553439 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 13 05:12:33.553444 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:12:33.553449 kernel: ACPI: SSDT 0xFFFF9A5201AE2400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 05:12:33.553454 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:12:33.553459 kernel: ACPI: SSDT 0xFFFF9A5201A50800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 05:12:33.553464 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:12:33.553468 kernel: ACPI: SSDT 0xFFFF9A5201A57000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 05:12:33.553473 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:12:33.553479 kernel: ACPI: SSDT 0xFFFF9A520014D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 05:12:33.553483 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:12:33.553488 kernel: ACPI: SSDT 0xFFFF9A5201AE5800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 05:12:33.553493 kernel: ACPI: Interpreter enabled
Feb 13 05:12:33.553498 kernel: ACPI: PM: (supports S0 S5)
Feb 13 05:12:33.553503 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 05:12:33.553508 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 05:12:33.553513 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 05:12:33.553517 kernel: HEST: Table parsing has been initialized.
Feb 13 05:12:33.553523 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 05:12:33.553528 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 05:12:33.553533 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 05:12:33.553538 kernel: ACPI: PM: Power Resource [USBC]
Feb 13 05:12:33.553542 kernel: ACPI: PM: Power Resource [V0PR]
Feb 13 05:12:33.553547 kernel: ACPI: PM: Power Resource [V1PR]
Feb 13 05:12:33.553552 kernel: ACPI: PM: Power Resource [V2PR]
Feb 13 05:12:33.553557 kernel: ACPI: PM: Power Resource [WRST]
Feb 13 05:12:33.553562 kernel: ACPI: PM: Power Resource [FN00]
Feb 13 05:12:33.553567 kernel: ACPI: PM: Power Resource [FN01]
Feb 13 05:12:33.553572 kernel: ACPI: PM: Power Resource [FN02]
Feb 13 05:12:33.553577 kernel: ACPI: PM: Power Resource [FN03]
Feb 13 05:12:33.553583 kernel: ACPI: PM: Power Resource [FN04]
Feb 13 05:12:33.553606 kernel: ACPI: PM: Power Resource [PIN]
Feb 13 05:12:33.553611 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 05:12:33.553675 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 05:12:33.553720 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 05:12:33.553763 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 05:12:33.553770 kernel: PCI host bridge to bus 0000:00
Feb 13 05:12:33.553813 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 05:12:33.553850 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 05:12:33.553887 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 05:12:33.553923 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 13 05:12:33.553959 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 05:12:33.553998 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 05:12:33.554048 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 05:12:33.554098 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 05:12:33.554141 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 05:12:33.554186 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 05:12:33.554228 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 13 05:12:33.554276 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 05:12:33.554319 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 13 05:12:33.554364 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 05:12:33.554406 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 13 05:12:33.554449 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 05:12:33.554494 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 05:12:33.554538 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 13 05:12:33.554578 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 13 05:12:33.554627 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 05:12:33.554668 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 05:12:33.554716 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 05:12:33.554757 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 05:12:33.554802 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 05:12:33.554845 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 13 05:12:33.554886 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 05:12:33.554930 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 05:12:33.554971 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 13 05:12:33.555012 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 05:12:33.555057 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 05:12:33.555100 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 13 05:12:33.555140 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 05:12:33.555184 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 05:12:33.555225 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 13 05:12:33.555265 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 13 05:12:33.555306 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 13 05:12:33.555346 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 13 05:12:33.555394 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 13 05:12:33.555436 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 13 05:12:33.555478 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 05:12:33.555523 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 05:12:33.555565 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 05:12:33.555616 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 05:12:33.555657 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 05:12:33.555705 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 05:12:33.555746 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 05:12:33.555792 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 05:12:33.555834 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 05:12:33.555880 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 13 05:12:33.555923 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 13 05:12:33.555967 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 05:12:33.556009 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 05:12:33.556055 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 05:12:33.556103 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 05:12:33.556144 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 13 05:12:33.556185 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 05:12:33.556231 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 05:12:33.556272 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 05:12:33.556320 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 05:12:33.556363 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 05:12:33.556409 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 13 05:12:33.556451 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 13 05:12:33.556495 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 05:12:33.556537 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 05:12:33.556586 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 05:12:33.556630 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 05:12:33.556677 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 13 05:12:33.556719 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 13 05:12:33.556761 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 05:12:33.556803 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 05:12:33.556859 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 05:12:33.556899 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 05:12:33.556940 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 05:12:33.556981 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 13 05:12:33.557029 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Feb 13 05:12:33.557073 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Feb 13 05:12:33.557115 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 13 05:12:33.557157 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Feb 13 05:12:33.557198 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Feb 13 05:12:33.557238 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 13 05:12:33.557279 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 05:12:33.557321 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 13 05:12:33.557369 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 13 05:12:33.557411 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Feb 13 05:12:33.557454 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 13 05:12:33.557495 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Feb 13 05:12:33.557537 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 13 05:12:33.557577 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 13 05:12:33.557660 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 05:12:33.557702 kernel: pci 0000:00:1b.5: bridge window [mem
0x95300000-0x953fffff] Feb 13 05:12:33.557746 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 05:12:33.557791 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 05:12:33.557835 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 05:12:33.557877 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 05:12:33.557918 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 05:12:33.557960 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 05:12:33.558001 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 05:12:33.558044 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 05:12:33.558091 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 05:12:33.558139 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 05:12:33.558184 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 05:12:33.558229 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 05:12:33.558273 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 05:12:33.558318 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 05:12:33.558398 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 05:12:33.558464 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 05:12:33.558506 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 05:12:33.558549 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 05:12:33.558618 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 05:12:33.558625 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 05:12:33.558631 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 05:12:33.558638 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 05:12:33.558643 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 05:12:33.558668 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 
0 Feb 13 05:12:33.558673 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 05:12:33.558678 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 05:12:33.558683 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 05:12:33.558688 kernel: iommu: Default domain type: Translated Feb 13 05:12:33.558694 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 05:12:33.558737 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 05:12:33.558783 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 05:12:33.558828 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 05:12:33.558835 kernel: vgaarb: loaded Feb 13 05:12:33.558841 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 05:12:33.558846 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 05:12:33.558851 kernel: PTP clock support registered Feb 13 05:12:33.558856 kernel: PCI: Using ACPI for IRQ routing Feb 13 05:12:33.558861 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 05:12:33.558866 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 05:12:33.558873 kernel: e820: reserve RAM buffer [mem 0x819e3000-0x83ffffff] Feb 13 05:12:33.558878 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 05:12:33.558883 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 05:12:33.558888 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 05:12:33.558893 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 05:12:33.558898 kernel: clocksource: Switched to clocksource tsc-early Feb 13 05:12:33.558903 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 05:12:33.558908 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 05:12:33.558914 kernel: pnp: PnP ACPI init Feb 13 05:12:33.558956 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 05:12:33.558998 
kernel: pnp 00:02: [dma 0 disabled] Feb 13 05:12:33.559038 kernel: pnp 00:03: [dma 0 disabled] Feb 13 05:12:33.559080 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 05:12:33.559117 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 05:12:33.559157 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 05:12:33.559199 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 05:12:33.559236 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 05:12:33.559273 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 05:12:33.559309 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 05:12:33.559345 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 05:12:33.559381 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 05:12:33.559417 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 05:12:33.559456 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 05:12:33.559495 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 05:12:33.559533 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 05:12:33.559568 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 05:12:33.559635 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 05:12:33.559692 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 05:12:33.559728 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 05:12:33.559767 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 05:12:33.559807 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 05:12:33.559814 kernel: pnp: PnP ACPI: found 10 devices Feb 13 05:12:33.559820 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 05:12:33.559825 
kernel: NET: Registered PF_INET protocol family Feb 13 05:12:33.559831 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 05:12:33.559836 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 05:12:33.559841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 05:12:33.559848 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 05:12:33.559853 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 05:12:33.559859 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 05:12:33.559864 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 05:12:33.559869 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 05:12:33.559874 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 05:12:33.559880 kernel: NET: Registered PF_XDP protocol family Feb 13 05:12:33.559921 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 05:12:33.559964 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 05:12:33.560004 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 05:12:33.560046 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 05:12:33.560089 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 05:12:33.560130 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 05:12:33.560171 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 05:12:33.560211 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 05:12:33.560252 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 05:12:33.560295 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 
05:12:33.560335 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 05:12:33.560376 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 05:12:33.560416 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 05:12:33.560457 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 05:12:33.560499 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 05:12:33.560541 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 05:12:33.560584 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 05:12:33.560650 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 05:12:33.560713 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 05:12:33.560755 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 05:12:33.560797 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 05:12:33.560837 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 05:12:33.560878 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 05:12:33.560920 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 05:12:33.560956 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 05:12:33.560994 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 05:12:33.561029 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 05:12:33.561064 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 05:12:33.561099 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 05:12:33.561134 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 05:12:33.561177 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 05:12:33.561217 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 05:12:33.561260 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 05:12:33.561299 kernel: pci_bus 
0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 05:12:33.561341 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 05:12:33.561379 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 05:12:33.561420 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 05:12:33.561461 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 05:12:33.561501 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 05:12:33.561542 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 05:12:33.561549 kernel: PCI: CLS 64 bytes, default 64 Feb 13 05:12:33.561556 kernel: DMAR: No ATSR found Feb 13 05:12:33.561561 kernel: DMAR: No SATC found Feb 13 05:12:33.561566 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 05:12:33.561635 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 05:12:33.561700 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 05:12:33.561741 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 05:12:33.561782 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 05:12:33.561822 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 05:12:33.561862 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 05:12:33.561902 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 05:12:33.561942 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 05:12:33.561983 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 05:12:33.562025 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 05:12:33.562066 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 05:12:33.562106 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 05:12:33.562147 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 05:12:33.562187 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 05:12:33.562228 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 05:12:33.562269 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 05:12:33.562309 kernel: pci 0000:00:1c.3: 
Adding to iommu group 12 Feb 13 05:12:33.562351 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 05:12:33.562391 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 05:12:33.562432 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 05:12:33.562472 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 05:12:33.562514 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 05:12:33.562556 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 05:12:33.562619 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 05:12:33.562680 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 05:12:33.562725 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 05:12:33.562770 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 05:12:33.562777 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 05:12:33.562783 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 05:12:33.562788 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 05:12:33.562794 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 05:12:33.562799 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 05:12:33.562804 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 05:12:33.562810 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 05:12:33.562853 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 05:12:33.562861 kernel: Initialise system trusted keyrings Feb 13 05:12:33.562867 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 05:12:33.562872 kernel: Key type asymmetric registered Feb 13 05:12:33.562877 kernel: Asymmetric key parser 'x509' registered Feb 13 05:12:33.562882 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 05:12:33.562887 kernel: io scheduler mq-deadline registered Feb 13 05:12:33.562894 kernel: io scheduler kyber 
registered Feb 13 05:12:33.562899 kernel: io scheduler bfq registered Feb 13 05:12:33.562940 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 05:12:33.562981 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 05:12:33.563023 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 05:12:33.563063 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 05:12:33.563104 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 05:12:33.563145 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 05:12:33.563191 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 05:12:33.563199 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 05:12:33.563204 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 05:12:33.563210 kernel: pstore: Registered erst as persistent store backend Feb 13 05:12:33.563215 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 05:12:33.563220 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 05:12:33.563225 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 05:12:33.563230 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 05:12:33.563237 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 05:12:33.563281 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 05:12:33.563289 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 05:12:33.563325 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 05:12:33.563364 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 05:12:33.563400 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T05:12:32 UTC (1707801152) Feb 13 05:12:33.563437 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 05:12:33.563445 kernel: fail to initialize ptp_kvm Feb 13 05:12:33.563451 kernel: intel_pstate: Intel P-state driver initializing Feb 13 05:12:33.563457 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 05:12:33.563462 kernel: intel_pstate: HWP enabled Feb 13 05:12:33.563467 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 05:12:33.563473 kernel: vesafb: scrolling: redraw Feb 13 05:12:33.563478 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 05:12:33.563483 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000055470b39, using 768k, total 768k Feb 13 05:12:33.563488 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 05:12:33.563494 kernel: fb0: VESA VGA frame buffer device Feb 13 05:12:33.563500 kernel: NET: Registered PF_INET6 protocol family Feb 13 05:12:33.563505 kernel: Segment Routing with IPv6 Feb 13 05:12:33.563510 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 05:12:33.563516 kernel: NET: Registered PF_PACKET protocol family Feb 13 05:12:33.563521 kernel: Key type dns_resolver registered Feb 13 05:12:33.563526 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 05:12:33.563531 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 13 05:12:33.563536 kernel: IPI shorthand broadcast: enabled Feb 13 05:12:33.563541 kernel: sched_clock: Marking stable (1730466645, 1339451584)->(4490340045, -1420421816) Feb 13 05:12:33.563547 kernel: registered taskstats version 1 Feb 13 05:12:33.563553 kernel: Loading compiled-in X.509 certificates Feb 13 05:12:33.563558 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 05:12:33.563563 kernel: Key type .fscrypt registered Feb 13 05:12:33.563568 kernel: Key type fscrypt-provisioning registered Feb 13 05:12:33.563574 kernel: pstore: Using crash dump compression: deflate Feb 13 05:12:33.563579 kernel: ima: Allocated hash algorithm: sha1 Feb 13 05:12:33.563608 kernel: ima: No architecture policies found Feb 13 05:12:33.563613 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 05:12:33.563619 kernel: Write protecting the kernel read-only data: 28672k Feb 13 05:12:33.563625 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 05:12:33.563650 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 05:12:33.563655 kernel: Run /init as init process Feb 13 05:12:33.563660 kernel: with arguments: Feb 13 05:12:33.563665 kernel: /init Feb 13 05:12:33.563670 kernel: with environment: Feb 13 05:12:33.563675 kernel: HOME=/ Feb 13 05:12:33.563681 kernel: TERM=linux Feb 13 05:12:33.563686 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 05:12:33.563693 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 05:12:33.563699 systemd[1]: Detected architecture x86-64. Feb 13 05:12:33.563705 systemd[1]: Running in initrd. 
Feb 13 05:12:33.563710 systemd[1]: No hostname configured, using default hostname. Feb 13 05:12:33.563715 systemd[1]: Hostname set to . Feb 13 05:12:33.563721 systemd[1]: Initializing machine ID from random generator. Feb 13 05:12:33.563727 systemd[1]: Queued start job for default target initrd.target. Feb 13 05:12:33.563732 systemd[1]: Started systemd-ask-password-console.path. Feb 13 05:12:33.563738 systemd[1]: Reached target cryptsetup.target. Feb 13 05:12:33.563743 systemd[1]: Reached target paths.target. Feb 13 05:12:33.563748 systemd[1]: Reached target slices.target. Feb 13 05:12:33.563753 systemd[1]: Reached target swap.target. Feb 13 05:12:33.563759 systemd[1]: Reached target timers.target. Feb 13 05:12:33.563764 systemd[1]: Listening on iscsid.socket. Feb 13 05:12:33.563770 systemd[1]: Listening on iscsiuio.socket. Feb 13 05:12:33.563776 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 05:12:33.563781 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 05:12:33.563787 systemd[1]: Listening on systemd-journald.socket. Feb 13 05:12:33.563792 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Feb 13 05:12:33.563798 systemd[1]: Listening on systemd-networkd.socket. Feb 13 05:12:33.563803 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Feb 13 05:12:33.563808 kernel: clocksource: Switched to clocksource tsc Feb 13 05:12:33.563814 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 05:12:33.563820 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 05:12:33.563825 systemd[1]: Reached target sockets.target. Feb 13 05:12:33.563831 systemd[1]: Starting kmod-static-nodes.service... Feb 13 05:12:33.563836 systemd[1]: Finished network-cleanup.service. Feb 13 05:12:33.563842 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 05:12:33.563847 systemd[1]: Starting systemd-journald.service... 
Feb 13 05:12:33.563852 systemd[1]: Starting systemd-modules-load.service... Feb 13 05:12:33.563860 systemd-journald[269]: Journal started Feb 13 05:12:33.563886 systemd-journald[269]: Runtime Journal (/run/log/journal/4709d5d26f3240719b45fe8a0bbcd20e) is 8.0M, max 640.1M, 632.1M free. Feb 13 05:12:33.567159 systemd-modules-load[270]: Inserted module 'overlay' Feb 13 05:12:33.596867 kernel: audit: type=1334 audit(1707801153.573:2): prog-id=6 op=LOAD Feb 13 05:12:33.596877 systemd[1]: Starting systemd-resolved.service... Feb 13 05:12:33.573000 audit: BPF prog-id=6 op=LOAD Feb 13 05:12:33.640586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 05:12:33.640615 systemd[1]: Starting systemd-vconsole-setup.service... Feb 13 05:12:33.673644 kernel: Bridge firewalling registered Feb 13 05:12:33.673661 systemd[1]: Started systemd-journald.service. Feb 13 05:12:33.688132 systemd-modules-load[270]: Inserted module 'br_netfilter' Feb 13 05:12:33.738382 kernel: audit: type=1130 audit(1707801153.696:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:33.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:33.694113 systemd-resolved[272]: Positive Trust Anchors: Feb 13 05:12:33.795367 kernel: SCSI subsystem initialized Feb 13 05:12:33.795377 kernel: audit: type=1130 audit(1707801153.750:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:12:33.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:33.694120 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 05:12:33.905029 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 05:12:33.905044 kernel: audit: type=1130 audit(1707801153.821:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:33.905051 kernel: device-mapper: uevent: version 1.0.3 Feb 13 05:12:33.905058 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 13 05:12:33.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:33.694139 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 05:12:34.011811 kernel: audit: type=1130 audit(1707801153.922:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:12:33.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:33.695623 systemd-resolved[272]: Defaulting to hostname 'linux'. Feb 13 05:12:34.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:33.696857 systemd[1]: Started systemd-resolved.service. Feb 13 05:12:34.119134 kernel: audit: type=1130 audit(1707801154.019:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:34.119148 kernel: audit: type=1130 audit(1707801154.072:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:34.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:33.750749 systemd[1]: Finished kmod-static-nodes.service. Feb 13 05:12:33.821721 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 05:12:33.918592 systemd-modules-load[270]: Inserted module 'dm_multipath' Feb 13 05:12:33.922844 systemd[1]: Finished systemd-modules-load.service. Feb 13 05:12:34.019946 systemd[1]: Finished systemd-vconsole-setup.service. Feb 13 05:12:34.072844 systemd[1]: Reached target nss-lookup.target. Feb 13 05:12:34.128242 systemd[1]: Starting dracut-cmdline-ask.service... Feb 13 05:12:34.148376 systemd[1]: Starting systemd-sysctl.service... Feb 13 05:12:34.148797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Feb 13 05:12:34.151738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 05:12:34.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:34.152401 systemd[1]: Finished systemd-sysctl.service. Feb 13 05:12:34.201790 kernel: audit: type=1130 audit(1707801154.151:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:34.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:34.213934 systemd[1]: Finished dracut-cmdline-ask.service. Feb 13 05:12:34.279681 kernel: audit: type=1130 audit(1707801154.213:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:34.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:34.271186 systemd[1]: Starting dracut-cmdline.service... 
Feb 13 05:12:34.293671 dracut-cmdline[294]: dracut-dracut-053 Feb 13 05:12:34.293671 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 13 05:12:34.293671 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 05:12:34.360668 kernel: Loading iSCSI transport class v2.0-870. Feb 13 05:12:34.360680 kernel: iscsi: registered transport (tcp) Feb 13 05:12:34.412413 kernel: iscsi: registered transport (qla4xxx) Feb 13 05:12:34.412467 kernel: QLogic iSCSI HBA Driver Feb 13 05:12:34.427845 systemd[1]: Finished dracut-cmdline.service. Feb 13 05:12:34.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:34.438275 systemd[1]: Starting dracut-pre-udev.service... 
Feb 13 05:12:34.494617 kernel: raid6: avx2x4 gen() 48086 MB/s Feb 13 05:12:34.530614 kernel: raid6: avx2x4 xor() 13842 MB/s Feb 13 05:12:34.565614 kernel: raid6: avx2x2 gen() 51124 MB/s Feb 13 05:12:34.600659 kernel: raid6: avx2x2 xor() 32150 MB/s Feb 13 05:12:34.635613 kernel: raid6: avx2x1 gen() 44570 MB/s Feb 13 05:12:34.670655 kernel: raid6: avx2x1 xor() 27926 MB/s Feb 13 05:12:34.704613 kernel: raid6: sse2x4 gen() 21355 MB/s Feb 13 05:12:34.738658 kernel: raid6: sse2x4 xor() 11975 MB/s Feb 13 05:12:34.772613 kernel: raid6: sse2x2 gen() 21679 MB/s Feb 13 05:12:34.806658 kernel: raid6: sse2x2 xor() 13450 MB/s Feb 13 05:12:34.840616 kernel: raid6: sse2x1 gen() 18263 MB/s Feb 13 05:12:34.892546 kernel: raid6: sse2x1 xor() 8936 MB/s Feb 13 05:12:34.892562 kernel: raid6: using algorithm avx2x2 gen() 51124 MB/s Feb 13 05:12:34.892569 kernel: raid6: .... xor() 32150 MB/s, rmw enabled Feb 13 05:12:34.910795 kernel: raid6: using avx2x2 recovery algorithm Feb 13 05:12:34.956621 kernel: xor: automatically using best checksumming function avx Feb 13 05:12:35.035592 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 05:12:35.040917 systemd[1]: Finished dracut-pre-udev.service. Feb 13 05:12:35.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:35.050000 audit: BPF prog-id=7 op=LOAD Feb 13 05:12:35.050000 audit: BPF prog-id=8 op=LOAD Feb 13 05:12:35.051570 systemd[1]: Starting systemd-udevd.service... Feb 13 05:12:35.060849 systemd-udevd[475]: Using default interface naming scheme 'v252'. Feb 13 05:12:35.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:35.066810 systemd[1]: Started systemd-udevd.service. 
Feb 13 05:12:35.107711 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Feb 13 05:12:35.083392 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 05:12:35.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:35.109874 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 05:12:35.125306 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 05:12:35.195058 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 05:12:35.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:35.224599 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 05:12:35.244598 kernel: ACPI: bus type USB registered Feb 13 05:12:35.244666 kernel: libata version 3.00 loaded. Feb 13 05:12:35.244677 kernel: usbcore: registered new interface driver usbfs Feb 13 05:12:35.280935 kernel: usbcore: registered new interface driver hub Feb 13 05:12:35.280980 kernel: usbcore: registered new device driver usb Feb 13 05:12:35.298604 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 05:12:35.331933 kernel: AES CTR mode by8 optimization enabled Feb 13 05:12:35.332589 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 05:12:35.366422 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Feb 13 05:12:35.368588 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 05:12:35.404934 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 05:12:35.405012 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 05:12:35.405066 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 05:12:35.405119 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 05:12:35.413598 kernel: pps pps0: new PPS source ptp0 Feb 13 05:12:35.413674 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 05:12:35.413735 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 05:12:35.413788 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:24 Feb 13 05:12:35.413837 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 05:12:35.413886 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 05:12:35.443616 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 05:12:35.458587 kernel: pps pps1: new PPS source ptp1 Feb 13 05:12:35.458658 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 05:12:35.458715 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 05:12:35.458770 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 05:12:35.462588 kernel: scsi host0: ahci Feb 13 05:12:35.462664 kernel: scsi host1: ahci Feb 13 05:12:35.462805 kernel: scsi host2: ahci Feb 13 05:12:35.462924 kernel: scsi host3: ahci Feb 13 05:12:35.462974 kernel: scsi host4: ahci Feb 13 05:12:35.463027 kernel: scsi host5: ahci Feb 13 05:12:35.463076 kernel: scsi host6: ahci Feb 13 05:12:35.463124 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Feb 13 05:12:35.463132 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Feb 13 05:12:35.463140 kernel: ata3: SATA max 
UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Feb 13 05:12:35.463147 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Feb 13 05:12:35.463154 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Feb 13 05:12:35.463160 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Feb 13 05:12:35.463166 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Feb 13 05:12:35.506901 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 05:12:35.506972 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 05:12:35.507026 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 05:12:35.521718 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 05:12:35.538777 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:25 Feb 13 05:12:35.570320 kernel: hub 1-0:1.0: USB hub found Feb 13 05:12:35.584337 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 05:12:35.584406 kernel: hub 1-0:1.0: 16 ports detected Feb 13 05:12:35.598891 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 05:12:35.627657 kernel: hub 2-0:1.0: USB hub found Feb 13 05:12:35.731643 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 05:12:35.731715 kernel: hub 2-0:1.0: 10 ports detected Feb 13 05:12:35.769602 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 05:12:35.769692 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 05:12:35.769701 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 05:12:35.769708 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 05:12:35.770629 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 05:12:35.770638 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 05:12:35.770647 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Feb 13 05:12:35.770653 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU004, max UDMA/133 Feb 13 05:12:35.771629 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 05:12:35.771639 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 05:12:35.774584 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 05:12:35.774594 kernel: usb: port power management may be unreliable Feb 13 05:12:35.969135 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 05:12:35.969161 kernel: ata1.00: Features: NCQ-prio Feb 13 05:12:35.999644 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 05:12:35.999721 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 05:12:36.039740 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 05:12:36.039815 kernel: ata2.00: Features: NCQ-prio Feb 13 05:12:36.039824 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 05:12:36.071629 kernel: ata1.00: configured for UDMA/133 Feb 13 
05:12:36.139623 kernel: hub 1-14:1.0: USB hub found Feb 13 05:12:36.139704 kernel: ata2.00: configured for UDMA/133 Feb 13 05:12:36.139714 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U004 PQ: 0 ANSI: 5 Feb 13 05:12:36.167743 kernel: hub 1-14:1.0: 4 ports detected Feb 13 05:12:36.168629 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Feb 13 05:12:36.322601 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 05:12:36.341540 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:12:36.341557 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 05:12:36.355449 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 05:12:36.355527 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 05:12:36.355597 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 05:12:36.355686 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 05:12:36.355751 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 05:12:36.355819 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 05:12:36.355871 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 05:12:36.355923 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 05:12:36.355930 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 05:12:36.355938 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 05:12:36.408144 kernel: port_module: 9 callbacks suppressed Feb 13 05:12:36.408160 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 05:12:36.408224 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 05:12:36.485568 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 05:12:36.485599 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 05:12:36.501588 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 05:12:36.501690 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) 
RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 05:12:36.530880 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 05:12:36.610647 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 05:12:36.610723 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 05:12:36.644636 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:12:36.690548 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 05:12:36.690565 kernel: GPT:9289727 != 937703087 Feb 13 05:12:36.690573 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 05:12:36.690645 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 05:12:36.690655 kernel: GPT:9289727 != 937703087 Feb 13 05:12:36.690662 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 05:12:36.690668 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 05:12:36.785025 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:12:36.785040 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 05:12:36.818615 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 13 05:12:36.826307 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 05:12:36.905806 kernel: usbcore: registered new interface driver usbhid Feb 13 05:12:36.905821 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (523) Feb 13 05:12:36.905828 kernel: usbhid: USB HID core driver Feb 13 05:12:36.905835 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 05:12:36.871473 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 13 05:12:36.925762 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 13 05:12:36.921696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Feb 13 05:12:36.950738 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 13 05:12:37.028762 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 05:12:37.028850 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 05:12:37.028858 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 05:12:36.965971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 05:12:37.083699 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:12:37.066264 systemd[1]: Starting disk-uuid.service... Feb 13 05:12:37.118673 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 05:12:37.118684 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:12:37.118725 disk-uuid[688]: Primary Header is updated. Feb 13 05:12:37.118725 disk-uuid[688]: Secondary Entries is updated. Feb 13 05:12:37.118725 disk-uuid[688]: Secondary Header is updated. Feb 13 05:12:37.207662 kernel: GPT:disk_guids don't match. Feb 13 05:12:37.207674 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 05:12:37.207681 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 05:12:37.207687 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:12:37.207694 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 05:12:38.161952 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:12:38.181567 disk-uuid[689]: The operation has completed successfully. Feb 13 05:12:38.190675 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 13 05:12:38.217698 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 05:12:38.315132 kernel: audit: type=1130 audit(1707801158.225:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:12:38.315148 kernel: audit: type=1131 audit(1707801158.225:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.217743 systemd[1]: Finished disk-uuid.service. Feb 13 05:12:38.344629 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 05:12:38.228957 systemd[1]: Starting verity-setup.service... Feb 13 05:12:38.378708 systemd[1]: Found device dev-mapper-usr.device. Feb 13 05:12:38.388810 systemd[1]: Mounting sysusr-usr.mount... Feb 13 05:12:38.394940 systemd[1]: Finished verity-setup.service. Feb 13 05:12:38.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.454588 kernel: audit: type=1130 audit(1707801158.405:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.510586 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 13 05:12:38.510758 systemd[1]: Mounted sysusr-usr.mount. Feb 13 05:12:38.518888 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 13 05:12:38.519306 systemd[1]: Starting ignition-setup.service... 
Feb 13 05:12:38.615749 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 13 05:12:38.615768 kernel: BTRFS info (device sdb6): using free space tree Feb 13 05:12:38.615775 kernel: BTRFS info (device sdb6): has skinny extents Feb 13 05:12:38.615782 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 13 05:12:38.527096 systemd[1]: Starting parse-ip-for-networkd.service... Feb 13 05:12:38.674605 kernel: audit: type=1130 audit(1707801158.624:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.609467 systemd[1]: Finished parse-ip-for-networkd.service. Feb 13 05:12:38.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.625242 systemd[1]: Finished ignition-setup.service. Feb 13 05:12:38.766192 kernel: audit: type=1130 audit(1707801158.683:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.766220 kernel: audit: type=1334 audit(1707801158.742:24): prog-id=9 op=LOAD Feb 13 05:12:38.742000 audit: BPF prog-id=9 op=LOAD Feb 13 05:12:38.684408 systemd[1]: Starting ignition-fetch-offline.service... Feb 13 05:12:38.744393 systemd[1]: Starting systemd-networkd.service... 
Feb 13 05:12:38.785618 systemd-networkd[870]: lo: Link UP Feb 13 05:12:38.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.785620 systemd-networkd[870]: lo: Gained carrier Feb 13 05:12:38.865785 kernel: audit: type=1130 audit(1707801158.798:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.865683 ignition[866]: Ignition 2.14.0 Feb 13 05:12:38.785903 systemd-networkd[870]: Enumeration completed Feb 13 05:12:38.935764 kernel: audit: type=1130 audit(1707801158.885:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.865688 ignition[866]: Stage: fetch-offline Feb 13 05:12:38.785941 systemd[1]: Started systemd-networkd.service. Feb 13 05:12:39.029675 kernel: audit: type=1130 audit(1707801158.957:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:39.029764 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 05:12:38.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:12:38.865718 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 05:12:39.064813 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Feb 13 05:12:39.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:39.064921 iscsid[889]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 05:12:39.064921 iscsid[889]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 05:12:39.064921 iscsid[889]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 13 05:12:39.064921 iscsid[889]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 13 05:12:39.064921 iscsid[889]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 13 05:12:39.064921 iscsid[889]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 05:12:39.064921 iscsid[889]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 05:12:39.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.786521 systemd-networkd[870]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 05:12:38.865733 ignition[866]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 05:12:38.798747 systemd[1]: Reached target network.target. Feb 13 05:12:39.237623 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 05:12:38.868363 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 05:12:38.858163 systemd[1]: Starting iscsiuio.service... Feb 13 05:12:39.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:38.868427 ignition[866]: parsed url from cmdline: "" Feb 13 05:12:38.872743 systemd[1]: Started iscsiuio.service. Feb 13 05:12:38.868429 ignition[866]: no config URL provided Feb 13 05:12:38.886186 systemd[1]: Starting iscsid.service... Feb 13 05:12:38.868432 ignition[866]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 05:12:38.942443 unknown[866]: fetched base config from "system" Feb 13 05:12:38.872585 ignition[866]: parsing config with SHA512: 04dd78a8e67a9e49d9beca65f8318229fb0fd957d4b91e215140b9d06295fd2af34fc2b7c4087300187b2fded6547807e95181d5520475fde7de3c30c6ba88e9 Feb 13 05:12:38.942447 unknown[866]: fetched user config from "system" Feb 13 05:12:38.942814 ignition[866]: fetch-offline: fetch-offline passed Feb 13 05:12:38.943731 systemd[1]: Started iscsid.service. Feb 13 05:12:38.942817 ignition[866]: POST message to Packet Timeline Feb 13 05:12:38.957811 systemd[1]: Finished ignition-fetch-offline.service. Feb 13 05:12:38.942821 ignition[866]: POST Status error: resource requires networking Feb 13 05:12:39.031257 systemd-networkd[870]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 05:12:38.942850 ignition[866]: Ignition finished successfully Feb 13 05:12:39.045136 systemd[1]: Starting dracut-initqueue.service... Feb 13 05:12:39.076836 ignition[899]: Ignition 2.14.0 Feb 13 05:12:39.071688 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 05:12:39.076839 ignition[899]: Stage: kargs Feb 13 05:12:39.072059 systemd[1]: Starting ignition-kargs.service... Feb 13 05:12:39.076896 ignition[899]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 05:12:39.090822 systemd[1]: Finished dracut-initqueue.service. Feb 13 05:12:39.076905 ignition[899]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 05:12:39.143849 systemd[1]: Reached target remote-fs-pre.target. Feb 13 05:12:39.078160 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 05:12:39.160751 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 05:12:39.079737 ignition[899]: kargs: kargs passed Feb 13 05:12:39.191924 systemd[1]: Reached target remote-fs.target. Feb 13 05:12:39.079740 ignition[899]: POST message to Packet Timeline Feb 13 05:12:39.211201 systemd[1]: Starting dracut-pre-mount.service... Feb 13 05:12:39.079750 ignition[899]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 05:12:39.238802 systemd-networkd[870]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 05:12:39.082704 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45799->[::1]:53: read: connection refused Feb 13 05:12:39.249783 systemd[1]: Finished dracut-pre-mount.service. 
Feb 13 05:12:39.283256 ignition[899]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 05:12:39.266836 systemd-networkd[870]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 05:12:39.283547 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54845->[::1]:53: read: connection refused Feb 13 05:12:39.295085 systemd-networkd[870]: enp1s0f1np1: Link UP Feb 13 05:12:39.295197 systemd-networkd[870]: enp1s0f1np1: Gained carrier Feb 13 05:12:39.311847 systemd-networkd[870]: enp1s0f0np0: Link UP Feb 13 05:12:39.311985 systemd-networkd[870]: eno2: Link UP Feb 13 05:12:39.312110 systemd-networkd[870]: eno1: Link UP Feb 13 05:12:39.684580 ignition[899]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 05:12:39.685894 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38926->[::1]:53: read: connection refused Feb 13 05:12:40.058980 systemd-networkd[870]: enp1s0f0np0: Gained carrier Feb 13 05:12:40.068809 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Feb 13 05:12:40.094792 systemd-networkd[870]: enp1s0f0np0: DHCPv4 address 147.75.49.59/31, gateway 147.75.49.58 acquired from 145.40.83.140 Feb 13 05:12:40.484194 systemd-networkd[870]: enp1s0f1np1: Gained IPv6LL Feb 13 05:12:40.487061 ignition[899]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 05:12:40.488286 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49677->[::1]:53: read: connection refused Feb 13 05:12:41.252176 systemd-networkd[870]: enp1s0f0np0: Gained IPv6LL Feb 13 05:12:42.089870 ignition[899]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 05:12:42.091474 ignition[899]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on 
[::1]:53: read udp [::1]:41001->[::1]:53: read: connection refused Feb 13 05:12:45.294624 ignition[899]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 05:12:45.329773 ignition[899]: GET result: OK Feb 13 05:12:45.519916 ignition[899]: Ignition finished successfully Feb 13 05:12:45.524240 systemd[1]: Finished ignition-kargs.service. Feb 13 05:12:45.610633 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 13 05:12:45.610670 kernel: audit: type=1130 audit(1707801165.534:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:45.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:45.544085 ignition[917]: Ignition 2.14.0 Feb 13 05:12:45.536887 systemd[1]: Starting ignition-disks.service... Feb 13 05:12:45.544089 ignition[917]: Stage: disks Feb 13 05:12:45.544146 ignition[917]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 05:12:45.544156 ignition[917]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 05:12:45.545513 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 05:12:45.547192 ignition[917]: disks: disks passed Feb 13 05:12:45.547196 ignition[917]: POST message to Packet Timeline Feb 13 05:12:45.547207 ignition[917]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 05:12:45.570889 ignition[917]: GET result: OK Feb 13 05:12:45.825008 ignition[917]: Ignition finished successfully Feb 13 05:12:45.828144 systemd[1]: Finished ignition-disks.service. 
Feb 13 05:12:45.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:45.840203 systemd[1]: Reached target initrd-root-device.target. Feb 13 05:12:45.916799 kernel: audit: type=1130 audit(1707801165.839:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:45.902804 systemd[1]: Reached target local-fs-pre.target. Feb 13 05:12:45.902912 systemd[1]: Reached target local-fs.target. Feb 13 05:12:45.916920 systemd[1]: Reached target sysinit.target. Feb 13 05:12:45.943818 systemd[1]: Reached target basic.target. Feb 13 05:12:45.957522 systemd[1]: Starting systemd-fsck-root.service... Feb 13 05:12:45.976685 systemd-fsck[932]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 13 05:12:45.987575 systemd[1]: Finished systemd-fsck-root.service. Feb 13 05:12:45.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:46.000873 systemd[1]: Mounting sysroot.mount... Feb 13 05:12:46.076669 kernel: audit: type=1130 audit(1707801165.995:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:46.076683 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 05:12:46.086573 systemd[1]: Mounted sysroot.mount. Feb 13 05:12:46.093803 systemd[1]: Reached target initrd-root-fs.target. Feb 13 05:12:46.115450 systemd[1]: Mounting sysroot-usr.mount... Feb 13 05:12:46.123417 systemd[1]: Starting flatcar-metadata-hostname.service... 
Feb 13 05:12:46.139523 systemd[1]: Starting flatcar-static-network.service...
Feb 13 05:12:46.153813 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 05:12:46.153927 systemd[1]: Reached target ignition-diskful.target.
Feb 13 05:12:46.172805 systemd[1]: Mounted sysroot-usr.mount.
Feb 13 05:12:46.196051 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 13 05:12:46.325497 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (943)
Feb 13 05:12:46.325518 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 05:12:46.325527 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 05:12:46.325535 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 05:12:46.325541 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 05:12:46.206986 systemd[1]: Starting initrd-setup-root.service...
Feb 13 05:12:46.387587 kernel: audit: type=1130 audit(1707801166.334:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.387626 coreos-metadata[939]: Feb 13 05:12:46.256 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 05:12:46.409687 coreos-metadata[940]: Feb 13 05:12:46.256 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 05:12:46.409687 coreos-metadata[940]: Feb 13 05:12:46.405 INFO Fetch successful
Feb 13 05:12:46.429732 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 05:12:46.251827 systemd[1]: Finished initrd-setup-root.service.
Feb 13 05:12:46.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.519924 coreos-metadata[939]: Feb 13 05:12:46.405 INFO Fetch successful
Feb 13 05:12:46.519924 coreos-metadata[939]: Feb 13 05:12:46.423 INFO wrote hostname ci-3510.3.2-a-69b5ddf616 to /sysroot/etc/hostname
Feb 13 05:12:46.682676 kernel: audit: type=1130 audit(1707801166.463:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.682694 kernel: audit: type=1130 audit(1707801166.528:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.682703 kernel: audit: type=1131 audit(1707801166.528:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.682710 kernel: audit: type=1130 audit(1707801166.650:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.682798 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory
Feb 13 05:12:46.335850 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 05:12:46.726854 initrd-setup-root[966]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 05:12:46.397263 systemd[1]: Starting ignition-mount.service...
Feb 13 05:12:46.743823 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 05:12:46.418046 systemd[1]: Starting sysroot-boot.service...
Feb 13 05:12:46.760742 ignition[1017]: INFO : Ignition 2.14.0
Feb 13 05:12:46.760742 ignition[1017]: INFO : Stage: mount
Feb 13 05:12:46.760742 ignition[1017]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 05:12:46.760742 ignition[1017]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 05:12:46.760742 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 05:12:46.760742 ignition[1017]: INFO : mount: mount passed
Feb 13 05:12:46.760742 ignition[1017]: INFO : POST message to Packet Timeline
Feb 13 05:12:46.760742 ignition[1017]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 05:12:46.760742 ignition[1017]: INFO : GET result: OK
Feb 13 05:12:47.033671 kernel: audit: type=1130 audit(1707801166.813:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:47.033763 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1032)
Feb 13 05:12:47.033771 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 05:12:47.033778 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 05:12:47.033785 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 05:12:47.033792 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 05:12:46.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:46.445559 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 13 05:12:47.051707 ignition[1017]: INFO : Ignition finished successfully
Feb 13 05:12:46.445602 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 13 05:12:46.445910 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 13 05:12:46.464010 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 13 05:12:47.088762 ignition[1051]: INFO : Ignition 2.14.0
Feb 13 05:12:47.088762 ignition[1051]: INFO : Stage: files
Feb 13 05:12:47.088762 ignition[1051]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 05:12:47.088762 ignition[1051]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 05:12:47.088762 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 05:12:47.088762 ignition[1051]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 05:12:47.088762 ignition[1051]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 05:12:47.088762 ignition[1051]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 05:12:47.088762 ignition[1051]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 05:12:47.088762 ignition[1051]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 05:12:47.088762 ignition[1051]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 05:12:47.088762 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 05:12:47.088762 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 05:12:46.464067 systemd[1]: Finished flatcar-static-network.service.
Feb 13 05:12:46.529255 systemd[1]: Finished sysroot-boot.service.
Feb 13 05:12:46.793216 systemd[1]: Finished ignition-mount.service.
Feb 13 05:12:46.816164 systemd[1]: Starting ignition-files.service...
Feb 13 05:12:46.880468 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 13 05:12:47.028747 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 05:12:47.052481 unknown[1051]: wrote ssh authorized keys file for user: core
Feb 13 05:12:47.601352 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 05:12:47.668392 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 05:12:47.685893 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 13 05:12:47.685893 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 13 05:12:48.177948 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 05:12:48.308860 ignition[1051]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 13 05:12:48.308860 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 13 05:12:48.350819 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 13 05:12:48.350819 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 13 05:12:48.680834 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 05:12:48.787505 ignition[1051]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 13 05:12:48.811890 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 13 05:12:48.811890 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 05:12:48.811890 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 13 05:12:48.861796 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 05:12:49.087366 ignition[1051]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 13 05:12:49.120830 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 05:12:49.120830 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 13 05:12:49.120830 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 13 05:12:49.169751 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 13 05:12:49.571448 ignition[1051]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 13 05:12:49.571448 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 13 05:12:49.620736 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 13 05:12:49.620736 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1
Feb 13 05:12:49.620736 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 13 05:12:49.946812 ignition[1051]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83
Feb 13 05:12:49.946812 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 13 05:12:49.988795 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 05:12:49.988795 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 05:12:49.988795 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 05:12:49.988795 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 05:12:50.398338 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 05:12:50.434050 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 05:12:50.449786 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 13 05:12:50.672891 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1073)
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3380652689"
Feb 13 05:12:50.673007 ignition[1051]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3380652689": device or resource busy
Feb 13 05:12:50.673007 ignition[1051]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3380652689", trying btrfs: device or resource busy
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3380652689"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3380652689"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3380652689"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3380652689"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: op(15): [started] processing unit "packet-phone-home.service"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: op(15): [finished] processing unit "packet-phone-home.service"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 13 05:12:50.673007 ignition[1051]: INFO : files: op(18): [started] processing unit "prepare-helm.service"
Feb 13 05:12:51.316848 kernel: audit: type=1130 audit(1707801170.792:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.316943 kernel: audit: type=1130 audit(1707801170.924:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.316952 kernel: audit: type=1130 audit(1707801170.991:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.316959 kernel: audit: type=1131 audit(1707801170.991:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.316966 kernel: audit: type=1130 audit(1707801171.184:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.316972 kernel: audit: type=1131 audit(1707801171.184:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:50.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:50.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:50.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:50.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(18): [finished] processing unit "prepare-helm.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-critools.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(1f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(20): [started] setting preset to enabled for "packet-phone-home.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: op(20): [finished] setting preset to enabled for "packet-phone-home.service"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 05:12:51.317085 ignition[1051]: INFO : files: files passed
Feb 13 05:12:51.842830 kernel: audit: type=1130 audit(1707801171.383:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.843005 kernel: audit: type=1131 audit(1707801171.541:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:50.777327 systemd[1]: Finished ignition-files.service.
Feb 13 05:12:51.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.919441 ignition[1051]: INFO : POST message to Packet Timeline
Feb 13 05:12:51.919441 ignition[1051]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 05:12:51.919441 ignition[1051]: INFO : GET result: OK
Feb 13 05:12:51.919441 ignition[1051]: INFO : Ignition finished successfully
Feb 13 05:12:52.011860 kernel: audit: type=1131 audit(1707801171.850:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:52.011876 kernel: audit: type=1131 audit(1707801171.935:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:52.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:50.800966 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 13 05:12:52.037824 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 05:12:50.866809 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 13 05:12:50.867170 systemd[1]: Starting ignition-quench.service...
Feb 13 05:12:50.901849 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 13 05:12:52.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:50.925202 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 05:12:52.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:50.925351 systemd[1]: Finished ignition-quench.service.
Feb 13 05:12:52.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:50.991900 systemd[1]: Reached target ignition-complete.target.
Feb 13 05:12:51.126296 systemd[1]: Starting initrd-parse-etc.service...
Feb 13 05:12:52.182898 iscsid[889]: iscsid shutting down.
Feb 13 05:12:52.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:52.197827 ignition[1101]: INFO : Ignition 2.14.0
Feb 13 05:12:52.197827 ignition[1101]: INFO : Stage: umount
Feb 13 05:12:52.197827 ignition[1101]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 05:12:52.197827 ignition[1101]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 05:12:52.197827 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 05:12:52.197827 ignition[1101]: INFO : umount: umount passed
Feb 13 05:12:52.197827 ignition[1101]: INFO : POST message to Packet Timeline
Feb 13 05:12:52.197827 ignition[1101]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 05:12:52.197827 ignition[1101]: INFO : GET result: OK
Feb 13 05:12:52.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:52.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:52.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:52.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:52.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.165402 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 05:12:52.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:52.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.165458 systemd[1]: Finished initrd-parse-etc.service.
Feb 13 05:12:51.184908 systemd[1]: Reached target initrd-fs.target.
Feb 13 05:12:52.389911 ignition[1101]: INFO : Ignition finished successfully
Feb 13 05:12:51.299464 systemd[1]: Reached target initrd.target.
Feb 13 05:12:51.323846 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 13 05:12:52.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.324311 systemd[1]: Starting dracut-pre-pivot.service...
Feb 13 05:12:52.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:51.364069 systemd[1]: Finished dracut-pre-pivot.service.
Feb 13 05:12:52.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:12:52.461000 audit: BPF prog-id=6 op=UNLOAD
Feb 13 05:12:51.385286 systemd[1]: Starting initrd-cleanup.service...
Feb 13 05:12:51.452496 systemd[1]: Stopped target nss-lookup.target.
Feb 13 05:12:52.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.468977 systemd[1]: Stopped target remote-cryptsetup.target. Feb 13 05:12:52.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.494963 systemd[1]: Stopped target timers.target. Feb 13 05:12:52.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.521011 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 05:12:52.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.521142 systemd[1]: Stopped dracut-pre-pivot.service. Feb 13 05:12:51.542281 systemd[1]: Stopped target initrd.target. Feb 13 05:12:52.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.617878 systemd[1]: Stopped target basic.target. Feb 13 05:12:52.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.630943 systemd[1]: Stopped target ignition-complete.target. Feb 13 05:12:52.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 13 05:12:51.649939 systemd[1]: Stopped target ignition-diskful.target. Feb 13 05:12:51.678909 systemd[1]: Stopped target initrd-root-device.target. Feb 13 05:12:51.701108 systemd[1]: Stopped target remote-fs.target. Feb 13 05:12:52.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.724134 systemd[1]: Stopped target remote-fs-pre.target. Feb 13 05:12:51.747162 systemd[1]: Stopped target sysinit.target. Feb 13 05:12:51.768156 systemd[1]: Stopped target local-fs.target. Feb 13 05:12:52.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.789139 systemd[1]: Stopped target local-fs-pre.target. Feb 13 05:12:52.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.813184 systemd[1]: Stopped target swap.target. Feb 13 05:12:52.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.836047 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 05:12:51.836403 systemd[1]: Stopped dracut-pre-mount.service. Feb 13 05:12:52.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.851371 systemd[1]: Stopped target cryptsetup.target. 
Feb 13 05:12:52.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:52.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:51.927795 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 05:12:51.927858 systemd[1]: Stopped dracut-initqueue.service. Feb 13 05:12:51.936044 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 05:12:51.936183 systemd[1]: Stopped ignition-fetch-offline.service. Feb 13 05:12:52.004868 systemd[1]: Stopped target paths.target. Feb 13 05:12:52.026883 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 05:12:52.030803 systemd[1]: Stopped systemd-ask-password-console.path. Feb 13 05:12:52.046990 systemd[1]: Stopped target slices.target. Feb 13 05:12:52.067987 systemd[1]: Stopped target sockets.target. Feb 13 05:12:52.093037 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 05:12:52.093289 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 13 05:12:52.111295 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 05:12:52.111646 systemd[1]: Stopped ignition-files.service. Feb 13 05:12:52.127286 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 05:12:52.127667 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 13 05:12:52.147335 systemd[1]: Stopping ignition-mount.service... Feb 13 05:12:52.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:12:52.159983 systemd[1]: Stopping iscsid.service... Feb 13 05:12:52.174763 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 05:12:52.174899 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 05:12:52.191580 systemd[1]: Stopping sysroot-boot.service... Feb 13 05:12:52.204737 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 05:12:52.204946 systemd[1]: Stopped systemd-udev-trigger.service. Feb 13 05:12:52.221266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 05:12:52.221565 systemd[1]: Stopped dracut-pre-trigger.service. Feb 13 05:12:52.244425 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 05:12:52.246539 systemd[1]: iscsid.service: Deactivated successfully. Feb 13 05:12:52.246871 systemd[1]: Stopped iscsid.service. Feb 13 05:12:52.274282 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 05:12:52.274506 systemd[1]: Stopped sysroot-boot.service. Feb 13 05:12:52.292213 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 05:12:52.292461 systemd[1]: Closed iscsid.socket. Feb 13 05:12:52.306113 systemd[1]: Stopping iscsiuio.service... Feb 13 05:12:52.321432 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 05:12:52.321697 systemd[1]: Stopped iscsiuio.service. Feb 13 05:12:52.338468 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 05:12:52.338709 systemd[1]: Finished initrd-cleanup.service. Feb 13 05:12:52.355052 systemd[1]: Stopped target network.target. Feb 13 05:12:52.368953 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 05:12:52.369050 systemd[1]: Closed iscsiuio.socket. Feb 13 05:12:52.383227 systemd[1]: Stopping systemd-networkd.service... Feb 13 05:12:52.397123 systemd[1]: Stopping systemd-resolved.service... 
Feb 13 05:12:52.400736 systemd-networkd[870]: enp1s0f0np0: DHCPv6 lease lost Feb 13 05:12:52.406759 systemd-networkd[870]: enp1s0f1np1: DHCPv6 lease lost Feb 13 05:12:52.978000 audit: BPF prog-id=9 op=UNLOAD Feb 13 05:12:52.412488 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 05:12:52.412745 systemd[1]: Stopped systemd-resolved.service. Feb 13 05:12:52.428767 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 05:12:52.429009 systemd[1]: Stopped systemd-networkd.service. Feb 13 05:12:52.447220 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 05:12:52.979593 systemd-journald[269]: Received SIGTERM from PID 1 (n/a). Feb 13 05:12:52.447487 systemd[1]: Stopped ignition-mount.service. Feb 13 05:12:52.462284 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 05:12:52.462365 systemd[1]: Closed systemd-networkd.socket. Feb 13 05:12:52.476864 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 05:12:52.477003 systemd[1]: Stopped ignition-disks.service. Feb 13 05:12:52.491917 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 05:12:52.492036 systemd[1]: Stopped ignition-kargs.service. Feb 13 05:12:52.507010 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 05:12:52.507149 systemd[1]: Stopped ignition-setup.service. Feb 13 05:12:52.523017 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 05:12:52.523164 systemd[1]: Stopped initrd-setup-root.service. Feb 13 05:12:52.539709 systemd[1]: Stopping network-cleanup.service... Feb 13 05:12:52.552822 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 05:12:52.553016 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 13 05:12:52.568017 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 05:12:52.568153 systemd[1]: Stopped systemd-sysctl.service. Feb 13 05:12:52.583120 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Feb 13 05:12:52.583232 systemd[1]: Stopped systemd-modules-load.service. Feb 13 05:12:52.598194 systemd[1]: Stopping systemd-udevd.service... Feb 13 05:12:52.616406 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 05:12:52.617676 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 05:12:52.617978 systemd[1]: Stopped systemd-udevd.service. Feb 13 05:12:52.637078 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 05:12:52.637137 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 05:12:52.650810 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 05:12:52.650852 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 05:12:52.666877 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 05:12:52.666959 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 05:12:52.682989 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 05:12:52.683114 systemd[1]: Stopped dracut-cmdline.service. Feb 13 05:12:52.697809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 05:12:52.697844 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 05:12:52.713228 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 05:12:52.727721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 05:12:52.727771 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 05:12:52.744150 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 05:12:52.744266 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 05:12:52.886973 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 05:12:52.887236 systemd[1]: Stopped network-cleanup.service. Feb 13 05:12:52.899137 systemd[1]: Reached target initrd-switch-root.target. Feb 13 05:12:52.915837 systemd[1]: Starting initrd-switch-root.service... Feb 13 05:12:52.932491 systemd[1]: Switching root. 
Feb 13 05:12:52.980678 systemd-journald[269]: Journal stopped Feb 13 05:12:56.898387 kernel: SELinux: Class mctp_socket not defined in policy. Feb 13 05:12:56.898399 kernel: SELinux: Class anon_inode not defined in policy. Feb 13 05:12:56.898408 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 13 05:12:56.898413 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 05:12:56.898418 kernel: SELinux: policy capability open_perms=1 Feb 13 05:12:56.898424 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 05:12:56.898430 kernel: SELinux: policy capability always_check_network=0 Feb 13 05:12:56.898435 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 05:12:56.898441 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 05:12:56.898447 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 05:12:56.898452 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 05:12:56.898458 systemd[1]: Successfully loaded SELinux policy in 318.447ms. Feb 13 05:12:56.898465 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.760ms. Feb 13 05:12:56.898471 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 05:12:56.898479 systemd[1]: Detected architecture x86-64. Feb 13 05:12:56.898486 systemd[1]: Detected first boot. Feb 13 05:12:56.898491 systemd[1]: Hostname set to . Feb 13 05:12:56.898498 systemd[1]: Initializing machine ID from random generator. Feb 13 05:12:56.898504 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 13 05:12:56.898510 systemd[1]: Populated /etc with preset unit settings. 
Feb 13 05:12:56.898516 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 05:12:56.898523 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 05:12:56.898531 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 05:12:56.898537 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 05:12:56.898543 systemd[1]: Stopped initrd-switch-root.service. Feb 13 05:12:56.898549 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 05:12:56.898556 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 13 05:12:56.898563 systemd[1]: Created slice system-addon\x2drun.slice. Feb 13 05:12:56.898570 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 13 05:12:56.898576 systemd[1]: Created slice system-getty.slice. Feb 13 05:12:56.898584 systemd[1]: Created slice system-modprobe.slice. Feb 13 05:12:56.898591 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 13 05:12:56.898597 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 13 05:12:56.898603 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 13 05:12:56.898609 systemd[1]: Created slice user.slice. Feb 13 05:12:56.898615 systemd[1]: Started systemd-ask-password-console.path. Feb 13 05:12:56.898622 systemd[1]: Started systemd-ask-password-wall.path. Feb 13 05:12:56.898629 systemd[1]: Set up automount boot.automount. Feb 13 05:12:56.898635 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 13 05:12:56.898641 systemd[1]: Stopped target initrd-switch-root.target. 
Feb 13 05:12:56.898649 systemd[1]: Stopped target initrd-fs.target. Feb 13 05:12:56.898655 systemd[1]: Stopped target initrd-root-fs.target. Feb 13 05:12:56.898662 systemd[1]: Reached target integritysetup.target. Feb 13 05:12:56.898668 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 05:12:56.898676 systemd[1]: Reached target remote-fs.target. Feb 13 05:12:56.898682 systemd[1]: Reached target slices.target. Feb 13 05:12:56.898688 systemd[1]: Reached target swap.target. Feb 13 05:12:56.898696 systemd[1]: Reached target torcx.target. Feb 13 05:12:56.898703 systemd[1]: Reached target veritysetup.target. Feb 13 05:12:56.898709 systemd[1]: Listening on systemd-coredump.socket. Feb 13 05:12:56.898715 systemd[1]: Listening on systemd-initctl.socket. Feb 13 05:12:56.898722 systemd[1]: Listening on systemd-networkd.socket. Feb 13 05:12:56.898730 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 05:12:56.898736 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 05:12:56.898743 systemd[1]: Listening on systemd-userdbd.socket. Feb 13 05:12:56.898749 systemd[1]: Mounting dev-hugepages.mount... Feb 13 05:12:56.898756 systemd[1]: Mounting dev-mqueue.mount... Feb 13 05:12:56.898762 systemd[1]: Mounting media.mount... Feb 13 05:12:56.898770 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 05:12:56.898777 systemd[1]: Mounting sys-kernel-debug.mount... Feb 13 05:12:56.898783 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 13 05:12:56.898790 systemd[1]: Mounting tmp.mount... Feb 13 05:12:56.898796 systemd[1]: Starting flatcar-tmpfiles.service... Feb 13 05:12:56.898803 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 13 05:12:56.898810 systemd[1]: Starting kmod-static-nodes.service... Feb 13 05:12:56.898816 systemd[1]: Starting modprobe@configfs.service... Feb 13 05:12:56.898823 systemd[1]: Starting modprobe@dm_mod.service... 
Feb 13 05:12:56.898830 systemd[1]: Starting modprobe@drm.service... Feb 13 05:12:56.898837 systemd[1]: Starting modprobe@efi_pstore.service... Feb 13 05:12:56.898844 systemd[1]: Starting modprobe@fuse.service... Feb 13 05:12:56.898850 kernel: fuse: init (API version 7.34) Feb 13 05:12:56.898856 systemd[1]: Starting modprobe@loop.service... Feb 13 05:12:56.898862 kernel: loop: module loaded Feb 13 05:12:56.898869 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 05:12:56.898875 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 05:12:56.898883 systemd[1]: Stopped systemd-fsck-root.service. Feb 13 05:12:56.898890 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 05:12:56.898896 kernel: kauditd_printk_skb: 71 callbacks suppressed Feb 13 05:12:56.898902 kernel: audit: type=1131 audit(1707801176.539:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:56.898908 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 05:12:56.898915 kernel: audit: type=1131 audit(1707801176.627:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:56.898921 systemd[1]: Stopped systemd-journald.service. Feb 13 05:12:56.898927 kernel: audit: type=1130 audit(1707801176.691:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:12:56.898934 kernel: audit: type=1131 audit(1707801176.691:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:56.898940 kernel: audit: type=1334 audit(1707801176.777:118): prog-id=21 op=LOAD Feb 13 05:12:56.898946 kernel: audit: type=1334 audit(1707801176.795:119): prog-id=22 op=LOAD Feb 13 05:12:56.898952 kernel: audit: type=1334 audit(1707801176.813:120): prog-id=23 op=LOAD Feb 13 05:12:56.898958 kernel: audit: type=1334 audit(1707801176.831:121): prog-id=19 op=UNLOAD Feb 13 05:12:56.898963 systemd[1]: Starting systemd-journald.service... Feb 13 05:12:56.898970 kernel: audit: type=1334 audit(1707801176.831:122): prog-id=20 op=UNLOAD Feb 13 05:12:56.898976 systemd[1]: Starting systemd-modules-load.service... Feb 13 05:12:56.898984 kernel: audit: type=1305 audit(1707801176.894:123): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 05:12:56.898991 systemd-journald[1251]: Journal started Feb 13 05:12:56.899015 systemd-journald[1251]: Runtime Journal (/run/log/journal/4123a265904b434f9fc91dd79d963322) is 8.0M, max 640.1M, 632.1M free. 
Feb 13 05:12:53.400000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 05:12:53.672000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 05:12:53.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 05:12:53.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 05:12:53.674000 audit: BPF prog-id=10 op=LOAD Feb 13 05:12:53.674000 audit: BPF prog-id=10 op=UNLOAD Feb 13 05:12:53.674000 audit: BPF prog-id=11 op=LOAD Feb 13 05:12:53.674000 audit: BPF prog-id=11 op=UNLOAD Feb 13 05:12:53.744000 audit[1141]: AVC avc: denied { associate } for pid=1141 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 13 05:12:53.744000 audit[1141]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001d989c a1=c00015adf8 a2=c000163ac0 a3=32 items=0 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:53.744000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 05:12:53.769000 audit[1141]: AVC 
avc: denied { associate } for pid=1141 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 13 05:12:53.769000 audit[1141]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d9975 a2=1ed a3=0 items=2 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:53.769000 audit: CWD cwd="/" Feb 13 05:12:53.769000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:53.769000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:53.769000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 05:12:55.284000 audit: BPF prog-id=12 op=LOAD Feb 13 05:12:55.284000 audit: BPF prog-id=3 op=UNLOAD Feb 13 05:12:55.284000 audit: BPF prog-id=13 op=LOAD Feb 13 05:12:55.284000 audit: BPF prog-id=14 op=LOAD Feb 13 05:12:55.284000 audit: BPF prog-id=4 op=UNLOAD Feb 13 05:12:55.284000 audit: BPF prog-id=5 op=UNLOAD Feb 13 05:12:55.284000 audit: BPF prog-id=15 op=LOAD Feb 13 05:12:55.284000 audit: BPF prog-id=12 op=UNLOAD Feb 13 05:12:55.285000 audit: BPF prog-id=16 op=LOAD Feb 13 05:12:55.285000 audit: BPF prog-id=17 op=LOAD Feb 13 05:12:55.285000 audit: BPF prog-id=13 op=UNLOAD Feb 13 05:12:55.285000 audit: BPF prog-id=14 op=UNLOAD Feb 13 
05:12:55.285000 audit: BPF prog-id=18 op=LOAD Feb 13 05:12:55.285000 audit: BPF prog-id=15 op=UNLOAD Feb 13 05:12:55.285000 audit: BPF prog-id=19 op=LOAD Feb 13 05:12:55.285000 audit: BPF prog-id=20 op=LOAD Feb 13 05:12:55.285000 audit: BPF prog-id=16 op=UNLOAD Feb 13 05:12:55.285000 audit: BPF prog-id=17 op=UNLOAD Feb 13 05:12:55.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:55.334000 audit: BPF prog-id=18 op=UNLOAD Feb 13 05:12:55.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:55.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:56.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:56.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:56.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:12:56.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:56.777000 audit: BPF prog-id=21 op=LOAD Feb 13 05:12:56.795000 audit: BPF prog-id=22 op=LOAD Feb 13 05:12:56.813000 audit: BPF prog-id=23 op=LOAD Feb 13 05:12:56.831000 audit: BPF prog-id=19 op=UNLOAD Feb 13 05:12:56.831000 audit: BPF prog-id=20 op=UNLOAD Feb 13 05:12:56.894000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 05:12:53.741830 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 05:12:55.282877 systemd[1]: Queued start job for default target multi-user.target. Feb 13 05:12:53.742288 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 05:12:55.286520 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 05:12:53.742304 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 05:12:53.742330 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 13 05:12:53.742339 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 13 05:12:53.742364 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 13 05:12:53.742374 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 13 05:12:53.742522 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 13 05:12:53.742554 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 05:12:53.742565 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 05:12:53.743098 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 13 05:12:53.743126 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=debug msg="new archive/reference added to cache" 
format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 13 05:12:53.743141 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 13 05:12:53.743153 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 13 05:12:53.743166 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 13 05:12:53.743177 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 13 05:12:54.931717 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:54Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 05:12:54.931856 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:54Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 05:12:54.931910 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:54Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Feb 13 05:12:54.932002 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:54Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 05:12:54.932032 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:54Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 13 05:12:54.932065 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T05:12:54Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 13 05:12:56.894000 audit[1251]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe1ec69d80 a2=4000 a3=7ffe1ec69e1c items=0 ppid=1 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:56.894000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 13 05:12:56.976769 systemd[1]: Starting systemd-network-generator.service... Feb 13 05:12:57.003633 systemd[1]: Starting systemd-remount-fs.service... Feb 13 05:12:57.030658 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 05:12:57.073871 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 05:12:57.073892 systemd[1]: Stopped verity-setup.service. 
Feb 13 05:12:57.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.118625 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 05:12:57.138769 systemd[1]: Started systemd-journald.service. Feb 13 05:12:57.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.147222 systemd[1]: Mounted dev-hugepages.mount. Feb 13 05:12:57.154848 systemd[1]: Mounted dev-mqueue.mount. Feb 13 05:12:57.161855 systemd[1]: Mounted media.mount. Feb 13 05:12:57.168847 systemd[1]: Mounted sys-kernel-debug.mount. Feb 13 05:12:57.177821 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 13 05:12:57.186809 systemd[1]: Mounted tmp.mount. Feb 13 05:12:57.193894 systemd[1]: Finished flatcar-tmpfiles.service. Feb 13 05:12:57.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.201945 systemd[1]: Finished kmod-static-nodes.service. Feb 13 05:12:57.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.210938 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 05:12:57.211045 systemd[1]: Finished modprobe@configfs.service. 
Feb 13 05:12:57.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.220014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 05:12:57.220145 systemd[1]: Finished modprobe@dm_mod.service. Feb 13 05:12:57.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.229133 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 05:12:57.229327 systemd[1]: Finished modprobe@drm.service. Feb 13 05:12:57.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.238229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 05:12:57.238471 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 13 05:12:57.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.247437 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 05:12:57.247769 systemd[1]: Finished modprobe@fuse.service. Feb 13 05:12:57.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.256403 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 05:12:57.256721 systemd[1]: Finished modprobe@loop.service. Feb 13 05:12:57.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.266410 systemd[1]: Finished systemd-modules-load.service. 
Feb 13 05:12:57.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.276369 systemd[1]: Finished systemd-network-generator.service. Feb 13 05:12:57.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.286350 systemd[1]: Finished systemd-remount-fs.service. Feb 13 05:12:57.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.295419 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 05:12:57.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.305872 systemd[1]: Reached target network-pre.target. Feb 13 05:12:57.316374 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 13 05:12:57.328218 systemd[1]: Mounting sys-kernel-config.mount... Feb 13 05:12:57.335849 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 05:12:57.339101 systemd[1]: Starting systemd-hwdb-update.service... Feb 13 05:12:57.348090 systemd[1]: Starting systemd-journal-flush.service... Feb 13 05:12:57.356889 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 05:12:57.359290 systemd[1]: Starting systemd-random-seed.service... 
Feb 13 05:12:57.360148 systemd-journald[1251]: Time spent on flushing to /var/log/journal/4123a265904b434f9fc91dd79d963322 is 15.771ms for 1630 entries. Feb 13 05:12:57.360148 systemd-journald[1251]: System Journal (/var/log/journal/4123a265904b434f9fc91dd79d963322) is 8.0M, max 195.6M, 187.6M free. Feb 13 05:12:57.400246 systemd-journald[1251]: Received client request to flush runtime journal. Feb 13 05:12:57.374712 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 13 05:12:57.375177 systemd[1]: Starting systemd-sysctl.service... Feb 13 05:12:57.389171 systemd[1]: Starting systemd-sysusers.service... Feb 13 05:12:57.396316 systemd[1]: Starting systemd-udev-settle.service... Feb 13 05:12:57.403729 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 13 05:12:57.411765 systemd[1]: Mounted sys-kernel-config.mount. Feb 13 05:12:57.419828 systemd[1]: Finished systemd-journal-flush.service. Feb 13 05:12:57.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.427849 systemd[1]: Finished systemd-random-seed.service. Feb 13 05:12:57.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.435821 systemd[1]: Finished systemd-sysctl.service. Feb 13 05:12:57.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.443775 systemd[1]: Finished systemd-sysusers.service. 
Feb 13 05:12:57.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.452786 systemd[1]: Reached target first-boot-complete.target. Feb 13 05:12:57.460960 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 05:12:57.660182 systemd[1]: Finished systemd-hwdb-update.service. Feb 13 05:12:57.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.668000 audit: BPF prog-id=24 op=LOAD Feb 13 05:12:57.668000 audit: BPF prog-id=25 op=LOAD Feb 13 05:12:57.668000 audit: BPF prog-id=7 op=UNLOAD Feb 13 05:12:57.668000 audit: BPF prog-id=8 op=UNLOAD Feb 13 05:12:57.669889 systemd[1]: Starting systemd-udevd.service... Feb 13 05:12:57.681371 systemd-udevd[1269]: Using default interface naming scheme 'v252'. Feb 13 05:12:57.700900 systemd[1]: Started systemd-udevd.service. Feb 13 05:12:57.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:57.710689 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 13 05:12:57.710000 audit: BPF prog-id=26 op=LOAD Feb 13 05:12:57.712003 systemd[1]: Starting systemd-networkd.service... 
Feb 13 05:12:57.738599 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 05:12:57.738700 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 05:12:57.780818 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1288) Feb 13 05:12:57.780893 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 05:12:57.801000 audit: BPF prog-id=27 op=LOAD Feb 13 05:12:57.801000 audit: BPF prog-id=28 op=LOAD Feb 13 05:12:57.801000 audit: BPF prog-id=29 op=LOAD Feb 13 05:12:57.802772 systemd[1]: Starting systemd-userdbd.service... Feb 13 05:12:57.807588 kernel: IPMI message handler: version 39.2 Feb 13 05:12:57.807627 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 05:12:57.807646 kernel: ACPI: button: Power Button [PWRF] Feb 13 05:12:57.843815 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 05:12:57.754000 audit[1278]: AVC avc: denied { confidentiality } for pid=1278 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 05:12:57.754000 audit[1278]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fb96591a010 a1=4d8bc a2=7fb967644bc5 a3=5 items=42 ppid=1269 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:57.881613 kernel: ipmi device interface Feb 13 05:12:57.754000 audit: CWD cwd="/" Feb 13 05:12:57.754000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=1 name=(null) inode=25520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=2 name=(null) inode=25520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=3 name=(null) inode=25521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=4 name=(null) inode=25520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=5 name=(null) inode=25522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=6 name=(null) inode=25520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=7 name=(null) inode=25523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=8 name=(null) inode=25523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=9 name=(null) inode=25524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=10 name=(null) inode=25523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=11 name=(null) inode=25525 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=12 name=(null) inode=25523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=13 name=(null) inode=25526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=14 name=(null) inode=25523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=15 name=(null) inode=25527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=16 name=(null) inode=25523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=17 name=(null) inode=25528 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=18 name=(null) inode=25520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=19 name=(null) inode=25529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 
05:12:57.754000 audit: PATH item=20 name=(null) inode=25529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=21 name=(null) inode=25530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=22 name=(null) inode=25529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=23 name=(null) inode=25531 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=24 name=(null) inode=25529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=25 name=(null) inode=25532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=26 name=(null) inode=25529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=27 name=(null) inode=25533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=28 name=(null) inode=25529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=29 
name=(null) inode=25534 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=30 name=(null) inode=25520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=31 name=(null) inode=25535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=32 name=(null) inode=25535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=33 name=(null) inode=25536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=34 name=(null) inode=25535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=35 name=(null) inode=25537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=36 name=(null) inode=25535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=37 name=(null) inode=25538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=38 name=(null) inode=25535 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=39 name=(null) inode=25539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=40 name=(null) inode=25535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PATH item=41 name=(null) inode=25540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:12:57.754000 audit: PROCTITLE proctitle="(udev-worker)" Feb 13 05:12:57.922847 systemd[1]: Started systemd-userdbd.service. Feb 13 05:12:57.939782 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 05:12:57.939977 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 05:12:57.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:12:57.962589 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI) Feb 13 05:12:57.968588 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 05:12:57.968721 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 05:12:57.968832 kernel: ipmi_si: IPMI System Interface driver Feb 13 05:12:57.968850 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 05:12:57.968941 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 05:12:57.968965 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 05:12:57.968989 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 05:12:57.969092 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 05:12:58.128592 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 05:12:58.186682 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 05:12:58.186881 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 05:12:58.186911 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 05:12:58.269266 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 13 05:12:58.269388 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 05:12:58.269461 kernel: iTCO_wdt iTCO_wdt: initialized. 
heartbeat=30 sec (nowayout=0) Feb 13 05:12:58.326641 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 13 05:12:58.357890 systemd-networkd[1301]: bond0: netdev ready Feb 13 05:12:58.360379 systemd-networkd[1301]: lo: Link UP Feb 13 05:12:58.360382 systemd-networkd[1301]: lo: Gained carrier Feb 13 05:12:58.361190 systemd-networkd[1301]: Enumeration completed Feb 13 05:12:58.361298 systemd[1]: Started systemd-networkd.service. Feb 13 05:12:58.361498 systemd-networkd[1301]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 05:12:58.365907 systemd-networkd[1301]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:bd.network. Feb 13 05:12:58.368569 kernel: intel_rapl_common: Found RAPL domain package Feb 13 05:12:58.368599 kernel: intel_rapl_common: Found RAPL domain core Feb 13 05:12:58.368613 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 05:12:58.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:58.403587 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 05:12:58.434589 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 05:12:58.436803 systemd[1]: Finished systemd-udev-settle.service. Feb 13 05:12:58.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:58.445309 systemd[1]: Starting lvm2-activation-early.service... Feb 13 05:12:58.460598 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 05:12:58.486001 systemd[1]: Finished lvm2-activation-early.service. 
Feb 13 05:12:58.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:58.494716 systemd[1]: Reached target cryptsetup.target. Feb 13 05:12:58.503247 systemd[1]: Starting lvm2-activation.service... Feb 13 05:12:58.505409 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 05:12:58.535008 systemd[1]: Finished lvm2-activation.service. Feb 13 05:12:58.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:58.543714 systemd[1]: Reached target local-fs-pre.target. Feb 13 05:12:58.551678 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 05:12:58.551692 systemd[1]: Reached target local-fs.target. Feb 13 05:12:58.559669 systemd[1]: Reached target machines.target. Feb 13 05:12:58.568274 systemd[1]: Starting ldconfig.service... Feb 13 05:12:58.576247 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 05:12:58.576271 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 05:12:58.576838 systemd[1]: Starting systemd-boot-update.service... Feb 13 05:12:58.585055 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 05:12:58.596162 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 05:12:58.596306 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. 
Feb 13 05:12:58.596329 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 05:12:58.596886 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 05:12:58.597088 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1377 (bootctl) Feb 13 05:12:58.597798 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 05:12:58.617000 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 05:12:58.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:58.785098 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 05:12:58.859622 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 05:12:58.884629 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 13 05:12:58.884672 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 05:12:58.905615 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 05:12:58.923337 systemd-networkd[1301]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:bc.network. Feb 13 05:12:59.032731 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 05:12:59.094612 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 05:12:59.118641 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 13 05:12:59.120512 systemd-networkd[1301]: bond0: Link UP Feb 13 05:12:59.136715 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 13 05:12:59.158042 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Feb 13 05:12:59.158073 kernel: bond0: active interface up! Feb 13 05:12:59.179595 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Feb 13 05:12:59.179565 systemd-networkd[1301]: enp1s0f1np1: Link UP Feb 13 05:12:59.179760 systemd-networkd[1301]: enp1s0f1np1: Gained carrier Feb 13 05:12:59.180805 systemd-networkd[1301]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:84:bc.network. Feb 13 05:12:59.183274 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 05:12:59.189246 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 05:12:59.189590 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 05:12:59.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:59.211165 systemd-fsck[1386]: fsck.fat 4.2 (2021-01-31) Feb 13 05:12:59.211165 systemd-fsck[1386]: /dev/sdb1: 789 files, 115339/258078 clusters Feb 13 05:12:59.212017 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 13 05:12:59.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:59.223463 systemd[1]: Mounting boot.mount... Feb 13 05:12:59.235288 systemd[1]: Mounted boot.mount. Feb 13 05:12:59.252991 systemd[1]: Finished systemd-boot-update.service. 
Feb 13 05:12:59.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:59.263867 systemd-networkd[1301]: bond0: Gained carrier Feb 13 05:12:59.263993 systemd-networkd[1301]: enp1s0f0np0: Link UP Feb 13 05:12:59.264123 systemd-networkd[1301]: enp1s0f0np0: Gained carrier Feb 13 05:12:59.269875 systemd-networkd[1301]: enp1s0f1np1: Link DOWN Feb 13 05:12:59.269878 systemd-networkd[1301]: enp1s0f1np1: Lost carrier Feb 13 05:12:59.280862 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 05:12:59.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:59.299397 systemd[1]: Starting audit-rules.service... Feb 13 05:12:59.305592 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 05:12:59.318000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 05:12:59.318000 audit[1405]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffee3339940 a2=420 a3=0 items=0 ppid=1390 pid=1405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:59.318000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 05:12:59.319638 augenrules[1405]: No rules Feb 13 05:12:59.321381 systemd[1]: Starting clean-ca-certificates.service... 
Feb 13 05:12:59.328585 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 05:12:59.345261 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 05:12:59.350586 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 05:12:59.372600 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 05:12:59.372713 systemd[1]: Starting systemd-resolved.service... Feb 13 05:12:59.393949 systemd[1]: Starting systemd-timesyncd.service... Feb 13 05:12:59.394585 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 05:12:59.410206 systemd[1]: Starting systemd-update-utmp.service... Feb 13 05:12:59.416586 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 05:12:59.418036 ldconfig[1376]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 05:12:59.432462 systemd[1]: Finished ldconfig.service. Feb 13 05:12:59.437586 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 05:12:59.437606 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 05:12:59.452585 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 05:12:59.456872 systemd-networkd[1301]: enp1s0f1np1: Link UP Feb 13 05:12:59.456875 systemd-networkd[1301]: enp1s0f1np1: Gained carrier Feb 13 05:12:59.471587 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Feb 13 05:12:59.493765 systemd[1]: Finished audit-rules.service. Feb 13 05:12:59.500764 systemd[1]: Finished clean-ca-certificates.service. Feb 13 05:12:59.508728 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 13 05:12:59.520446 systemd[1]: Starting systemd-update-done.service... 
Feb 13 05:12:59.527622 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 05:12:59.527858 systemd[1]: Finished systemd-update-utmp.service. Feb 13 05:12:59.535736 systemd[1]: Finished systemd-update-done.service. Feb 13 05:12:59.546258 systemd[1]: Started systemd-timesyncd.service. Feb 13 05:12:59.547644 systemd-resolved[1412]: Positive Trust Anchors: Feb 13 05:12:59.547651 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 05:12:59.547669 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 05:12:59.551718 systemd-resolved[1412]: Using system hostname 'ci-3510.3.2-a-69b5ddf616'. Feb 13 05:12:59.554708 systemd[1]: Started systemd-resolved.service. Feb 13 05:12:59.562698 systemd[1]: Reached target network.target. Feb 13 05:12:59.570663 systemd[1]: Reached target nss-lookup.target. Feb 13 05:12:59.585626 systemd[1]: Reached target sysinit.target. Feb 13 05:12:59.589588 kernel: bond0: (slave enp1s0f1np1): link status up again after 100 ms Feb 13 05:12:59.605657 systemd[1]: Started motdgen.path. Feb 13 05:12:59.608588 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Feb 13 05:12:59.614677 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 13 05:12:59.624665 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Feb 13 05:12:59.632659 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 05:12:59.632675 systemd[1]: Reached target paths.target. Feb 13 05:12:59.639659 systemd[1]: Reached target time-set.target. Feb 13 05:12:59.647711 systemd[1]: Started logrotate.timer. Feb 13 05:12:59.654703 systemd[1]: Started mdadm.timer. Feb 13 05:12:59.661657 systemd[1]: Reached target timers.target. Feb 13 05:12:59.668787 systemd[1]: Listening on dbus.socket. Feb 13 05:12:59.676191 systemd[1]: Starting docker.socket... Feb 13 05:12:59.683937 systemd[1]: Listening on sshd.socket. Feb 13 05:12:59.690668 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 05:12:59.690875 systemd[1]: Listening on docker.socket. Feb 13 05:12:59.697660 systemd[1]: Reached target sockets.target. Feb 13 05:12:59.705615 systemd[1]: Reached target basic.target. Feb 13 05:12:59.712634 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 05:12:59.712645 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 05:12:59.713072 systemd[1]: Starting containerd.service... Feb 13 05:12:59.720003 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 13 05:12:59.728081 systemd[1]: Starting coreos-metadata.service... Feb 13 05:12:59.735137 systemd[1]: Starting dbus.service... Feb 13 05:12:59.741143 systemd[1]: Starting enable-oem-cloudinit.service... Feb 13 05:12:59.746458 jq[1428]: false Feb 13 05:12:59.748143 systemd[1]: Starting extend-filesystems.service... 
Feb 13 05:12:59.748733 coreos-metadata[1421]: Feb 13 05:12:59.748 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 05:12:59.753896 dbus-daemon[1427]: [system] SELinux support is enabled Feb 13 05:12:59.754668 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 13 05:12:59.755258 systemd[1]: Starting motdgen.service... Feb 13 05:12:59.756017 extend-filesystems[1429]: Found sda Feb 13 05:12:59.756017 extend-filesystems[1429]: Found sdb Feb 13 05:12:59.796699 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Feb 13 05:12:59.763332 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 05:12:59.796777 coreos-metadata[1424]: Feb 13 05:12:59.756 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 05:12:59.796872 extend-filesystems[1429]: Found sdb1 Feb 13 05:12:59.796872 extend-filesystems[1429]: Found sdb2 Feb 13 05:12:59.796872 extend-filesystems[1429]: Found sdb3 Feb 13 05:12:59.796872 extend-filesystems[1429]: Found usr Feb 13 05:12:59.796872 extend-filesystems[1429]: Found sdb4 Feb 13 05:12:59.796872 extend-filesystems[1429]: Found sdb6 Feb 13 05:12:59.796872 extend-filesystems[1429]: Found sdb7 Feb 13 05:12:59.796872 extend-filesystems[1429]: Found sdb9 Feb 13 05:12:59.796872 extend-filesystems[1429]: Checking size of /dev/sdb9 Feb 13 05:12:59.796872 extend-filesystems[1429]: Resized partition /dev/sdb9 Feb 13 05:12:59.785223 systemd[1]: Starting prepare-critools.service... Feb 13 05:12:59.927763 extend-filesystems[1445]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 05:12:59.804247 systemd[1]: Starting prepare-helm.service... Feb 13 05:12:59.818209 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 05:12:59.832178 systemd[1]: Starting sshd-keygen.service... Feb 13 05:12:59.839971 systemd[1]: Starting systemd-logind.service... 
Feb 13 05:12:59.856621 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 05:12:59.943043 update_engine[1460]: I0213 05:12:59.917954 1460 main.cc:92] Flatcar Update Engine starting Feb 13 05:12:59.943043 update_engine[1460]: I0213 05:12:59.921615 1460 update_check_scheduler.cc:74] Next update check in 10m10s Feb 13 05:12:59.857212 systemd[1]: Starting tcsd.service... Feb 13 05:12:59.943224 jq[1461]: true Feb 13 05:12:59.860057 systemd-logind[1458]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 05:12:59.860066 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 05:12:59.943483 tar[1463]: ./ Feb 13 05:12:59.943483 tar[1463]: ./loopback Feb 13 05:12:59.860074 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 05:12:59.860174 systemd-logind[1458]: New seat seat0. Feb 13 05:12:59.869010 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 05:12:59.869401 systemd[1]: Starting update-engine.service... Feb 13 05:12:59.883303 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 13 05:12:59.897965 systemd[1]: Started dbus.service. Feb 13 05:12:59.921628 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 05:12:59.921735 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 05:12:59.921898 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 05:12:59.921994 systemd[1]: Finished motdgen.service. Feb 13 05:12:59.935787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 05:12:59.935896 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 13 05:12:59.954268 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 05:12:59.954920 tar[1464]: crictl Feb 13 05:12:59.955815 jq[1469]: true Feb 13 05:12:59.956762 tar[1465]: linux-amd64/helm Feb 13 05:12:59.958558 tar[1463]: ./bandwidth Feb 13 05:12:59.959508 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 05:12:59.959649 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 05:12:59.962495 systemd[1]: Started update-engine.service. Feb 13 05:12:59.963475 env[1470]: time="2024-02-13T05:12:59.963446762Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 05:12:59.972027 env[1470]: time="2024-02-13T05:12:59.972007851Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 05:12:59.974268 env[1470]: time="2024-02-13T05:12:59.974258889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 05:12:59.974676 systemd[1]: Started systemd-logind.service. Feb 13 05:12:59.974967 env[1470]: time="2024-02-13T05:12:59.974952723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 05:12:59.974967 env[1470]: time="2024-02-13T05:12:59.974966621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 05:12:59.975093 env[1470]: time="2024-02-13T05:12:59.975081779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 05:12:59.975093 env[1470]: time="2024-02-13T05:12:59.975092018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 05:12:59.975133 env[1470]: time="2024-02-13T05:12:59.975099558Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 05:12:59.975133 env[1470]: time="2024-02-13T05:12:59.975104974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 05:12:59.975170 env[1470]: time="2024-02-13T05:12:59.975150966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 05:12:59.977166 env[1470]: time="2024-02-13T05:12:59.977126664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 05:12:59.977212 env[1470]: time="2024-02-13T05:12:59.977197500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 05:12:59.977212 env[1470]: time="2024-02-13T05:12:59.977208023Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 05:12:59.979422 env[1470]: time="2024-02-13T05:12:59.979379659Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 05:12:59.979422 env[1470]: time="2024-02-13T05:12:59.979395943Z" level=info msg="metadata content store policy set" policy=shared Feb 13 05:12:59.984325 systemd[1]: Started locksmithd.service. 
Feb 13 05:12:59.988318 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Feb 13 05:12:59.990706 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 05:12:59.990787 systemd[1]: Reached target system-config.target. Feb 13 05:12:59.991787 tar[1463]: ./ptp Feb 13 05:12:59.991899 env[1470]: time="2024-02-13T05:12:59.991882557Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 05:12:59.991937 env[1470]: time="2024-02-13T05:12:59.991902647Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 05:12:59.991937 env[1470]: time="2024-02-13T05:12:59.991910377Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 05:12:59.991937 env[1470]: time="2024-02-13T05:12:59.991929358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 05:12:59.991992 env[1470]: time="2024-02-13T05:12:59.991939021Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 05:12:59.991992 env[1470]: time="2024-02-13T05:12:59.991946727Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 05:12:59.991992 env[1470]: time="2024-02-13T05:12:59.991957576Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 05:12:59.991992 env[1470]: time="2024-02-13T05:12:59.991965864Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 05:12:59.991992 env[1470]: time="2024-02-13T05:12:59.991972822Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 13 05:12:59.991992 env[1470]: time="2024-02-13T05:12:59.991980031Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 05:12:59.991992 env[1470]: time="2024-02-13T05:12:59.991986937Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 05:12:59.992160 env[1470]: time="2024-02-13T05:12:59.991993902Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 05:12:59.992160 env[1470]: time="2024-02-13T05:12:59.992045244Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 05:12:59.992160 env[1470]: time="2024-02-13T05:12:59.992090666Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 05:12:59.992245 env[1470]: time="2024-02-13T05:12:59.992237074Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 05:12:59.992267 env[1470]: time="2024-02-13T05:12:59.992251402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992267 env[1470]: time="2024-02-13T05:12:59.992258643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 05:12:59.992312 env[1470]: time="2024-02-13T05:12:59.992285741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992312 env[1470]: time="2024-02-13T05:12:59.992293443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992312 env[1470]: time="2024-02-13T05:12:59.992300646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 13 05:12:59.992312 env[1470]: time="2024-02-13T05:12:59.992310472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992372 env[1470]: time="2024-02-13T05:12:59.992317150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992372 env[1470]: time="2024-02-13T05:12:59.992325366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992372 env[1470]: time="2024-02-13T05:12:59.992332559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992372 env[1470]: time="2024-02-13T05:12:59.992339480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992372 env[1470]: time="2024-02-13T05:12:59.992346416Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 05:12:59.992448 env[1470]: time="2024-02-13T05:12:59.992408340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992448 env[1470]: time="2024-02-13T05:12:59.992416774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992448 env[1470]: time="2024-02-13T05:12:59.992423450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992448 env[1470]: time="2024-02-13T05:12:59.992429371Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 05:12:59.992448 env[1470]: time="2024-02-13T05:12:59.992437963Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 05:12:59.992448 env[1470]: time="2024-02-13T05:12:59.992444022Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 05:12:59.992556 env[1470]: time="2024-02-13T05:12:59.992454416Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 05:12:59.992556 env[1470]: time="2024-02-13T05:12:59.992476064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 05:12:59.992622 env[1470]: time="2024-02-13T05:12:59.992596841Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.992629127Z" level=info msg="Connect containerd service" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.992648961Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.992925017Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.993019828Z" level=info msg="Start subscribing containerd event" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.993043512Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.993061056Z" level=info msg="Start recovering state" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.993068233Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.993094425Z" level=info msg="containerd successfully booted in 0.030036s" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.993112014Z" level=info msg="Start event monitor" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.993126447Z" level=info msg="Start snapshots syncer" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.993134921Z" level=info msg="Start cni network conf syncer for default" Feb 13 05:12:59.994429 env[1470]: time="2024-02-13T05:12:59.993141642Z" level=info msg="Start streaming server" Feb 13 05:12:59.998715 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 05:12:59.998828 systemd[1]: Reached target user-config.target. Feb 13 05:13:00.008191 systemd[1]: Started containerd.service. Feb 13 05:13:00.014924 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 13 05:13:00.015469 tar[1463]: ./vlan Feb 13 05:13:00.037391 tar[1463]: ./host-device Feb 13 05:13:00.041545 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 05:13:00.058709 tar[1463]: ./tuning Feb 13 05:13:00.077575 tar[1463]: ./vrf Feb 13 05:13:00.097339 tar[1463]: ./sbr Feb 13 05:13:00.116666 tar[1463]: ./tap Feb 13 05:13:00.138769 tar[1463]: ./dhcp Feb 13 05:13:00.195048 tar[1463]: ./static Feb 13 05:13:00.199944 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 05:13:00.211066 tar[1463]: ./firewall Feb 13 05:13:00.211817 systemd[1]: Finished sshd-keygen.service. Feb 13 05:13:00.221060 systemd[1]: Starting issuegen.service... Feb 13 05:13:00.228910 systemd[1]: Finished prepare-critools.service. 
Feb 13 05:13:00.230381 tar[1465]: linux-amd64/LICENSE Feb 13 05:13:00.230428 tar[1465]: linux-amd64/README.md Feb 13 05:13:00.235715 tar[1463]: ./macvlan Feb 13 05:13:00.237446 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 05:13:00.237550 systemd[1]: Finished issuegen.service. Feb 13 05:13:00.244943 systemd[1]: Finished prepare-helm.service. Feb 13 05:13:00.253468 systemd[1]: Starting systemd-user-sessions.service... Feb 13 05:13:00.257729 tar[1463]: ./dummy Feb 13 05:13:00.261847 systemd[1]: Finished systemd-user-sessions.service. Feb 13 05:13:00.279412 tar[1463]: ./bridge Feb 13 05:13:00.284564 systemd[1]: Started getty@tty1.service. Feb 13 05:13:00.284630 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Feb 13 05:13:00.292372 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 05:13:00.300761 systemd[1]: Reached target getty.target. Feb 13 05:13:00.312907 extend-filesystems[1445]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Feb 13 05:13:00.312907 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 05:13:00.312907 extend-filesystems[1445]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Feb 13 05:13:00.352648 extend-filesystems[1429]: Resized filesystem in /dev/sdb9 Feb 13 05:13:00.360692 tar[1463]: ./ipvlan Feb 13 05:13:00.360692 tar[1463]: ./portmap Feb 13 05:13:00.313281 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 05:13:00.313365 systemd[1]: Finished extend-filesystems.service. Feb 13 05:13:00.374157 tar[1463]: ./host-local Feb 13 05:13:00.398171 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 13 05:13:00.515661 systemd-networkd[1301]: bond0: Gained IPv6LL Feb 13 05:13:01.787771 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 05:13:05.307952 login[1531]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 05:13:05.312280 login[1530]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 05:13:05.315280 systemd-logind[1458]: New session 1 of user core. Feb 13 05:13:05.315989 systemd[1]: Created slice user-500.slice. Feb 13 05:13:05.316525 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 05:13:05.317804 systemd-logind[1458]: New session 2 of user core. Feb 13 05:13:05.321815 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 05:13:05.322482 systemd[1]: Starting user@500.service... Feb 13 05:13:05.324358 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:13:05.483101 systemd[1538]: Queued start job for default target default.target. Feb 13 05:13:05.484466 systemd[1538]: Reached target paths.target. Feb 13 05:13:05.484567 systemd[1538]: Reached target sockets.target. Feb 13 05:13:05.484669 systemd[1538]: Reached target timers.target. Feb 13 05:13:05.484741 systemd[1538]: Reached target basic.target. Feb 13 05:13:05.484909 systemd[1538]: Reached target default.target. Feb 13 05:13:05.485031 systemd[1538]: Startup finished in 157ms. Feb 13 05:13:05.485089 systemd[1]: Started user@500.service. Feb 13 05:13:05.487845 systemd[1]: Started session-1.scope. Feb 13 05:13:05.489640 systemd[1]: Started session-2.scope. 
Feb 13 05:13:05.700929 coreos-metadata[1424]: Feb 13 05:13:05.700 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 05:13:05.701722 coreos-metadata[1421]: Feb 13 05:13:05.700 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 05:13:06.701345 coreos-metadata[1424]: Feb 13 05:13:06.701 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 05:13:06.702121 coreos-metadata[1421]: Feb 13 05:13:06.701 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 05:13:07.233986 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 13 05:13:07.234136 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 13 05:13:07.773379 coreos-metadata[1424]: Feb 13 05:13:07.773 INFO Fetch successful Feb 13 05:13:07.773615 coreos-metadata[1421]: Feb 13 05:13:07.773 INFO Fetch successful Feb 13 05:13:07.795258 systemd[1]: Finished coreos-metadata.service. Feb 13 05:13:07.796149 systemd[1]: Started packet-phone-home.service. Feb 13 05:13:07.796243 unknown[1421]: wrote ssh authorized keys file for user: core Feb 13 05:13:07.801852 curl[1560]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 05:13:07.801852 curl[1560]: Dload Upload Total Spent Left Speed Feb 13 05:13:07.808576 update-ssh-keys[1561]: Updated "/home/core/.ssh/authorized_keys" Feb 13 05:13:07.808786 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 05:13:07.808975 systemd[1]: Reached target multi-user.target. Feb 13 05:13:07.809598 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 05:13:07.813718 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Feb 13 05:13:07.813786 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 05:13:07.813956 systemd[1]: Startup finished in 1.899s (kernel) + 20.230s (initrd) + 14.753s (userspace) = 36.883s. Feb 13 05:13:07.911428 systemd-timesyncd[1413]: Contacted time server 45.63.54.13:123 (0.flatcar.pool.ntp.org). Feb 13 05:13:07.911557 systemd-timesyncd[1413]: Initial clock synchronization to Tue 2024-02-13 05:13:08.176388 UTC. Feb 13 05:13:08.003351 curl[1560]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 05:13:08.005875 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 13 05:13:09.047475 systemd[1]: Created slice system-sshd.slice. Feb 13 05:13:09.048096 systemd[1]: Started sshd@0-147.75.49.59:22-139.178.68.195:55226.service. Feb 13 05:13:09.090665 sshd[1564]: Accepted publickey for core from 139.178.68.195 port 55226 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:13:09.091801 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:13:09.095658 systemd-logind[1458]: New session 3 of user core. Feb 13 05:13:09.096687 systemd[1]: Started session-3.scope. Feb 13 05:13:09.153167 systemd[1]: Started sshd@1-147.75.49.59:22-139.178.68.195:55234.service. Feb 13 05:13:09.183400 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 55234 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:13:09.184098 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:13:09.186276 systemd-logind[1458]: New session 4 of user core. Feb 13 05:13:09.186789 systemd[1]: Started session-4.scope. Feb 13 05:13:09.240769 sshd[1569]: pam_unix(sshd:session): session closed for user core Feb 13 05:13:09.242259 systemd[1]: sshd@1-147.75.49.59:22-139.178.68.195:55234.service: Deactivated successfully. 
Feb 13 05:13:09.242563 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 05:13:09.242988 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Feb 13 05:13:09.243463 systemd[1]: Started sshd@2-147.75.49.59:22-139.178.68.195:55244.service. Feb 13 05:13:09.243986 systemd-logind[1458]: Removed session 4. Feb 13 05:13:09.275062 sshd[1575]: Accepted publickey for core from 139.178.68.195 port 55244 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:13:09.276136 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:13:09.279675 systemd-logind[1458]: New session 5 of user core. Feb 13 05:13:09.280574 systemd[1]: Started session-5.scope. Feb 13 05:13:09.336026 sshd[1575]: pam_unix(sshd:session): session closed for user core Feb 13 05:13:09.337587 systemd[1]: sshd@2-147.75.49.59:22-139.178.68.195:55244.service: Deactivated successfully. Feb 13 05:13:09.337930 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 05:13:09.338308 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Feb 13 05:13:09.338816 systemd[1]: Started sshd@3-147.75.49.59:22-139.178.68.195:55252.service. Feb 13 05:13:09.339256 systemd-logind[1458]: Removed session 5. Feb 13 05:13:09.370129 sshd[1581]: Accepted publickey for core from 139.178.68.195 port 55252 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:13:09.371204 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:13:09.374589 systemd-logind[1458]: New session 6 of user core. Feb 13 05:13:09.375392 systemd[1]: Started session-6.scope. Feb 13 05:13:09.433504 sshd[1581]: pam_unix(sshd:session): session closed for user core Feb 13 05:13:09.435279 systemd[1]: sshd@3-147.75.49.59:22-139.178.68.195:55252.service: Deactivated successfully. Feb 13 05:13:09.435686 systemd[1]: session-6.scope: Deactivated successfully. 
Feb 13 05:13:09.436079 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Feb 13 05:13:09.436645 systemd[1]: Started sshd@4-147.75.49.59:22-139.178.68.195:55268.service. Feb 13 05:13:09.437146 systemd-logind[1458]: Removed session 6. Feb 13 05:13:09.468614 sshd[1588]: Accepted publickey for core from 139.178.68.195 port 55268 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:13:09.469733 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:13:09.473765 systemd-logind[1458]: New session 7 of user core. Feb 13 05:13:09.475199 systemd[1]: Started session-7.scope. Feb 13 05:13:09.561654 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 05:13:09.562280 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 05:13:13.154185 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 05:13:13.158575 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 05:13:13.158876 systemd[1]: Reached target network-online.target. Feb 13 05:13:13.159691 systemd[1]: Starting docker.service... 
Feb 13 05:13:13.181349 env[1611]: time="2024-02-13T05:13:13.181290906Z" level=info msg="Starting up" Feb 13 05:13:13.181980 env[1611]: time="2024-02-13T05:13:13.181970806Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 13 05:13:13.181980 env[1611]: time="2024-02-13T05:13:13.181978998Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 13 05:13:13.182041 env[1611]: time="2024-02-13T05:13:13.181988994Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 13 05:13:13.182041 env[1611]: time="2024-02-13T05:13:13.181994973Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 13 05:13:13.183025 env[1611]: time="2024-02-13T05:13:13.182973609Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 13 05:13:13.183025 env[1611]: time="2024-02-13T05:13:13.182982756Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 13 05:13:13.183025 env[1611]: time="2024-02-13T05:13:13.182993993Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 13 05:13:13.183025 env[1611]: time="2024-02-13T05:13:13.183000885Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 13 05:13:13.199022 env[1611]: time="2024-02-13T05:13:13.198976218Z" level=info msg="Loading containers: start." Feb 13 05:13:13.277627 kernel: Initializing XFRM netlink socket Feb 13 05:13:13.310646 env[1611]: time="2024-02-13T05:13:13.310623910Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 13 05:13:13.442230 systemd-networkd[1301]: docker0: Link UP Feb 13 05:13:13.461032 env[1611]: time="2024-02-13T05:13:13.460934393Z" level=info msg="Loading containers: done." 
Feb 13 05:13:13.481271 env[1611]: time="2024-02-13T05:13:13.481185850Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 05:13:13.481431 env[1611]: time="2024-02-13T05:13:13.481410889Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 13 05:13:13.481470 env[1611]: time="2024-02-13T05:13:13.481456804Z" level=info msg="Daemon has completed initialization" Feb 13 05:13:13.482180 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3238192501-merged.mount: Deactivated successfully. Feb 13 05:13:13.487875 systemd[1]: Started docker.service. Feb 13 05:13:13.491141 env[1611]: time="2024-02-13T05:13:13.491085092Z" level=info msg="API listen on /run/docker.sock" Feb 13 05:13:13.500963 systemd[1]: Reloading. Feb 13 05:13:13.531694 /usr/lib/systemd/system-generators/torcx-generator[1765]: time="2024-02-13T05:13:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 05:13:13.531714 /usr/lib/systemd/system-generators/torcx-generator[1765]: time="2024-02-13T05:13:13Z" level=info msg="torcx already run" Feb 13 05:13:13.583846 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 05:13:13.583854 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 13 05:13:13.594928 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 05:13:13.650383 systemd[1]: Started kubelet.service. Feb 13 05:13:13.673765 kubelet[1824]: E0213 05:13:13.673710 1824 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 13 05:13:13.675205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 05:13:13.675275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 05:13:14.380136 env[1470]: time="2024-02-13T05:13:14.380100760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 13 05:13:14.974932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount407595198.mount: Deactivated successfully. 
Feb 13 05:13:16.272553 env[1470]: time="2024-02-13T05:13:16.272502819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:16.273312 env[1470]: time="2024-02-13T05:13:16.273296110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:16.274209 env[1470]: time="2024-02-13T05:13:16.274196232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:16.275290 env[1470]: time="2024-02-13T05:13:16.275250982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:16.275767 env[1470]: time="2024-02-13T05:13:16.275751782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 13 05:13:16.281862 env[1470]: time="2024-02-13T05:13:16.281799122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 13 05:13:17.970851 env[1470]: time="2024-02-13T05:13:17.970770487Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:17.972016 env[1470]: time="2024-02-13T05:13:17.971960034Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 13 05:13:17.974063 env[1470]: time="2024-02-13T05:13:17.973999564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:17.976138 env[1470]: time="2024-02-13T05:13:17.976071885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:17.977194 env[1470]: time="2024-02-13T05:13:17.977134241Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 13 05:13:17.987404 env[1470]: time="2024-02-13T05:13:17.987350142Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 13 05:13:19.249436 env[1470]: time="2024-02-13T05:13:19.249401210Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:19.250168 env[1470]: time="2024-02-13T05:13:19.250154665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:19.251376 env[1470]: time="2024-02-13T05:13:19.251362955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:19.252358 env[1470]: time="2024-02-13T05:13:19.252345609Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:19.253122 env[1470]: time="2024-02-13T05:13:19.253110003Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 13 05:13:19.258709 env[1470]: time="2024-02-13T05:13:19.258682400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 13 05:13:20.132227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2813148037.mount: Deactivated successfully. Feb 13 05:13:20.465002 env[1470]: time="2024-02-13T05:13:20.464875287Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:20.465607 env[1470]: time="2024-02-13T05:13:20.465572032Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:20.466585 env[1470]: time="2024-02-13T05:13:20.466543976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:20.467172 env[1470]: time="2024-02-13T05:13:20.467131962Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:20.467487 env[1470]: time="2024-02-13T05:13:20.467451558Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference 
\"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 13 05:13:20.476395 env[1470]: time="2024-02-13T05:13:20.476351345Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 05:13:20.966519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2202569919.mount: Deactivated successfully. Feb 13 05:13:20.967885 env[1470]: time="2024-02-13T05:13:20.967827701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:20.968460 env[1470]: time="2024-02-13T05:13:20.968427158Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:20.969149 env[1470]: time="2024-02-13T05:13:20.969104192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:20.971088 env[1470]: time="2024-02-13T05:13:20.971023877Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:20.971464 env[1470]: time="2024-02-13T05:13:20.971421435Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 05:13:20.976557 env[1470]: time="2024-02-13T05:13:20.976517696Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 13 05:13:21.628409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290396030.mount: Deactivated successfully. Feb 13 05:13:23.836772 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 13 05:13:23.836911 systemd[1]: Stopped kubelet.service. Feb 13 05:13:23.837828 systemd[1]: Started kubelet.service. Feb 13 05:13:23.863214 kubelet[1911]: E0213 05:13:23.863137 1911 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 13 05:13:23.865274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 05:13:23.865344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 05:13:24.512097 env[1470]: time="2024-02-13T05:13:24.512039633Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:24.512727 env[1470]: time="2024-02-13T05:13:24.512694974Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:24.513565 env[1470]: time="2024-02-13T05:13:24.513551852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:24.514572 env[1470]: time="2024-02-13T05:13:24.514558192Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:24.515103 env[1470]: time="2024-02-13T05:13:24.515074086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 13 05:13:24.520988 
env[1470]: time="2024-02-13T05:13:24.520956873Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 13 05:13:25.063097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670933637.mount: Deactivated successfully. Feb 13 05:13:25.551577 env[1470]: time="2024-02-13T05:13:25.551522925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:25.552121 env[1470]: time="2024-02-13T05:13:25.552087486Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:25.552928 env[1470]: time="2024-02-13T05:13:25.552879263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:25.553555 env[1470]: time="2024-02-13T05:13:25.553520335Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:25.553867 env[1470]: time="2024-02-13T05:13:25.553831407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 13 05:13:27.586562 systemd[1]: Stopped kubelet.service. Feb 13 05:13:27.596770 systemd[1]: Reloading. 
Feb 13 05:13:27.627348 /usr/lib/systemd/system-generators/torcx-generator[2074]: time="2024-02-13T05:13:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 05:13:27.627367 /usr/lib/systemd/system-generators/torcx-generator[2074]: time="2024-02-13T05:13:27Z" level=info msg="torcx already run" Feb 13 05:13:27.693120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 05:13:27.693132 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 05:13:27.708327 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 05:13:27.766490 systemd[1]: Started kubelet.service. Feb 13 05:13:27.788916 kubelet[2134]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 05:13:27.788916 kubelet[2134]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 05:13:27.788916 kubelet[2134]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 05:13:27.788916 kubelet[2134]: I0213 05:13:27.788912 2134 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 05:13:27.946541 kubelet[2134]: I0213 05:13:27.946476 2134 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 13 05:13:27.946541 kubelet[2134]: I0213 05:13:27.946484 2134 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 05:13:27.946637 kubelet[2134]: I0213 05:13:27.946573 2134 server.go:837] "Client rotation is on, will bootstrap in background" Feb 13 05:13:27.948758 kubelet[2134]: I0213 05:13:27.948701 2134 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 05:13:27.949266 kubelet[2134]: E0213 05:13:27.949230 2134 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.49.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:27.966814 kubelet[2134]: I0213 05:13:27.966776 2134 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 05:13:27.966912 kubelet[2134]: I0213 05:13:27.966875 2134 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 05:13:27.966912 kubelet[2134]: I0213 05:13:27.966911 2134 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 05:13:27.966998 kubelet[2134]: I0213 05:13:27.966921 2134 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 05:13:27.966998 kubelet[2134]: I0213 05:13:27.966927 2134 container_manager_linux.go:302] "Creating device plugin manager" Feb 13 05:13:27.966998 kubelet[2134]: I0213 05:13:27.966977 2134 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
05:13:27.968273 kubelet[2134]: I0213 05:13:27.968237 2134 kubelet.go:405] "Attempting to sync node with API server" Feb 13 05:13:27.968273 kubelet[2134]: I0213 05:13:27.968247 2134 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 05:13:27.968273 kubelet[2134]: I0213 05:13:27.968258 2134 kubelet.go:309] "Adding apiserver pod source" Feb 13 05:13:27.968273 kubelet[2134]: I0213 05:13:27.968265 2134 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 05:13:27.968503 kubelet[2134]: I0213 05:13:27.968494 2134 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 05:13:27.968549 kubelet[2134]: W0213 05:13:27.968533 2134 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://147.75.49.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:27.968572 kubelet[2134]: E0213 05:13:27.968564 2134 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.49.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:27.968643 kubelet[2134]: W0213 05:13:27.968600 2134 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://147.75.49.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-69b5ddf616&limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:27.968643 kubelet[2134]: E0213 05:13:27.968631 2134 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.49.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-69b5ddf616&limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: 
connect: connection refused Feb 13 05:13:27.968643 kubelet[2134]: W0213 05:13:27.968643 2134 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 05:13:27.968884 kubelet[2134]: I0213 05:13:27.968850 2134 server.go:1168] "Started kubelet" Feb 13 05:13:27.968922 kubelet[2134]: I0213 05:13:27.968907 2134 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 13 05:13:27.968950 kubelet[2134]: I0213 05:13:27.968944 2134 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 05:13:27.969168 kubelet[2134]: E0213 05:13:27.969157 2134 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 05:13:27.969199 kubelet[2134]: E0213 05:13:27.969171 2134 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 05:13:27.969222 kubelet[2134]: E0213 05:13:27.969156 2134 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-69b5ddf616.17b354267bf73cbd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-69b5ddf616", UID:"ci-3510.3.2-a-69b5ddf616", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", 
Host:"ci-3510.3.2-a-69b5ddf616"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 13, 27, 968840893, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 13, 27, 968840893, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://147.75.49.59:6443/api/v1/namespaces/default/events": dial tcp 147.75.49.59:6443: connect: connection refused'(may retry after sleeping) Feb 13 05:13:27.969466 kubelet[2134]: I0213 05:13:27.969430 2134 server.go:461] "Adding debug handlers to kubelet server" Feb 13 05:13:27.978798 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 13 05:13:27.978833 kubelet[2134]: I0213 05:13:27.978820 2134 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 05:13:27.978878 kubelet[2134]: I0213 05:13:27.978867 2134 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 13 05:13:27.978911 kubelet[2134]: E0213 05:13:27.978884 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:27.978911 kubelet[2134]: I0213 05:13:27.978893 2134 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 13 05:13:27.979454 kubelet[2134]: E0213 05:13:27.979441 2134 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.49.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-69b5ddf616?timeout=10s\": dial tcp 147.75.49.59:6443: connect: connection refused" interval="200ms" Feb 13 05:13:27.980254 kubelet[2134]: W0213 05:13:27.980219 2134 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://147.75.49.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
147.75.49.59:6443: connect: connection refused Feb 13 05:13:27.980332 kubelet[2134]: E0213 05:13:27.980269 2134 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.49.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:27.986568 kubelet[2134]: I0213 05:13:27.986555 2134 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 05:13:27.987064 kubelet[2134]: I0213 05:13:27.987022 2134 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 13 05:13:27.987064 kubelet[2134]: I0213 05:13:27.987033 2134 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 13 05:13:27.987064 kubelet[2134]: I0213 05:13:27.987043 2134 kubelet.go:2257] "Starting kubelet main sync loop" Feb 13 05:13:27.987145 kubelet[2134]: E0213 05:13:27.987074 2134 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 05:13:27.987327 kubelet[2134]: W0213 05:13:27.987289 2134 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.49.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:27.987327 kubelet[2134]: E0213 05:13:27.987318 2134 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.49.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:28.022046 kubelet[2134]: I0213 05:13:28.021983 2134 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 05:13:28.022046 kubelet[2134]: I0213 05:13:28.022032 
2134 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 05:13:28.022377 kubelet[2134]: I0213 05:13:28.022072 2134 state_mem.go:36] "Initialized new in-memory state store" Feb 13 05:13:28.023559 kubelet[2134]: I0213 05:13:28.023484 2134 policy_none.go:49] "None policy: Start" Feb 13 05:13:28.024706 kubelet[2134]: I0213 05:13:28.024624 2134 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 05:13:28.024706 kubelet[2134]: I0213 05:13:28.024685 2134 state_mem.go:35] "Initializing new in-memory state store" Feb 13 05:13:28.034515 systemd[1]: Created slice kubepods.slice. Feb 13 05:13:28.044193 systemd[1]: Created slice kubepods-besteffort.slice. Feb 13 05:13:28.075226 systemd[1]: Created slice kubepods-burstable.slice. Feb 13 05:13:28.077944 kubelet[2134]: I0213 05:13:28.077855 2134 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 05:13:28.078474 kubelet[2134]: I0213 05:13:28.078391 2134 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 05:13:28.079295 kubelet[2134]: E0213 05:13:28.079217 2134 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:28.082676 kubelet[2134]: I0213 05:13:28.082619 2134 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.083407 kubelet[2134]: E0213 05:13:28.083322 2134 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.49.59:6443/api/v1/nodes\": dial tcp 147.75.49.59:6443: connect: connection refused" node="ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.087630 kubelet[2134]: I0213 05:13:28.087531 2134 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:13:28.090873 kubelet[2134]: I0213 05:13:28.090795 2134 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:13:28.094359 kubelet[2134]: I0213 05:13:28.094283 
2134 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:13:28.112120 systemd[1]: Created slice kubepods-burstable-pod4f7568ad2d910364c29f8afff4bcba79.slice. Feb 13 05:13:28.161541 systemd[1]: Created slice kubepods-burstable-pod5507ba218e5aca09c48937b82e95e692.slice. Feb 13 05:13:28.180380 kubelet[2134]: E0213 05:13:28.180296 2134 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.49.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-69b5ddf616?timeout=10s\": dial tcp 147.75.49.59:6443: connect: connection refused" interval="400ms" Feb 13 05:13:28.185190 systemd[1]: Created slice kubepods-burstable-pod05a477bbc5bc4d1db6c3032cfa370095.slice. Feb 13 05:13:28.279955 kubelet[2134]: I0213 05:13:28.279750 2134 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.279955 kubelet[2134]: I0213 05:13:28.279874 2134 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.279955 kubelet[2134]: I0213 05:13:28.279944 2134 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.280408 kubelet[2134]: I0213 05:13:28.280003 2134 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05a477bbc5bc4d1db6c3032cfa370095-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-69b5ddf616\" (UID: \"05a477bbc5bc4d1db6c3032cfa370095\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.280408 kubelet[2134]: I0213 05:13:28.280192 2134 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.280408 kubelet[2134]: I0213 05:13:28.280355 2134 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7568ad2d910364c29f8afff4bcba79-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-69b5ddf616\" (UID: \"4f7568ad2d910364c29f8afff4bcba79\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.280728 kubelet[2134]: I0213 05:13:28.280494 2134 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7568ad2d910364c29f8afff4bcba79-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-69b5ddf616\" (UID: \"4f7568ad2d910364c29f8afff4bcba79\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.280728 kubelet[2134]: I0213 05:13:28.280601 2134 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4f7568ad2d910364c29f8afff4bcba79-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-69b5ddf616\" (UID: \"4f7568ad2d910364c29f8afff4bcba79\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.280728 kubelet[2134]: I0213 05:13:28.280711 2134 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.289204 kubelet[2134]: I0213 05:13:28.289143 2134 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.290745 kubelet[2134]: E0213 05:13:28.290697 2134 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.49.59:6443/api/v1/nodes\": dial tcp 147.75.49.59:6443: connect: connection refused" node="ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.456577 env[1470]: time="2024-02-13T05:13:28.456447012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-69b5ddf616,Uid:4f7568ad2d910364c29f8afff4bcba79,Namespace:kube-system,Attempt:0,}" Feb 13 05:13:28.467606 env[1470]: time="2024-02-13T05:13:28.467494980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-69b5ddf616,Uid:5507ba218e5aca09c48937b82e95e692,Namespace:kube-system,Attempt:0,}" Feb 13 05:13:28.490626 env[1470]: time="2024-02-13T05:13:28.490509993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-69b5ddf616,Uid:05a477bbc5bc4d1db6c3032cfa370095,Namespace:kube-system,Attempt:0,}" Feb 13 05:13:28.582198 kubelet[2134]: E0213 05:13:28.581986 2134 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://147.75.49.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-69b5ddf616?timeout=10s\": dial tcp 147.75.49.59:6443: connect: connection refused" interval="800ms" Feb 13 05:13:28.695037 kubelet[2134]: I0213 05:13:28.694979 2134 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.695697 kubelet[2134]: E0213 05:13:28.695658 2134 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.49.59:6443/api/v1/nodes\": dial tcp 147.75.49.59:6443: connect: connection refused" node="ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:28.858172 kubelet[2134]: W0213 05:13:28.858010 2134 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://147.75.49.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:28.858172 kubelet[2134]: E0213 05:13:28.858147 2134 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.49.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:28.957027 kubelet[2134]: W0213 05:13:28.956903 2134 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://147.75.49.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-69b5ddf616&limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:28.957027 kubelet[2134]: E0213 05:13:28.957043 2134 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.49.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-69b5ddf616&limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection 
refused Feb 13 05:13:28.978574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239073166.mount: Deactivated successfully. Feb 13 05:13:28.979817 env[1470]: time="2024-02-13T05:13:28.979780223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.979867 kubelet[2134]: W0213 05:13:28.979842 2134 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.49.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:28.979867 kubelet[2134]: E0213 05:13:28.979867 2134 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.49.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:28.980871 env[1470]: time="2024-02-13T05:13:28.980854947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.981515 env[1470]: time="2024-02-13T05:13:28.981504381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.982125 env[1470]: time="2024-02-13T05:13:28.982115650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.982508 env[1470]: time="2024-02-13T05:13:28.982494188Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.984793 env[1470]: time="2024-02-13T05:13:28.984779755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.986752 env[1470]: time="2024-02-13T05:13:28.986709575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.987181 env[1470]: time="2024-02-13T05:13:28.987171921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.988388 env[1470]: time="2024-02-13T05:13:28.988376637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.989324 env[1470]: time="2024-02-13T05:13:28.989311875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.989779 env[1470]: time="2024-02-13T05:13:28.989769415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.990130 env[1470]: time="2024-02-13T05:13:28.990121865Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:13:28.995198 env[1470]: time="2024-02-13T05:13:28.995167389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 05:13:28.995198 env[1470]: time="2024-02-13T05:13:28.995187066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 05:13:28.995198 env[1470]: time="2024-02-13T05:13:28.995198642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 05:13:28.995349 env[1470]: time="2024-02-13T05:13:28.995257406Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b7527d9ba3014fc77ce5649bfed31f6b44d57718684d7043c2f6d62f8c9ebcb pid=2191 runtime=io.containerd.runc.v2 Feb 13 05:13:28.995692 env[1470]: time="2024-02-13T05:13:28.995669103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 05:13:28.995692 env[1470]: time="2024-02-13T05:13:28.995686397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 05:13:28.995766 env[1470]: time="2024-02-13T05:13:28.995693677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 05:13:28.995798 env[1470]: time="2024-02-13T05:13:28.995758063Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a2fe45ad39ebdc2c513196601f30c94bc1abed9dd09458eea19dde27c07c505 pid=2192 runtime=io.containerd.runc.v2 Feb 13 05:13:28.997371 env[1470]: time="2024-02-13T05:13:28.997334794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 05:13:28.997371 env[1470]: time="2024-02-13T05:13:28.997355815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 05:13:28.997371 env[1470]: time="2024-02-13T05:13:28.997362959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 05:13:28.997487 env[1470]: time="2024-02-13T05:13:28.997434140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e12ca535b2e37775e6616e9edba7b0c15aa7439b35694a779815dd800f7fadde pid=2218 runtime=io.containerd.runc.v2 Feb 13 05:13:29.002389 systemd[1]: Started cri-containerd-2b7527d9ba3014fc77ce5649bfed31f6b44d57718684d7043c2f6d62f8c9ebcb.scope. Feb 13 05:13:29.003618 systemd[1]: Started cri-containerd-4a2fe45ad39ebdc2c513196601f30c94bc1abed9dd09458eea19dde27c07c505.scope. Feb 13 05:13:29.005724 systemd[1]: Started cri-containerd-e12ca535b2e37775e6616e9edba7b0c15aa7439b35694a779815dd800f7fadde.scope. 
Feb 13 05:13:29.028229 env[1470]: time="2024-02-13T05:13:29.028177150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-69b5ddf616,Uid:4f7568ad2d910364c29f8afff4bcba79,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a2fe45ad39ebdc2c513196601f30c94bc1abed9dd09458eea19dde27c07c505\"" Feb 13 05:13:29.028855 env[1470]: time="2024-02-13T05:13:29.028833851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-69b5ddf616,Uid:05a477bbc5bc4d1db6c3032cfa370095,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b7527d9ba3014fc77ce5649bfed31f6b44d57718684d7043c2f6d62f8c9ebcb\"" Feb 13 05:13:29.030233 env[1470]: time="2024-02-13T05:13:29.030215356Z" level=info msg="CreateContainer within sandbox \"2b7527d9ba3014fc77ce5649bfed31f6b44d57718684d7043c2f6d62f8c9ebcb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 05:13:29.030305 env[1470]: time="2024-02-13T05:13:29.030284000Z" level=info msg="CreateContainer within sandbox \"4a2fe45ad39ebdc2c513196601f30c94bc1abed9dd09458eea19dde27c07c505\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 05:13:29.030339 env[1470]: time="2024-02-13T05:13:29.030311845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-69b5ddf616,Uid:5507ba218e5aca09c48937b82e95e692,Namespace:kube-system,Attempt:0,} returns sandbox id \"e12ca535b2e37775e6616e9edba7b0c15aa7439b35694a779815dd800f7fadde\"" Feb 13 05:13:29.031369 env[1470]: time="2024-02-13T05:13:29.031356464Z" level=info msg="CreateContainer within sandbox \"e12ca535b2e37775e6616e9edba7b0c15aa7439b35694a779815dd800f7fadde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 05:13:29.036525 env[1470]: time="2024-02-13T05:13:29.036477973Z" level=info msg="CreateContainer within sandbox \"2b7527d9ba3014fc77ce5649bfed31f6b44d57718684d7043c2f6d62f8c9ebcb\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"08032c96f1dd20d5b4b99232b7eb2fe4be6c89ca3d113a2fd28c8a9e7b0a5506\"" Feb 13 05:13:29.036762 kubelet[2134]: W0213 05:13:29.036708 2134 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://147.75.49.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:29.036762 kubelet[2134]: E0213 05:13:29.036740 2134 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.49.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.49.59:6443: connect: connection refused Feb 13 05:13:29.036836 env[1470]: time="2024-02-13T05:13:29.036757527Z" level=info msg="StartContainer for \"08032c96f1dd20d5b4b99232b7eb2fe4be6c89ca3d113a2fd28c8a9e7b0a5506\"" Feb 13 05:13:29.037927 env[1470]: time="2024-02-13T05:13:29.037909772Z" level=info msg="CreateContainer within sandbox \"4a2fe45ad39ebdc2c513196601f30c94bc1abed9dd09458eea19dde27c07c505\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64b2ed6e910a5c1f6a3a2b8503e99ad336f9ff07a490e4be0c2391594cc0b141\"" Feb 13 05:13:29.038146 env[1470]: time="2024-02-13T05:13:29.038130818Z" level=info msg="StartContainer for \"64b2ed6e910a5c1f6a3a2b8503e99ad336f9ff07a490e4be0c2391594cc0b141\"" Feb 13 05:13:29.039452 env[1470]: time="2024-02-13T05:13:29.039436257Z" level=info msg="CreateContainer within sandbox \"e12ca535b2e37775e6616e9edba7b0c15aa7439b35694a779815dd800f7fadde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"956cc83a99d3e3f54453ba036b69b3781126d5ca3012284a8b30b05e65a44737\"" Feb 13 05:13:29.039604 env[1470]: time="2024-02-13T05:13:29.039593106Z" level=info msg="StartContainer for \"956cc83a99d3e3f54453ba036b69b3781126d5ca3012284a8b30b05e65a44737\"" Feb 13 05:13:29.046475 
systemd[1]: Started cri-containerd-08032c96f1dd20d5b4b99232b7eb2fe4be6c89ca3d113a2fd28c8a9e7b0a5506.scope. Feb 13 05:13:29.047358 systemd[1]: Started cri-containerd-64b2ed6e910a5c1f6a3a2b8503e99ad336f9ff07a490e4be0c2391594cc0b141.scope. Feb 13 05:13:29.049245 systemd[1]: Started cri-containerd-956cc83a99d3e3f54453ba036b69b3781126d5ca3012284a8b30b05e65a44737.scope. Feb 13 05:13:29.073243 env[1470]: time="2024-02-13T05:13:29.073206626Z" level=info msg="StartContainer for \"08032c96f1dd20d5b4b99232b7eb2fe4be6c89ca3d113a2fd28c8a9e7b0a5506\" returns successfully" Feb 13 05:13:29.074634 env[1470]: time="2024-02-13T05:13:29.074617220Z" level=info msg="StartContainer for \"64b2ed6e910a5c1f6a3a2b8503e99ad336f9ff07a490e4be0c2391594cc0b141\" returns successfully" Feb 13 05:13:29.076324 env[1470]: time="2024-02-13T05:13:29.076307547Z" level=info msg="StartContainer for \"956cc83a99d3e3f54453ba036b69b3781126d5ca3012284a8b30b05e65a44737\" returns successfully" Feb 13 05:13:29.498489 kubelet[2134]: I0213 05:13:29.498458 2134 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:29.763515 kubelet[2134]: E0213 05:13:29.763463 2134 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-69b5ddf616\" not found" node="ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:29.862981 kubelet[2134]: I0213 05:13:29.862961 2134 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-69b5ddf616" Feb 13 05:13:29.868470 kubelet[2134]: E0213 05:13:29.868457 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:29.969392 kubelet[2134]: E0213 05:13:29.969316 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.069873 kubelet[2134]: E0213 05:13:30.069698 2134 kubelet_node_status.go:458] "Error getting the current 
node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.170202 kubelet[2134]: E0213 05:13:30.170104 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.270972 kubelet[2134]: E0213 05:13:30.270875 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.371209 kubelet[2134]: E0213 05:13:30.371120 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.471953 kubelet[2134]: E0213 05:13:30.471846 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.572853 kubelet[2134]: E0213 05:13:30.572761 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.673212 kubelet[2134]: E0213 05:13:30.673006 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.773854 kubelet[2134]: E0213 05:13:30.773784 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.874245 kubelet[2134]: E0213 05:13:30.874149 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:30.974833 kubelet[2134]: E0213 05:13:30.974646 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:31.074830 kubelet[2134]: E0213 05:13:31.074745 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:31.175827 kubelet[2134]: E0213 
05:13:31.175737 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:31.276780 kubelet[2134]: E0213 05:13:31.276640 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:31.377395 kubelet[2134]: E0213 05:13:31.377317 2134 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-69b5ddf616\" not found" Feb 13 05:13:31.970463 kubelet[2134]: I0213 05:13:31.970365 2134 apiserver.go:52] "Watching apiserver" Feb 13 05:13:31.980015 kubelet[2134]: I0213 05:13:31.979942 2134 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 13 05:13:32.005765 kubelet[2134]: I0213 05:13:32.005680 2134 reconciler.go:41] "Reconciler: start to sync state" Feb 13 05:13:32.020261 kubelet[2134]: W0213 05:13:32.020214 2134 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 05:13:32.912837 systemd[1]: Reloading. Feb 13 05:13:32.949711 /usr/lib/systemd/system-generators/torcx-generator[2469]: time="2024-02-13T05:13:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 05:13:32.949733 /usr/lib/systemd/system-generators/torcx-generator[2469]: time="2024-02-13T05:13:32Z" level=info msg="torcx already run" Feb 13 05:13:33.011127 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 13 05:13:33.011139 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 05:13:33.026920 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 05:13:33.094906 kubelet[2134]: I0213 05:13:33.094893 2134 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 05:13:33.094919 systemd[1]: Stopping kubelet.service... Feb 13 05:13:33.112012 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 05:13:33.112114 systemd[1]: Stopped kubelet.service. Feb 13 05:13:33.112977 systemd[1]: Started kubelet.service. Feb 13 05:13:33.135367 kubelet[2531]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 05:13:33.135367 kubelet[2531]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 05:13:33.135367 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 05:13:33.135620 kubelet[2531]: I0213 05:13:33.135359 2531 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 05:13:33.138840 kubelet[2531]: I0213 05:13:33.138794 2531 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 13 05:13:33.138840 kubelet[2531]: I0213 05:13:33.138808 2531 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 05:13:33.138974 kubelet[2531]: I0213 05:13:33.138931 2531 server.go:837] "Client rotation is on, will bootstrap in background" Feb 13 05:13:33.139844 kubelet[2531]: I0213 05:13:33.139813 2531 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 05:13:33.140363 kubelet[2531]: I0213 05:13:33.140353 2531 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 05:13:33.157174 kubelet[2531]: I0213 05:13:33.157138 2531 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 05:13:33.157284 kubelet[2531]: I0213 05:13:33.157249 2531 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 05:13:33.157325 kubelet[2531]: I0213 05:13:33.157293 2531 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 05:13:33.157325 kubelet[2531]: I0213 05:13:33.157305 2531 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 05:13:33.157325 kubelet[2531]: I0213 05:13:33.157311 2531 container_manager_linux.go:302] "Creating device plugin manager" Feb 13 05:13:33.157422 kubelet[2531]: I0213 05:13:33.157326 2531 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
05:13:33.158654 kubelet[2531]: I0213 05:13:33.158618 2531 kubelet.go:405] "Attempting to sync node with API server"
Feb 13 05:13:33.158654 kubelet[2531]: I0213 05:13:33.158629 2531 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 05:13:33.158654 kubelet[2531]: I0213 05:13:33.158639 2531 kubelet.go:309] "Adding apiserver pod source"
Feb 13 05:13:33.158654 kubelet[2531]: I0213 05:13:33.158646 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 05:13:33.158916 kubelet[2531]: I0213 05:13:33.158869 2531 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 13 05:13:33.159485 kubelet[2531]: I0213 05:13:33.159473 2531 server.go:1168] "Started kubelet"
Feb 13 05:13:33.159739 kubelet[2531]: I0213 05:13:33.159726 2531 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 13 05:13:33.159793 kubelet[2531]: I0213 05:13:33.159779 2531 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 05:13:33.160185 kubelet[2531]: E0213 05:13:33.160173 2531 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 13 05:13:33.160228 kubelet[2531]: E0213 05:13:33.160203 2531 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 05:13:33.160788 kubelet[2531]: I0213 05:13:33.160778 2531 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 05:13:33.160994 kubelet[2531]: I0213 05:13:33.160986 2531 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 05:13:33.161044 kubelet[2531]: I0213 05:13:33.161032 2531 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 13 05:13:33.161090 kubelet[2531]: I0213 05:13:33.161082 2531 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 13 05:13:33.166616 kubelet[2531]: I0213 05:13:33.166541 2531 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 13 05:13:33.167099 kubelet[2531]: I0213 05:13:33.167087 2531 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 13 05:13:33.167161 kubelet[2531]: I0213 05:13:33.167107 2531 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 13 05:13:33.167161 kubelet[2531]: I0213 05:13:33.167125 2531 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 13 05:13:33.167215 kubelet[2531]: E0213 05:13:33.167171 2531 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 05:13:33.181468 kubelet[2531]: I0213 05:13:33.181421 2531 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 05:13:33.181468 kubelet[2531]: I0213 05:13:33.181433 2531 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 05:13:33.181468 kubelet[2531]: I0213 05:13:33.181445 2531 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 05:13:33.181594 kubelet[2531]: I0213 05:13:33.181539 2531 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 05:13:33.181594 kubelet[2531]: I0213 05:13:33.181547 2531 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 13 05:13:33.181594 kubelet[2531]: I0213 05:13:33.181551 2531 policy_none.go:49] "None policy: Start"
Feb 13 05:13:33.182311 kubelet[2531]: I0213 05:13:33.182295 2531 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 13 05:13:33.182382 kubelet[2531]: I0213 05:13:33.182367 2531 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 05:13:33.182497 kubelet[2531]: I0213 05:13:33.182489 2531 state_mem.go:75] "Updated machine memory state"
Feb 13 05:13:33.184476 kubelet[2531]: I0213 05:13:33.184468 2531 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 05:13:33.184606 kubelet[2531]: I0213 05:13:33.184599 2531 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 05:13:33.267405 kubelet[2531]: I0213 05:13:33.267299 2531 topology_manager.go:212] "Topology Admit Handler"
Feb 13 05:13:33.267721 kubelet[2531]: I0213 05:13:33.267470 2531 topology_manager.go:212] "Topology Admit Handler"
Feb 13 05:13:33.267721 kubelet[2531]: I0213 05:13:33.267561 2531 topology_manager.go:212] "Topology Admit Handler"
Feb 13 05:13:33.268264 kubelet[2531]: I0213 05:13:33.268213 2531 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.276119 kubelet[2531]: W0213 05:13:33.276052 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 05:13:33.277469 kubelet[2531]: W0213 05:13:33.277416 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 05:13:33.278726 kubelet[2531]: W0213 05:13:33.278643 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 05:13:33.278997 kubelet[2531]: E0213 05:13:33.278781 2531 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-69b5ddf616\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.283099 kubelet[2531]: I0213 05:13:33.283008 2531 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.283311 kubelet[2531]: I0213 05:13:33.283213 2531 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.292841 sudo[2572]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 05:13:33.293722 sudo[2572]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 13 05:13:33.362560 kubelet[2531]: I0213 05:13:33.362513 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.362560 kubelet[2531]: I0213 05:13:33.362540 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.362560 kubelet[2531]: I0213 05:13:33.362559 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.362760 kubelet[2531]: I0213 05:13:33.362592 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7568ad2d910364c29f8afff4bcba79-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-69b5ddf616\" (UID: \"4f7568ad2d910364c29f8afff4bcba79\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.362760 kubelet[2531]: I0213 05:13:33.362636 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.362760 kubelet[2531]: I0213 05:13:33.362677 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5507ba218e5aca09c48937b82e95e692-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" (UID: \"5507ba218e5aca09c48937b82e95e692\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.362760 kubelet[2531]: I0213 05:13:33.362711 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05a477bbc5bc4d1db6c3032cfa370095-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-69b5ddf616\" (UID: \"05a477bbc5bc4d1db6c3032cfa370095\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.362760 kubelet[2531]: I0213 05:13:33.362731 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7568ad2d910364c29f8afff4bcba79-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-69b5ddf616\" (UID: \"4f7568ad2d910364c29f8afff4bcba79\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.362878 kubelet[2531]: I0213 05:13:33.362750 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f7568ad2d910364c29f8afff4bcba79-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-69b5ddf616\" (UID: \"4f7568ad2d910364c29f8afff4bcba79\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:33.670612 sudo[2572]: pam_unix(sudo:session): session closed for user root
Feb 13 05:13:34.159651 kubelet[2531]: I0213 05:13:34.159566 2531 apiserver.go:52] "Watching apiserver"
Feb 13 05:13:34.181392 kubelet[2531]: W0213 05:13:34.181334 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 05:13:34.181736 kubelet[2531]: W0213 05:13:34.181408 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 05:13:34.181736 kubelet[2531]: E0213 05:13:34.181502 2531 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-69b5ddf616\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:34.181736 kubelet[2531]: E0213 05:13:34.181532 2531 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-69b5ddf616\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616"
Feb 13 05:13:34.218837 kubelet[2531]: I0213 05:13:34.218797 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-69b5ddf616" podStartSLOduration=2.218727108 podCreationTimestamp="2024-02-13 05:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 05:13:34.209520242 +0000 UTC m=+1.094921312" watchObservedRunningTime="2024-02-13 05:13:34.218727108 +0000 UTC m=+1.104128166"
Feb 13 05:13:34.225304 kubelet[2531]: I0213 05:13:34.225256 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-69b5ddf616" podStartSLOduration=1.225215864 podCreationTimestamp="2024-02-13 05:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 05:13:34.218904678 +0000 UTC m=+1.104305726" watchObservedRunningTime="2024-02-13 05:13:34.225215864 +0000 UTC m=+1.110616914"
Feb 13 05:13:34.225450 kubelet[2531]: I0213 05:13:34.225347 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-69b5ddf616" podStartSLOduration=1.225319382 podCreationTimestamp="2024-02-13 05:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 05:13:34.22512163 +0000 UTC m=+1.110522680" watchObservedRunningTime="2024-02-13 05:13:34.225319382 +0000 UTC m=+1.110720429"
Feb 13 05:13:34.262861 kubelet[2531]: I0213 05:13:34.262794 2531 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 13 05:13:34.268668 kubelet[2531]: I0213 05:13:34.268598 2531 reconciler.go:41] "Reconciler: start to sync state"
Feb 13 05:13:34.508057 sudo[1591]: pam_unix(sudo:session): session closed for user root
Feb 13 05:13:34.508948 sshd[1588]: pam_unix(sshd:session): session closed for user core
Feb 13 05:13:34.510226 systemd[1]: sshd@4-147.75.49.59:22-139.178.68.195:55268.service: Deactivated successfully.
Feb 13 05:13:34.510685 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 05:13:34.510778 systemd[1]: session-7.scope: Consumed 3.130s CPU time.
Feb 13 05:13:34.511157 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit.
Feb 13 05:13:34.511676 systemd-logind[1458]: Removed session 7.
Feb 13 05:13:44.942174 update_engine[1460]: I0213 05:13:44.942059 1460 update_attempter.cc:509] Updating boot flags...
Feb 13 05:13:46.238444 kubelet[2531]: I0213 05:13:46.238397 2531 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 05:13:46.239387 env[1470]: time="2024-02-13T05:13:46.239291045Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 05:13:46.240022 kubelet[2531]: I0213 05:13:46.239786 2531 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 05:13:47.107781 kubelet[2531]: I0213 05:13:47.107721 2531 topology_manager.go:212] "Topology Admit Handler"
Feb 13 05:13:47.113059 kubelet[2531]: I0213 05:13:47.113028 2531 topology_manager.go:212] "Topology Admit Handler"
Feb 13 05:13:47.116445 systemd[1]: Created slice kubepods-besteffort-pod7636bedd_74dd_4618_b0e6_d3ab04ecc479.slice.
Feb 13 05:13:47.132202 systemd[1]: Created slice kubepods-burstable-podbf32717c_6e57_49a3_aac5_0f3e83a5ac1a.slice.
Feb 13 05:13:47.157868 kubelet[2531]: I0213 05:13:47.157821 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl5l7\" (UniqueName: \"kubernetes.io/projected/7636bedd-74dd-4618-b0e6-d3ab04ecc479-kube-api-access-tl5l7\") pod \"kube-proxy-2fsjt\" (UID: \"7636bedd-74dd-4618-b0e6-d3ab04ecc479\") " pod="kube-system/kube-proxy-2fsjt"
Feb 13 05:13:47.157868 kubelet[2531]: I0213 05:13:47.157846 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-cgroup\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.157868 kubelet[2531]: I0213 05:13:47.157864 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-config-path\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158016 kubelet[2531]: I0213 05:13:47.157879 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7636bedd-74dd-4618-b0e6-d3ab04ecc479-lib-modules\") pod \"kube-proxy-2fsjt\" (UID: \"7636bedd-74dd-4618-b0e6-d3ab04ecc479\") " pod="kube-system/kube-proxy-2fsjt"
Feb 13 05:13:47.158016 kubelet[2531]: I0213 05:13:47.157916 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-run\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158016 kubelet[2531]: I0213 05:13:47.157953 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cni-path\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158016 kubelet[2531]: I0213 05:13:47.157978 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-bpf-maps\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158016 kubelet[2531]: I0213 05:13:47.158000 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-host-proc-sys-net\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158016 kubelet[2531]: I0213 05:13:47.158013 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-lib-modules\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158139 kubelet[2531]: I0213 05:13:47.158037 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-clustermesh-secrets\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158139 kubelet[2531]: I0213 05:13:47.158055 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-host-proc-sys-kernel\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158139 kubelet[2531]: I0213 05:13:47.158082 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-etc-cni-netd\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158139 kubelet[2531]: I0213 05:13:47.158107 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7636bedd-74dd-4618-b0e6-d3ab04ecc479-kube-proxy\") pod \"kube-proxy-2fsjt\" (UID: \"7636bedd-74dd-4618-b0e6-d3ab04ecc479\") " pod="kube-system/kube-proxy-2fsjt"
Feb 13 05:13:47.158139 kubelet[2531]: I0213 05:13:47.158124 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7636bedd-74dd-4618-b0e6-d3ab04ecc479-xtables-lock\") pod \"kube-proxy-2fsjt\" (UID: \"7636bedd-74dd-4618-b0e6-d3ab04ecc479\") " pod="kube-system/kube-proxy-2fsjt"
Feb 13 05:13:47.158139 kubelet[2531]: I0213 05:13:47.158137 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-hostproc\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158259 kubelet[2531]: I0213 05:13:47.158153 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-hubble-tls\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158259 kubelet[2531]: I0213 05:13:47.158166 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-xtables-lock\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.158259 kubelet[2531]: I0213 05:13:47.158199 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2htk\" (UniqueName: \"kubernetes.io/projected/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-kube-api-access-r2htk\") pod \"cilium-fdt2q\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") " pod="kube-system/cilium-fdt2q"
Feb 13 05:13:47.250979 kubelet[2531]: I0213 05:13:47.250915 2531 topology_manager.go:212] "Topology Admit Handler"
Feb 13 05:13:47.265367 systemd[1]: Created slice kubepods-besteffort-podf4a16a98_dda8_4c2a_bb92_0e54f199f0eb.slice.
Feb 13 05:13:47.360958 kubelet[2531]: I0213 05:13:47.360741 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb-cilium-config-path\") pod \"cilium-operator-574c4bb98d-bggkk\" (UID: \"f4a16a98-dda8-4c2a-bb92-0e54f199f0eb\") " pod="kube-system/cilium-operator-574c4bb98d-bggkk"
Feb 13 05:13:47.360958 kubelet[2531]: I0213 05:13:47.360860 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shccd\" (UniqueName: \"kubernetes.io/projected/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb-kube-api-access-shccd\") pod \"cilium-operator-574c4bb98d-bggkk\" (UID: \"f4a16a98-dda8-4c2a-bb92-0e54f199f0eb\") " pod="kube-system/cilium-operator-574c4bb98d-bggkk"
Feb 13 05:13:47.431305 env[1470]: time="2024-02-13T05:13:47.431176456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2fsjt,Uid:7636bedd-74dd-4618-b0e6-d3ab04ecc479,Namespace:kube-system,Attempt:0,}"
Feb 13 05:13:47.446547 env[1470]: time="2024-02-13T05:13:47.446465330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdt2q,Uid:bf32717c-6e57-49a3-aac5-0f3e83a5ac1a,Namespace:kube-system,Attempt:0,}"
Feb 13 05:13:47.457555 env[1470]: time="2024-02-13T05:13:47.457385369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 05:13:47.457555 env[1470]: time="2024-02-13T05:13:47.457492098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 05:13:47.458041 env[1470]: time="2024-02-13T05:13:47.457556061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 05:13:47.458041 env[1470]: time="2024-02-13T05:13:47.457953169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d95c6d0299eb326fd9dcfd2fc4a5a0564103137183678aaa05124da06266da5b pid=2701 runtime=io.containerd.runc.v2
Feb 13 05:13:47.471680 env[1470]: time="2024-02-13T05:13:47.469080004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 05:13:47.471680 env[1470]: time="2024-02-13T05:13:47.469233578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 05:13:47.471680 env[1470]: time="2024-02-13T05:13:47.469300507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 05:13:47.472128 env[1470]: time="2024-02-13T05:13:47.471751201Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39 pid=2718 runtime=io.containerd.runc.v2
Feb 13 05:13:47.479693 systemd[1]: Started cri-containerd-d95c6d0299eb326fd9dcfd2fc4a5a0564103137183678aaa05124da06266da5b.scope.
Feb 13 05:13:47.485906 systemd[1]: Started cri-containerd-7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39.scope.
Feb 13 05:13:47.498438 env[1470]: time="2024-02-13T05:13:47.498401739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2fsjt,Uid:7636bedd-74dd-4618-b0e6-d3ab04ecc479,Namespace:kube-system,Attempt:0,} returns sandbox id \"d95c6d0299eb326fd9dcfd2fc4a5a0564103137183678aaa05124da06266da5b\""
Feb 13 05:13:47.500321 env[1470]: time="2024-02-13T05:13:47.500289158Z" level=info msg="CreateContainer within sandbox \"d95c6d0299eb326fd9dcfd2fc4a5a0564103137183678aaa05124da06266da5b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 05:13:47.503231 env[1470]: time="2024-02-13T05:13:47.503205252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdt2q,Uid:bf32717c-6e57-49a3-aac5-0f3e83a5ac1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\""
Feb 13 05:13:47.504315 env[1470]: time="2024-02-13T05:13:47.504290859Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 05:13:47.507912 env[1470]: time="2024-02-13T05:13:47.507856853Z" level=info msg="CreateContainer within sandbox \"d95c6d0299eb326fd9dcfd2fc4a5a0564103137183678aaa05124da06266da5b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"221f58f59ca2774cc530c1aabf22d11c5ff74dbec570f62952f7e21a83176fba\""
Feb 13 05:13:47.508266 env[1470]: time="2024-02-13T05:13:47.508242140Z" level=info msg="StartContainer for \"221f58f59ca2774cc530c1aabf22d11c5ff74dbec570f62952f7e21a83176fba\""
Feb 13 05:13:47.520016 systemd[1]: Started cri-containerd-221f58f59ca2774cc530c1aabf22d11c5ff74dbec570f62952f7e21a83176fba.scope.
Feb 13 05:13:47.540774 env[1470]: time="2024-02-13T05:13:47.540704252Z" level=info msg="StartContainer for \"221f58f59ca2774cc530c1aabf22d11c5ff74dbec570f62952f7e21a83176fba\" returns successfully"
Feb 13 05:13:47.577893 env[1470]: time="2024-02-13T05:13:47.577823415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-bggkk,Uid:f4a16a98-dda8-4c2a-bb92-0e54f199f0eb,Namespace:kube-system,Attempt:0,}"
Feb 13 05:13:47.586702 env[1470]: time="2024-02-13T05:13:47.586615021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 05:13:47.586702 env[1470]: time="2024-02-13T05:13:47.586653865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 05:13:47.586702 env[1470]: time="2024-02-13T05:13:47.586672641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 05:13:47.586872 env[1470]: time="2024-02-13T05:13:47.586793475Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84 pid=2853 runtime=io.containerd.runc.v2
Feb 13 05:13:47.595667 systemd[1]: Started cri-containerd-1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84.scope.
Feb 13 05:13:47.630062 env[1470]: time="2024-02-13T05:13:47.629979143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-bggkk,Uid:f4a16a98-dda8-4c2a-bb92-0e54f199f0eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\""
Feb 13 05:13:48.232289 kubelet[2531]: I0213 05:13:48.232189 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2fsjt" podStartSLOduration=1.232100556 podCreationTimestamp="2024-02-13 05:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 05:13:48.231559748 +0000 UTC m=+15.116960854" watchObservedRunningTime="2024-02-13 05:13:48.232100556 +0000 UTC m=+15.117501641"
Feb 13 05:13:50.890178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3562570553.mount: Deactivated successfully.
Feb 13 05:13:52.568832 env[1470]: time="2024-02-13T05:13:52.568788340Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:13:52.569411 env[1470]: time="2024-02-13T05:13:52.569371112Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:13:52.570352 env[1470]: time="2024-02-13T05:13:52.570308161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:13:52.571206 env[1470]: time="2024-02-13T05:13:52.571165043Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 05:13:52.571573 env[1470]: time="2024-02-13T05:13:52.571560610Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 05:13:52.572224 env[1470]: time="2024-02-13T05:13:52.572182780Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 05:13:52.576908 env[1470]: time="2024-02-13T05:13:52.576872622Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\""
Feb 13 05:13:52.577258 env[1470]: time="2024-02-13T05:13:52.577242932Z" level=info msg="StartContainer for \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\""
Feb 13 05:13:52.587433 systemd[1]: Started cri-containerd-2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f.scope.
Feb 13 05:13:52.599115 env[1470]: time="2024-02-13T05:13:52.599091265Z" level=info msg="StartContainer for \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\" returns successfully"
Feb 13 05:13:52.604516 systemd[1]: cri-containerd-2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f.scope: Deactivated successfully.
Feb 13 05:13:53.580846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f-rootfs.mount: Deactivated successfully.
Feb 13 05:13:53.701053 env[1470]: time="2024-02-13T05:13:53.700909571Z" level=info msg="shim disconnected" id=2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f
Feb 13 05:13:53.701053 env[1470]: time="2024-02-13T05:13:53.701016238Z" level=warning msg="cleaning up after shim disconnected" id=2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f namespace=k8s.io
Feb 13 05:13:53.701053 env[1470]: time="2024-02-13T05:13:53.701043250Z" level=info msg="cleaning up dead shim"
Feb 13 05:13:53.716571 env[1470]: time="2024-02-13T05:13:53.716469024Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:13:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3030 runtime=io.containerd.runc.v2\n"
Feb 13 05:13:54.227919 env[1470]: time="2024-02-13T05:13:54.227897492Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 05:13:54.260450 env[1470]: time="2024-02-13T05:13:54.260423941Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\""
Feb 13 05:13:54.260849 env[1470]: time="2024-02-13T05:13:54.260804007Z" level=info msg="StartContainer for \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\""
Feb 13 05:13:54.261103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581193785.mount: Deactivated successfully.
Feb 13 05:13:54.269279 systemd[1]: Started cri-containerd-8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe.scope.
Feb 13 05:13:54.280429 env[1470]: time="2024-02-13T05:13:54.280405126Z" level=info msg="StartContainer for \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\" returns successfully"
Feb 13 05:13:54.287219 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 05:13:54.287373 systemd[1]: Stopped systemd-sysctl.service.
Feb 13 05:13:54.287536 systemd[1]: Stopping systemd-sysctl.service...
Feb 13 05:13:54.288458 systemd[1]: Starting systemd-sysctl.service...
Feb 13 05:13:54.288677 systemd[1]: cri-containerd-8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe.scope: Deactivated successfully.
Feb 13 05:13:54.292939 systemd[1]: Finished systemd-sysctl.service.
Feb 13 05:13:54.320090 env[1470]: time="2024-02-13T05:13:54.320009580Z" level=info msg="shim disconnected" id=8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe
Feb 13 05:13:54.320090 env[1470]: time="2024-02-13T05:13:54.320041628Z" level=warning msg="cleaning up after shim disconnected" id=8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe namespace=k8s.io
Feb 13 05:13:54.320090 env[1470]: time="2024-02-13T05:13:54.320049750Z" level=info msg="cleaning up dead shim"
Feb 13 05:13:54.325193 env[1470]: time="2024-02-13T05:13:54.325145292Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:13:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3092 runtime=io.containerd.runc.v2\n"
Feb 13 05:13:54.576515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe-rootfs.mount: Deactivated successfully.
Feb 13 05:13:54.825225 env[1470]: time="2024-02-13T05:13:54.825136051Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:13:54.825845 env[1470]: time="2024-02-13T05:13:54.825795005Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:13:54.826869 env[1470]: time="2024-02-13T05:13:54.826818472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:13:54.827076 env[1470]: time="2024-02-13T05:13:54.827060126Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 05:13:54.828121 env[1470]: time="2024-02-13T05:13:54.828106185Z" level=info msg="CreateContainer within sandbox \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 05:13:54.833094 env[1470]: time="2024-02-13T05:13:54.833047212Z" level=info msg="CreateContainer within sandbox \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\""
Feb 13 05:13:54.833422 env[1470]: time="2024-02-13T05:13:54.833409125Z" level=info msg="StartContainer for \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\""
Feb 13 05:13:54.833962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882236656.mount: Deactivated successfully.
Feb 13 05:13:54.842324 systemd[1]: Started cri-containerd-e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc.scope.
Feb 13 05:13:54.854898 env[1470]: time="2024-02-13T05:13:54.854865778Z" level=info msg="StartContainer for \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\" returns successfully"
Feb 13 05:13:55.230550 env[1470]: time="2024-02-13T05:13:55.230509114Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 05:13:55.236759 kubelet[2531]: I0213 05:13:55.236740 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-bggkk" podStartSLOduration=1.040201844 podCreationTimestamp="2024-02-13 05:13:47 +0000 UTC" firstStartedPulling="2024-02-13 05:13:47.630702546 +0000 UTC m=+14.516103588" lastFinishedPulling="2024-02-13 05:13:54.827204688 +0000 UTC m=+21.712605725" observedRunningTime="2024-02-13 05:13:55.236310903 +0000 UTC m=+22.121711940" watchObservedRunningTime="2024-02-13 05:13:55.236703981 +0000 UTC m=+22.122105014"
Feb 13 05:13:55.237301 env[1470]: time="2024-02-13T05:13:55.237279688Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\""
Feb 13 05:13:55.237670 env[1470]: time="2024-02-13T05:13:55.237600064Z" level=info msg="StartContainer for \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\""
Feb 13 05:13:55.250479 systemd[1]: Started cri-containerd-a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e.scope.
Feb 13 05:13:55.275300 env[1470]: time="2024-02-13T05:13:55.275272260Z" level=info msg="StartContainer for \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\" returns successfully"
Feb 13 05:13:55.276647 systemd[1]: cri-containerd-a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e.scope: Deactivated successfully.
Feb 13 05:13:55.422641 env[1470]: time="2024-02-13T05:13:55.422574311Z" level=info msg="shim disconnected" id=a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e
Feb 13 05:13:55.422641 env[1470]: time="2024-02-13T05:13:55.422639663Z" level=warning msg="cleaning up after shim disconnected" id=a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e namespace=k8s.io
Feb 13 05:13:55.422641 env[1470]: time="2024-02-13T05:13:55.422646596Z" level=info msg="cleaning up dead shim"
Feb 13 05:13:55.426498 env[1470]: time="2024-02-13T05:13:55.426479857Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:13:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3199 runtime=io.containerd.runc.v2\n"
Feb 13 05:13:56.243193 env[1470]: time="2024-02-13T05:13:56.243106248Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 05:13:56.255201 env[1470]: time="2024-02-13T05:13:56.255133900Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\""
Feb 13 05:13:56.255547 env[1470]: time="2024-02-13T05:13:56.255532258Z" level=info msg="StartContainer for \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\""
Feb 13 05:13:56.264506 systemd[1]: Started cri-containerd-12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122.scope.
Feb 13 05:13:56.276439 env[1470]: time="2024-02-13T05:13:56.276413055Z" level=info msg="StartContainer for \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\" returns successfully"
Feb 13 05:13:56.276834 systemd[1]: cri-containerd-12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122.scope: Deactivated successfully.
Feb 13 05:13:56.285967 env[1470]: time="2024-02-13T05:13:56.285908008Z" level=info msg="shim disconnected" id=12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122
Feb 13 05:13:56.285967 env[1470]: time="2024-02-13T05:13:56.285933523Z" level=warning msg="cleaning up after shim disconnected" id=12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122 namespace=k8s.io
Feb 13 05:13:56.285967 env[1470]: time="2024-02-13T05:13:56.285939444Z" level=info msg="cleaning up dead shim"
Feb 13 05:13:56.289358 env[1470]: time="2024-02-13T05:13:56.289341008Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:13:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3252 runtime=io.containerd.runc.v2\n"
Feb 13 05:13:56.578126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122-rootfs.mount: Deactivated successfully.
Feb 13 05:13:57.256319 env[1470]: time="2024-02-13T05:13:57.256224190Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 05:13:57.279413 env[1470]: time="2024-02-13T05:13:57.279261560Z" level=info msg="CreateContainer within sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\""
Feb 13 05:13:57.280400 env[1470]: time="2024-02-13T05:13:57.280319032Z" level=info msg="StartContainer for \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\""
Feb 13 05:13:57.306845 systemd[1]: Started cri-containerd-475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd.scope.
Feb 13 05:13:57.337050 env[1470]: time="2024-02-13T05:13:57.336981177Z" level=info msg="StartContainer for \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\" returns successfully"
Feb 13 05:13:57.412599 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 05:13:57.445314 kubelet[2531]: I0213 05:13:57.445299 2531 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 13 05:13:57.468892 kubelet[2531]: I0213 05:13:57.468870 2531 topology_manager.go:212] "Topology Admit Handler"
Feb 13 05:13:57.469029 kubelet[2531]: I0213 05:13:57.469013 2531 topology_manager.go:212] "Topology Admit Handler"
Feb 13 05:13:57.472128 systemd[1]: Created slice kubepods-burstable-pod817e2927_36f1_4751_997a_55681a53c611.slice.
Feb 13 05:13:57.474649 systemd[1]: Created slice kubepods-burstable-pod685e3f2e_8c36_4b4d_ba6b_acdfe122af06.slice.
Feb 13 05:13:57.537076 kubelet[2531]: I0213 05:13:57.536988 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d4wl\" (UniqueName: \"kubernetes.io/projected/685e3f2e-8c36-4b4d-ba6b-acdfe122af06-kube-api-access-9d4wl\") pod \"coredns-5d78c9869d-fd485\" (UID: \"685e3f2e-8c36-4b4d-ba6b-acdfe122af06\") " pod="kube-system/coredns-5d78c9869d-fd485"
Feb 13 05:13:57.537076 kubelet[2531]: I0213 05:13:57.537019 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/817e2927-36f1-4751-997a-55681a53c611-config-volume\") pod \"coredns-5d78c9869d-n8zmq\" (UID: \"817e2927-36f1-4751-997a-55681a53c611\") " pod="kube-system/coredns-5d78c9869d-n8zmq"
Feb 13 05:13:57.537076 kubelet[2531]: I0213 05:13:57.537035 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6kzx\" (UniqueName: \"kubernetes.io/projected/817e2927-36f1-4751-997a-55681a53c611-kube-api-access-d6kzx\") pod \"coredns-5d78c9869d-n8zmq\" (UID: \"817e2927-36f1-4751-997a-55681a53c611\") " pod="kube-system/coredns-5d78c9869d-n8zmq"
Feb 13 05:13:57.537076 kubelet[2531]: I0213 05:13:57.537048 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/685e3f2e-8c36-4b4d-ba6b-acdfe122af06-config-volume\") pod \"coredns-5d78c9869d-fd485\" (UID: \"685e3f2e-8c36-4b4d-ba6b-acdfe122af06\") " pod="kube-system/coredns-5d78c9869d-fd485"
Feb 13 05:13:57.549655 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 05:13:57.775410 env[1470]: time="2024-02-13T05:13:57.775275425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-n8zmq,Uid:817e2927-36f1-4751-997a-55681a53c611,Namespace:kube-system,Attempt:0,}"
Feb 13 05:13:57.777513 env[1470]: time="2024-02-13T05:13:57.777372338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-fd485,Uid:685e3f2e-8c36-4b4d-ba6b-acdfe122af06,Namespace:kube-system,Attempt:0,}"
Feb 13 05:13:58.278561 kubelet[2531]: I0213 05:13:58.278532 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fdt2q" podStartSLOduration=6.211002865 podCreationTimestamp="2024-02-13 05:13:47 +0000 UTC" firstStartedPulling="2024-02-13 05:13:47.50394293 +0000 UTC m=+14.389343975" lastFinishedPulling="2024-02-13 05:13:52.571436016 +0000 UTC m=+19.456837052" observedRunningTime="2024-02-13 05:13:58.278211613 +0000 UTC m=+25.163612652" watchObservedRunningTime="2024-02-13 05:13:58.278495942 +0000 UTC m=+25.163896983"
Feb 13 05:13:59.144152 systemd-networkd[1301]: cilium_host: Link UP
Feb 13 05:13:59.144264 systemd-networkd[1301]: cilium_net: Link UP
Feb 13 05:13:59.151218 systemd-networkd[1301]: cilium_net: Gained carrier
Feb 13 05:13:59.158390 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 13 05:13:59.158421 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 13 05:13:59.158429 systemd-networkd[1301]: cilium_host: Gained carrier
Feb 13 05:13:59.203539 systemd-networkd[1301]: cilium_vxlan: Link UP
Feb 13 05:13:59.203542 systemd-networkd[1301]: cilium_vxlan: Gained carrier
Feb 13 05:13:59.333596 kernel: NET: Registered PF_ALG protocol family
Feb 13 05:13:59.364765 systemd-networkd[1301]: cilium_host: Gained IPv6LL
Feb 13 05:13:59.843741 systemd-networkd[1301]: cilium_net: Gained IPv6LL
Feb 13 05:13:59.852351 systemd-networkd[1301]: lxc_health: Link UP
Feb 13 05:13:59.877487 systemd-networkd[1301]: lxc_health: Gained carrier
Feb 13 05:13:59.877629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 13 05:14:00.325432 systemd-networkd[1301]: lxcfcd5b087610a: Link UP
Feb 13 05:14:00.363595 kernel: eth0: renamed from tmpe46c6
Feb 13 05:14:00.369831 systemd-networkd[1301]: lxc5d5c2fae0975: Link UP
Feb 13 05:14:00.374659 kernel: eth0: renamed from tmpbc0d4
Feb 13 05:14:00.400604 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfcd5b087610a: link becomes ready
Feb 13 05:14:00.400729 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 13 05:14:00.414915 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5d5c2fae0975: link becomes ready
Feb 13 05:14:00.414934 systemd-networkd[1301]: lxcfcd5b087610a: Gained carrier
Feb 13 05:14:00.415073 systemd-networkd[1301]: lxc5d5c2fae0975: Gained carrier
Feb 13 05:14:00.418143 systemd[1]: Started sshd@5-147.75.49.59:22-180.101.88.197:51483.service.
Feb 13 05:14:00.803735 systemd-networkd[1301]: cilium_vxlan: Gained IPv6LL
Feb 13 05:14:01.123802 systemd-networkd[1301]: lxc_health: Gained IPv6LL
Feb 13 05:14:01.355941 sshd[3916]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:14:02.147781 systemd-networkd[1301]: lxc5d5c2fae0975: Gained IPv6LL
Feb 13 05:14:02.275740 systemd-networkd[1301]: lxcfcd5b087610a: Gained IPv6LL
Feb 13 05:14:02.530811 sshd[3916]: Failed password for root from 180.101.88.197 port 51483 ssh2
Feb 13 05:14:02.685346 env[1470]: time="2024-02-13T05:14:02.685315153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 05:14:02.685346 env[1470]: time="2024-02-13T05:14:02.685335272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 05:14:02.685346 env[1470]: time="2024-02-13T05:14:02.685342513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 05:14:02.685625 env[1470]: time="2024-02-13T05:14:02.685412875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e46c6bc32eea852916c86841a2bfaa7fe1423996cb7c86892673e93026351025 pid=3952 runtime=io.containerd.runc.v2
Feb 13 05:14:02.685625 env[1470]: time="2024-02-13T05:14:02.685547670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 05:14:02.685625 env[1470]: time="2024-02-13T05:14:02.685563847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 05:14:02.685625 env[1470]: time="2024-02-13T05:14:02.685571101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 05:14:02.685708 env[1470]: time="2024-02-13T05:14:02.685639167Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc0d4c3ae689bd3fa391b2859fa6aee66dca362ee0300fb11df2df5a679de1af pid=3954 runtime=io.containerd.runc.v2
Feb 13 05:14:02.694889 systemd[1]: Started cri-containerd-bc0d4c3ae689bd3fa391b2859fa6aee66dca362ee0300fb11df2df5a679de1af.scope.
Feb 13 05:14:02.695500 systemd[1]: Started cri-containerd-e46c6bc32eea852916c86841a2bfaa7fe1423996cb7c86892673e93026351025.scope.
Feb 13 05:14:02.716657 env[1470]: time="2024-02-13T05:14:02.716621291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-n8zmq,Uid:817e2927-36f1-4751-997a-55681a53c611,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc0d4c3ae689bd3fa391b2859fa6aee66dca362ee0300fb11df2df5a679de1af\""
Feb 13 05:14:02.717093 env[1470]: time="2024-02-13T05:14:02.717078104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-fd485,Uid:685e3f2e-8c36-4b4d-ba6b-acdfe122af06,Namespace:kube-system,Attempt:0,} returns sandbox id \"e46c6bc32eea852916c86841a2bfaa7fe1423996cb7c86892673e93026351025\""
Feb 13 05:14:02.718087 env[1470]: time="2024-02-13T05:14:02.718073829Z" level=info msg="CreateContainer within sandbox \"bc0d4c3ae689bd3fa391b2859fa6aee66dca362ee0300fb11df2df5a679de1af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 05:14:02.718128 env[1470]: time="2024-02-13T05:14:02.718109875Z" level=info msg="CreateContainer within sandbox \"e46c6bc32eea852916c86841a2bfaa7fe1423996cb7c86892673e93026351025\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 05:14:02.723089 env[1470]: time="2024-02-13T05:14:02.723043129Z" level=info msg="CreateContainer within sandbox \"bc0d4c3ae689bd3fa391b2859fa6aee66dca362ee0300fb11df2df5a679de1af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7581f38703ef4613f9708cde91646b85153cd66df67552d750266e51f8bd1c3\""
Feb 13 05:14:02.723282 env[1470]: time="2024-02-13T05:14:02.723268854Z" level=info msg="StartContainer for \"f7581f38703ef4613f9708cde91646b85153cd66df67552d750266e51f8bd1c3\""
Feb 13 05:14:02.723895 env[1470]: time="2024-02-13T05:14:02.723880410Z" level=info msg="CreateContainer within sandbox \"e46c6bc32eea852916c86841a2bfaa7fe1423996cb7c86892673e93026351025\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe2968f91307ccc66af13bd520af32eb0143bc8310b7e48aade6d0f9a6ec3979\""
Feb 13 05:14:02.724069 env[1470]: time="2024-02-13T05:14:02.724053727Z" level=info msg="StartContainer for \"fe2968f91307ccc66af13bd520af32eb0143bc8310b7e48aade6d0f9a6ec3979\""
Feb 13 05:14:02.767258 systemd[1]: Started cri-containerd-f7581f38703ef4613f9708cde91646b85153cd66df67552d750266e51f8bd1c3.scope.
Feb 13 05:14:02.774655 systemd[1]: Started cri-containerd-fe2968f91307ccc66af13bd520af32eb0143bc8310b7e48aade6d0f9a6ec3979.scope.
Feb 13 05:14:02.805797 env[1470]: time="2024-02-13T05:14:02.805701116Z" level=info msg="StartContainer for \"f7581f38703ef4613f9708cde91646b85153cd66df67552d750266e51f8bd1c3\" returns successfully"
Feb 13 05:14:02.809067 env[1470]: time="2024-02-13T05:14:02.809022095Z" level=info msg="StartContainer for \"fe2968f91307ccc66af13bd520af32eb0143bc8310b7e48aade6d0f9a6ec3979\" returns successfully"
Feb 13 05:14:03.292200 kubelet[2531]: I0213 05:14:03.292133 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-fd485" podStartSLOduration=16.292048336 podCreationTimestamp="2024-02-13 05:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 05:14:03.290421022 +0000 UTC m=+30.175822135" watchObservedRunningTime="2024-02-13 05:14:03.292048336 +0000 UTC m=+30.177449415"
Feb 13 05:14:03.310615 kubelet[2531]: I0213 05:14:03.310556 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-n8zmq" podStartSLOduration=16.310476976 podCreationTimestamp="2024-02-13 05:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 05:14:03.309691863 +0000 UTC m=+30.195092953" watchObservedRunningTime="2024-02-13 05:14:03.310476976 +0000 UTC m=+30.195878072"
Feb 13 05:14:04.428836 sshd[3916]: Failed password for root from 180.101.88.197 port 51483 ssh2
Feb 13 05:14:07.552769 sshd[3916]: Failed password for root from 180.101.88.197 port 51483 ssh2
Feb 13 05:14:08.206088 sshd[3916]: Received disconnect from 180.101.88.197 port 51483:11: [preauth]
Feb 13 05:14:08.206088 sshd[3916]: Disconnected from authenticating user root 180.101.88.197 port 51483 [preauth]
Feb 13 05:14:08.206670 sshd[3916]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:14:08.208762 systemd[1]: sshd@5-147.75.49.59:22-180.101.88.197:51483.service: Deactivated successfully.
Feb 13 05:14:08.344971 systemd[1]: Started sshd@6-147.75.49.59:22-180.101.88.197:48306.service.
Feb 13 05:14:09.249520 sshd[4127]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:14:09.474746 systemd[1]: Started sshd@7-147.75.49.59:22-119.91.214.145:53440.service.
Feb 13 05:14:10.855851 sshd[4127]: Failed password for root from 180.101.88.197 port 48306 ssh2
Feb 13 05:14:11.259006 sshd[4130]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:14:11.259253 sshd[4130]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:14:13.473195 sshd[4130]: Failed password for root from 119.91.214.145 port 53440 ssh2
Feb 13 05:14:13.994196 sshd[4130]: Received disconnect from 119.91.214.145 port 53440:11: Bye Bye [preauth]
Feb 13 05:14:13.994196 sshd[4130]: Disconnected from authenticating user root 119.91.214.145 port 53440 [preauth]
Feb 13 05:14:13.996787 systemd[1]: sshd@7-147.75.49.59:22-119.91.214.145:53440.service: Deactivated successfully.
Feb 13 05:14:14.171179 sshd[4127]: Failed password for root from 180.101.88.197 port 48306 ssh2
Feb 13 05:14:16.291453 sshd[4127]: Failed password for root from 180.101.88.197 port 48306 ssh2
Feb 13 05:14:17.371569 sshd[4127]: Received disconnect from 180.101.88.197 port 48306:11: [preauth]
Feb 13 05:14:17.371569 sshd[4127]: Disconnected from authenticating user root 180.101.88.197 port 48306 [preauth]
Feb 13 05:14:17.372160 sshd[4127]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:14:17.374190 systemd[1]: sshd@6-147.75.49.59:22-180.101.88.197:48306.service: Deactivated successfully.
Feb 13 05:14:17.526924 systemd[1]: Started sshd@8-147.75.49.59:22-180.101.88.197:43275.service.
Feb 13 05:14:18.450042 systemd[1]: Started sshd@9-147.75.49.59:22-85.209.11.27:16870.service.
Feb 13 05:14:18.464577 sshd[4137]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:14:20.265622 sshd[4142]: Invalid user admin from 85.209.11.27 port 16870
Feb 13 05:14:20.779249 sshd[4142]: pam_faillock(sshd:auth): User unknown
Feb 13 05:14:20.780423 sshd[4142]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 05:14:20.780513 sshd[4142]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=85.209.11.27
Feb 13 05:14:20.781508 sshd[4142]: pam_faillock(sshd:auth): User unknown
Feb 13 05:14:20.974296 sshd[4137]: Failed password for root from 180.101.88.197 port 43275 ssh2
Feb 13 05:14:22.232421 sshd[4142]: Failed password for invalid user admin from 85.209.11.27 port 16870 ssh2
Feb 13 05:14:22.764757 sshd[4137]: Failed password for root from 180.101.88.197 port 43275 ssh2
Feb 13 05:14:22.845977 sshd[4142]: Connection closed by invalid user admin 85.209.11.27 port 16870 [preauth]
Feb 13 05:14:22.848422 systemd[1]: sshd@9-147.75.49.59:22-85.209.11.27:16870.service: Deactivated successfully.
Feb 13 05:14:25.753869 sshd[4137]: Failed password for root from 180.101.88.197 port 43275 ssh2
Feb 13 05:14:26.605383 sshd[4137]: Received disconnect from 180.101.88.197 port 43275:11: [preauth]
Feb 13 05:14:26.605383 sshd[4137]: Disconnected from authenticating user root 180.101.88.197 port 43275 [preauth]
Feb 13 05:14:26.605961 sshd[4137]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:14:26.607966 systemd[1]: sshd@8-147.75.49.59:22-180.101.88.197:43275.service: Deactivated successfully.
Feb 13 05:14:37.296058 systemd[1]: Started sshd@10-147.75.49.59:22-159.223.87.202:59418.service.
Feb 13 05:14:38.340109 sshd[4151]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:14:40.261832 sshd[4151]: Failed password for root from 159.223.87.202 port 59418 ssh2
Feb 13 05:14:41.093876 sshd[4151]: Received disconnect from 159.223.87.202 port 59418:11: Bye Bye [preauth]
Feb 13 05:14:41.093876 sshd[4151]: Disconnected from authenticating user root 159.223.87.202 port 59418 [preauth]
Feb 13 05:14:41.096412 systemd[1]: sshd@10-147.75.49.59:22-159.223.87.202:59418.service: Deactivated successfully.
Feb 13 05:15:13.740822 systemd[1]: Started sshd@11-147.75.49.59:22-75.51.10.234:43986.service.
Feb 13 05:15:14.067244 sshd[4158]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:15:15.597975 sshd[4158]: Failed password for root from 75.51.10.234 port 43986 ssh2
Feb 13 05:15:16.677457 sshd[4158]: Received disconnect from 75.51.10.234 port 43986:11: Bye Bye [preauth]
Feb 13 05:15:16.677457 sshd[4158]: Disconnected from authenticating user root 75.51.10.234 port 43986 [preauth]
Feb 13 05:15:16.679938 systemd[1]: sshd@11-147.75.49.59:22-75.51.10.234:43986.service: Deactivated successfully.
Feb 13 05:15:51.593214 systemd[1]: Started sshd@12-147.75.49.59:22-218.92.0.56:26427.service.
Feb 13 05:15:52.576986 sshd[4170]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 13 05:15:54.660168 sshd[4170]: Failed password for root from 218.92.0.56 port 26427 ssh2
Feb 13 05:15:57.791911 sshd[4170]: Failed password for root from 218.92.0.56 port 26427 ssh2
Feb 13 05:16:00.257066 sshd[4170]: Failed password for root from 218.92.0.56 port 26427 ssh2
Feb 13 05:16:00.738418 sshd[4170]: Received disconnect from 218.92.0.56 port 26427:11: [preauth]
Feb 13 05:16:00.738418 sshd[4170]: Disconnected from authenticating user root 218.92.0.56 port 26427 [preauth]
Feb 13 05:16:00.739039 sshd[4170]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 13 05:16:00.741117 systemd[1]: sshd@12-147.75.49.59:22-218.92.0.56:26427.service: Deactivated successfully.
Feb 13 05:16:00.910223 systemd[1]: Started sshd@13-147.75.49.59:22-218.92.0.56:28742.service.
Feb 13 05:16:01.956280 sshd[4174]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 13 05:16:04.606942 sshd[4174]: Failed password for root from 218.92.0.56 port 28742 ssh2
Feb 13 05:16:09.736462 sshd[4174]: Failed password for root from 218.92.0.56 port 28742 ssh2
Feb 13 05:16:11.736346 sshd[4174]: Failed password for root from 218.92.0.56 port 28742 ssh2
Feb 13 05:16:12.716009 sshd[4174]: Received disconnect from 218.92.0.56 port 28742:11: [preauth]
Feb 13 05:16:12.716009 sshd[4174]: Disconnected from authenticating user root 218.92.0.56 port 28742 [preauth]
Feb 13 05:16:12.716632 sshd[4174]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 13 05:16:12.718671 systemd[1]: sshd@13-147.75.49.59:22-218.92.0.56:28742.service: Deactivated successfully.
Feb 13 05:16:12.874466 systemd[1]: Started sshd@14-147.75.49.59:22-218.92.0.56:41895.service.
Feb 13 05:16:13.877306 sshd[4178]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 13 05:16:16.507790 sshd[4178]: Failed password for root from 218.92.0.56 port 41895 ssh2
Feb 13 05:16:21.628569 sshd[4178]: Failed password for root from 218.92.0.56 port 41895 ssh2
Feb 13 05:16:23.625080 sshd[4178]: Failed password for root from 218.92.0.56 port 41895 ssh2
Feb 13 05:16:24.621456 sshd[4178]: Received disconnect from 218.92.0.56 port 41895:11: [preauth]
Feb 13 05:16:24.621456 sshd[4178]: Disconnected from authenticating user root 218.92.0.56 port 41895 [preauth]
Feb 13 05:16:24.622012 sshd[4178]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 13 05:16:24.624260 systemd[1]: sshd@14-147.75.49.59:22-218.92.0.56:41895.service: Deactivated successfully.
Feb 13 05:16:36.378046 systemd[1]: Started sshd@15-147.75.49.59:22-190.107.30.118:48170.service.
Feb 13 05:16:37.935503 sshd[4186]: Invalid user test from 190.107.30.118 port 48170
Feb 13 05:16:37.941621 sshd[4186]: pam_faillock(sshd:auth): User unknown
Feb 13 05:16:37.942557 sshd[4186]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 05:16:37.942671 sshd[4186]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=190.107.30.118
Feb 13 05:16:37.943551 sshd[4186]: pam_faillock(sshd:auth): User unknown
Feb 13 05:16:40.201801 sshd[4186]: Failed password for invalid user test from 190.107.30.118 port 48170 ssh2
Feb 13 05:16:40.616221 sshd[4188]: pam_faillock(sshd:auth): User unknown
Feb 13 05:16:40.623078 sshd[4186]: Postponed keyboard-interactive for invalid user test from 190.107.30.118 port 48170 ssh2 [preauth]
Feb 13 05:16:41.001805 sshd[4188]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 05:16:41.002977 sshd[4188]: pam_faillock(sshd:auth): User unknown
Feb 13 05:16:43.005075 sshd[4186]: PAM: Permission denied for illegal user test from 190.107.30.118
Feb 13 05:16:43.006065 sshd[4186]: Failed keyboard-interactive/pam for invalid user test from 190.107.30.118 port 48170 ssh2
Feb 13 05:16:43.349993 sshd[4186]: Connection closed by invalid user test 190.107.30.118 port 48170 [preauth]
Feb 13 05:16:43.352658 systemd[1]: sshd@15-147.75.49.59:22-190.107.30.118:48170.service: Deactivated successfully.
Feb 13 05:17:50.588155 systemd[1]: Started sshd@16-147.75.49.59:22-104.248.146.70:51646.service.
Feb 13 05:17:51.648978 sshd[4199]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root
Feb 13 05:17:53.731893 sshd[4199]: Failed password for root from 104.248.146.70 port 51646 ssh2
Feb 13 05:17:54.396651 sshd[4199]: Received disconnect from 104.248.146.70 port 51646:11: Bye Bye [preauth]
Feb 13 05:17:54.396651 sshd[4199]: Disconnected from authenticating user root 104.248.146.70 port 51646 [preauth]
Feb 13 05:17:54.399243 systemd[1]: sshd@16-147.75.49.59:22-104.248.146.70:51646.service: Deactivated successfully.
Feb 13 05:19:40.203451 systemd[1]: Started sshd@17-147.75.49.59:22-85.209.11.254:57474.service.
Feb 13 05:19:43.470371 sshd[4218]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=85.209.11.254 user=root
Feb 13 05:19:45.598900 sshd[4218]: Failed password for root from 85.209.11.254 port 57474 ssh2
Feb 13 05:19:46.249003 sshd[4218]: Connection closed by authenticating user root 85.209.11.254 port 57474 [preauth]
Feb 13 05:19:46.251471 systemd[1]: sshd@17-147.75.49.59:22-85.209.11.254:57474.service: Deactivated successfully.
Feb 13 05:20:19.387667 systemd[1]: Started sshd@18-147.75.49.59:22-159.223.87.202:52206.service.
Feb 13 05:20:20.428793 sshd[4229]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:20:22.299741 sshd[4229]: Failed password for root from 159.223.87.202 port 52206 ssh2
Feb 13 05:20:23.182687 sshd[4229]: Received disconnect from 159.223.87.202 port 52206:11: Bye Bye [preauth]
Feb 13 05:20:23.182687 sshd[4229]: Disconnected from authenticating user root 159.223.87.202 port 52206 [preauth]
Feb 13 05:20:23.185182 systemd[1]: sshd@18-147.75.49.59:22-159.223.87.202:52206.service: Deactivated successfully.
Feb 13 05:20:27.522272 systemd[1]: Started sshd@19-147.75.49.59:22-75.51.10.234:35042.service.
Feb 13 05:20:27.848440 sshd[4234]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:20:29.680828 sshd[4234]: Failed password for root from 75.51.10.234 port 35042 ssh2
Feb 13 05:20:30.458026 sshd[4234]: Received disconnect from 75.51.10.234 port 35042:11: Bye Bye [preauth]
Feb 13 05:20:30.458026 sshd[4234]: Disconnected from authenticating user root 75.51.10.234 port 35042 [preauth]
Feb 13 05:20:30.460607 systemd[1]: sshd@19-147.75.49.59:22-75.51.10.234:35042.service: Deactivated successfully.
Feb 13 05:21:27.134787 systemd[1]: Started sshd@20-147.75.49.59:22-75.51.10.234:53768.service.
Feb 13 05:21:27.457759 sshd[4245]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:21:27.628169 systemd[1]: Started sshd@21-147.75.49.59:22-159.223.87.202:43196.service.
Feb 13 05:21:29.016574 sshd[4248]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:21:29.016831 sshd[4248]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:21:29.194753 sshd[4245]: Failed password for root from 75.51.10.234 port 53768 ssh2
Feb 13 05:21:30.066686 sshd[4245]: Received disconnect from 75.51.10.234 port 53768:11: Bye Bye [preauth]
Feb 13 05:21:30.066686 sshd[4245]: Disconnected from authenticating user root 75.51.10.234 port 53768 [preauth]
Feb 13 05:21:30.068984 systemd[1]: sshd@20-147.75.49.59:22-75.51.10.234:53768.service: Deactivated successfully.
Feb 13 05:21:31.360828 sshd[4248]: Failed password for root from 159.223.87.202 port 43196 ssh2
Feb 13 05:21:31.839862 sshd[4248]: Received disconnect from 159.223.87.202 port 43196:11: Bye Bye [preauth]
Feb 13 05:21:31.839862 sshd[4248]: Disconnected from authenticating user root 159.223.87.202 port 43196 [preauth]
Feb 13 05:21:31.842357 systemd[1]: sshd@21-147.75.49.59:22-159.223.87.202:43196.service: Deactivated successfully.
Feb 13 05:21:37.373797 systemd[1]: Started sshd@22-147.75.49.59:22-119.91.214.145:59004.service.
Feb 13 05:21:39.171572 sshd[4255]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:21:40.888413 sshd[4255]: Failed password for root from 119.91.214.145 port 59004 ssh2
Feb 13 05:21:41.914276 sshd[4255]: Received disconnect from 119.91.214.145 port 59004:11: Bye Bye [preauth]
Feb 13 05:21:41.914276 sshd[4255]: Disconnected from authenticating user root 119.91.214.145 port 59004 [preauth]
Feb 13 05:21:41.916858 systemd[1]: sshd@22-147.75.49.59:22-119.91.214.145:59004.service: Deactivated successfully.
Feb 13 05:22:25.631428 systemd[1]: Started sshd@23-147.75.49.59:22-75.51.10.234:44246.service.
Feb 13 05:22:25.964156 sshd[4263]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:22:25.964396 sshd[4263]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:22:28.664687 sshd[4263]: Failed password for root from 75.51.10.234 port 44246 ssh2
Feb 13 05:22:31.147400 sshd[4263]: Received disconnect from 75.51.10.234 port 44246:11: Bye Bye [preauth]
Feb 13 05:22:31.147400 sshd[4263]: Disconnected from authenticating user root 75.51.10.234 port 44246 [preauth]
Feb 13 05:22:31.150142 systemd[1]: sshd@23-147.75.49.59:22-75.51.10.234:44246.service: Deactivated successfully.
Feb 13 05:22:36.154675 systemd[1]: Started sshd@24-147.75.49.59:22-159.223.87.202:34188.service.
Feb 13 05:22:37.153132 sshd[4271]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:22:37.153183 sshd[4271]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:22:39.832801 sshd[4271]: Failed password for root from 159.223.87.202 port 34188 ssh2
Feb 13 05:22:42.477680 sshd[4271]: Received disconnect from 159.223.87.202 port 34188:11: Bye Bye [preauth]
Feb 13 05:22:42.477680 sshd[4271]: Disconnected from authenticating user root 159.223.87.202 port 34188 [preauth]
Feb 13 05:22:42.478460 systemd[1]: sshd@24-147.75.49.59:22-159.223.87.202:34188.service: Deactivated successfully.
Feb 13 05:22:58.216816 systemd[1]: Started sshd@25-147.75.49.59:22-119.91.214.145:56642.service.
Feb 13 05:23:10.031907 update_engine[1460]: I0213 05:23:10.031793  1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 05:23:10.031907 update_engine[1460]: I0213 05:23:10.031874  1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 05:23:10.033750 update_engine[1460]: I0213 05:23:10.033675  1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 05:23:10.034608 update_engine[1460]: I0213 05:23:10.034520  1460 omaha_request_params.cc:62] Current group set to lts
Feb 13 05:23:10.034907 update_engine[1460]: I0213 05:23:10.034838  1460 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 05:23:10.034907 update_engine[1460]: I0213 05:23:10.034857  1460 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 05:23:10.034907 update_engine[1460]: I0213 05:23:10.034889  1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 05:23:10.035292 update_engine[1460]: I0213 05:23:10.034955  1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 05:23:10.035292 update_engine[1460]: I0213 05:23:10.035094  1460 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 13 05:23:10.035292 update_engine[1460]: I0213 05:23:10.035111  1460 omaha_request_action.cc:271] Request:
Feb 13 05:23:10.035292 update_engine[1460]:
Feb 13 05:23:10.035292 update_engine[1460]:
Feb 13 05:23:10.035292 update_engine[1460]:
Feb 13 05:23:10.035292 update_engine[1460]:
Feb 13 05:23:10.035292 update_engine[1460]:
Feb 13 05:23:10.035292 update_engine[1460]:
Feb 13 05:23:10.035292 update_engine[1460]:
Feb 13 05:23:10.035292 update_engine[1460]:
Feb 13 05:23:10.035292 update_engine[1460]: I0213 05:23:10.035122  1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 05:23:10.036349 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 05:23:10.038215 update_engine[1460]: I0213 05:23:10.038142  1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 05:23:10.038419 update_engine[1460]: E0213 05:23:10.038354  1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 05:23:10.038538 update_engine[1460]: I0213 05:23:10.038509  1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 05:23:19.941796 update_engine[1460]: I0213 05:23:19.941673  1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 05:23:19.942617 update_engine[1460]: I0213 05:23:19.942144  1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 05:23:19.942617 update_engine[1460]: E0213 05:23:19.942341  1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 05:23:19.942617 update_engine[1460]: I0213 05:23:19.942510  1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 05:23:21.713749 systemd[1]: Started sshd@26-147.75.49.59:22-75.51.10.234:34722.service.
Feb 13 05:23:22.043047 sshd[4282]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:23:23.899954 sshd[4282]: Failed password for root from 75.51.10.234 port 34722 ssh2
Feb 13 05:23:24.652978 sshd[4282]: Received disconnect from 75.51.10.234 port 34722:11: Bye Bye [preauth]
Feb 13 05:23:24.652978 sshd[4282]: Disconnected from authenticating user root 75.51.10.234 port 34722 [preauth]
Feb 13 05:23:24.655499 systemd[1]: sshd@26-147.75.49.59:22-75.51.10.234:34722.service: Deactivated successfully.
Feb 13 05:23:29.941855 update_engine[1460]: I0213 05:23:29.941664  1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 05:23:29.942859 update_engine[1460]: I0213 05:23:29.942140  1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 05:23:29.942859 update_engine[1460]: E0213 05:23:29.942340  1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 05:23:29.942859 update_engine[1460]: I0213 05:23:29.942512  1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 05:23:37.965877 systemd[1]: Started sshd@27-147.75.49.59:22-104.248.146.70:49820.service.
Feb 13 05:23:39.633891 sshd[4291]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root
Feb 13 05:23:39.932480 update_engine[1460]: I0213 05:23:39.932250  1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 05:23:39.933340 update_engine[1460]: I0213 05:23:39.932760  1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 05:23:39.933340 update_engine[1460]: E0213 05:23:39.932974  1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 05:23:39.933340 update_engine[1460]: I0213 05:23:39.933195  1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 05:23:39.933340 update_engine[1460]: I0213 05:23:39.933213  1460 omaha_request_action.cc:621] Omaha request response:
Feb 13 05:23:39.933756 update_engine[1460]: E0213 05:23:39.933357  1460 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933385  1460 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933394  1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933403  1460 update_attempter.cc:306] Processing Done.
Feb 13 05:23:39.933756 update_engine[1460]: E0213 05:23:39.933429  1460 update_attempter.cc:619] Update failed.
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933439  1460 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933449  1460 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933457  1460 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933624  1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933676  1460 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933686  1460 omaha_request_action.cc:271] Request:
Feb 13 05:23:39.933756 update_engine[1460]:
Feb 13 05:23:39.933756 update_engine[1460]:
Feb 13 05:23:39.933756 update_engine[1460]:
Feb 13 05:23:39.933756 update_engine[1460]:
Feb 13 05:23:39.933756 update_engine[1460]:
Feb 13 05:23:39.933756 update_engine[1460]:
Feb 13 05:23:39.933756 update_engine[1460]: I0213 05:23:39.933696  1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 05:23:39.935512 update_engine[1460]: I0213 05:23:39.934008  1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 05:23:39.935512 update_engine[1460]: E0213 05:23:39.934171  1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 05:23:39.935512 update_engine[1460]: I0213 05:23:39.934304  1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 05:23:39.935512 update_engine[1460]: I0213 05:23:39.934319  1460 omaha_request_action.cc:621] Omaha request response:
Feb 13 05:23:39.935512 update_engine[1460]: I0213 05:23:39.934329  1460 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 05:23:39.935512 update_engine[1460]: I0213 05:23:39.934336  1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 05:23:39.935512 update_engine[1460]: I0213 05:23:39.934345  1460 update_attempter.cc:306] Processing Done.
Feb 13 05:23:39.935512 update_engine[1460]: I0213 05:23:39.934351  1460 update_attempter.cc:310] Error event sent.
Feb 13 05:23:39.935512 update_engine[1460]: I0213 05:23:39.934372  1460 update_check_scheduler.cc:74] Next update check in 47m43s
Feb 13 05:23:39.936333 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 05:23:39.936333 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 05:23:41.491845 sshd[4291]: Failed password for root from 104.248.146.70 port 49820 ssh2
Feb 13 05:23:42.848237 sshd[4291]: Received disconnect from 104.248.146.70 port 49820:11: Bye Bye [preauth]
Feb 13 05:23:42.848237 sshd[4291]: Disconnected from authenticating user root 104.248.146.70 port 49820 [preauth]
Feb 13 05:23:42.850876 systemd[1]: sshd@27-147.75.49.59:22-104.248.146.70:49820.service: Deactivated successfully.
Feb 13 05:23:44.193726 systemd[1]: Started sshd@28-147.75.49.59:22-159.223.87.202:53410.service.
Feb 13 05:23:45.538419 sshd[4295]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:23:45.538659 sshd[4295]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:23:47.551973 sshd[4295]: Failed password for root from 159.223.87.202 port 53410 ssh2
Feb 13 05:23:48.351691 sshd[4295]: Received disconnect from 159.223.87.202 port 53410:11: Bye Bye [preauth]
Feb 13 05:23:48.351691 sshd[4295]: Disconnected from authenticating user root 159.223.87.202 port 53410 [preauth]
Feb 13 05:23:48.354262 systemd[1]: sshd@28-147.75.49.59:22-159.223.87.202:53410.service: Deactivated successfully.
Feb 13 05:24:19.111803 systemd[1]: Started sshd@29-147.75.49.59:22-75.51.10.234:53436.service.
Feb 13 05:24:19.439389 sshd[4303]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:24:20.828824 systemd[1]: Started sshd@30-147.75.49.59:22-119.91.214.145:38606.service.
Feb 13 05:24:21.457626 sshd[4303]: Failed password for root from 75.51.10.234 port 53436 ssh2
Feb 13 05:24:22.049125 sshd[4303]: Received disconnect from 75.51.10.234 port 53436:11: Bye Bye [preauth]
Feb 13 05:24:22.049125 sshd[4303]: Disconnected from authenticating user root 75.51.10.234 port 53436 [preauth]
Feb 13 05:24:22.051642 systemd[1]: sshd@29-147.75.49.59:22-75.51.10.234:53436.service: Deactivated successfully.
Feb 13 05:24:54.712319 systemd[1]: Started sshd@31-147.75.49.59:22-159.223.87.202:44404.service.
Feb 13 05:24:55.719994 sshd[4314]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:24:55.720228 sshd[4314]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:24:57.345849 sshd[4314]: Failed password for root from 159.223.87.202 port 44404 ssh2
Feb 13 05:24:58.225403 sshd[4278]: Timeout before authentication for 119.91.214.145 port 56642
Feb 13 05:24:58.227081 systemd[1]: sshd@25-147.75.49.59:22-119.91.214.145:56642.service: Deactivated successfully.
Feb 13 05:24:58.466892 sshd[4314]: Received disconnect from 159.223.87.202 port 44404:11: Bye Bye [preauth]
Feb 13 05:24:58.466892 sshd[4314]: Disconnected from authenticating user root 159.223.87.202 port 44404 [preauth]
Feb 13 05:24:58.469364 systemd[1]: sshd@31-147.75.49.59:22-159.223.87.202:44404.service: Deactivated successfully.
Feb 13 05:25:01.520798 systemd[1]: Started sshd@32-147.75.49.59:22-65.20.152.13:34488.service.
Feb 13 05:25:03.301022 sshd[4320]: Invalid user test from 65.20.152.13 port 34488
Feb 13 05:25:03.307419 sshd[4320]: pam_faillock(sshd:auth): User unknown
Feb 13 05:25:03.308575 sshd[4320]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 05:25:03.308712 sshd[4320]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=65.20.152.13
Feb 13 05:25:03.309568 sshd[4320]: pam_faillock(sshd:auth): User unknown
Feb 13 05:25:05.703496 sshd[4320]: Failed password for invalid user test from 65.20.152.13 port 34488 ssh2
Feb 13 05:25:07.990304 sshd[4320]: Connection closed by invalid user test 65.20.152.13 port 34488 [preauth]
Feb 13 05:25:07.992832 systemd[1]: sshd@32-147.75.49.59:22-65.20.152.13:34488.service: Deactivated successfully.
Feb 13 05:25:08.960434 systemd[1]: Started sshd@33-147.75.49.59:22-103.194.88.187:43250.service.
Feb 13 05:25:10.576887 systemd[1]: Started sshd@34-147.75.49.59:22-75.51.10.234:43888.service.
Feb 13 05:25:10.903636 sshd[4326]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:25:12.589505 sshd[4326]: Failed password for root from 75.51.10.234 port 43888 ssh2
Feb 13 05:25:13.513718 sshd[4326]: Received disconnect from 75.51.10.234 port 43888:11: Bye Bye [preauth]
Feb 13 05:25:13.513718 sshd[4326]: Disconnected from authenticating user root 75.51.10.234 port 43888 [preauth]
Feb 13 05:25:13.516385 systemd[1]: sshd@34-147.75.49.59:22-75.51.10.234:43888.service: Deactivated successfully.
Feb 13 05:25:42.380448 systemd[1]: Started sshd@35-147.75.49.59:22-119.91.214.145:35020.service.
Feb 13 05:25:58.600559 systemd[1]: Started sshd@36-147.75.49.59:22-159.223.87.202:35390.service.
Feb 13 05:25:59.610168 sshd[4339]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:26:01.356832 sshd[4339]: Failed password for root from 159.223.87.202 port 35390 ssh2
Feb 13 05:26:01.545267 systemd[1]: Started sshd@37-147.75.49.59:22-75.51.10.234:34344.service.
Feb 13 05:26:01.873561 sshd[4342]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:26:01.873816 sshd[4342]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:26:02.357145 sshd[4339]: Received disconnect from 159.223.87.202 port 35390:11: Bye Bye [preauth]
Feb 13 05:26:02.357145 sshd[4339]: Disconnected from authenticating user root 159.223.87.202 port 35390 [preauth]
Feb 13 05:26:02.359673 systemd[1]: sshd@36-147.75.49.59:22-159.223.87.202:35390.service: Deactivated successfully.
Feb 13 05:26:03.896514 sshd[4342]: Failed password for root from 75.51.10.234 port 34344 ssh2
Feb 13 05:26:04.483630 sshd[4342]: Received disconnect from 75.51.10.234 port 34344:11: Bye Bye [preauth]
Feb 13 05:26:04.483630 sshd[4342]: Disconnected from authenticating user root 75.51.10.234 port 34344 [preauth]
Feb 13 05:26:04.486194 systemd[1]: sshd@37-147.75.49.59:22-75.51.10.234:34344.service: Deactivated successfully.
Feb 13 05:26:20.833854 sshd[4306]: Timeout before authentication for 119.91.214.145 port 38606
Feb 13 05:26:20.835200 systemd[1]: sshd@30-147.75.49.59:22-119.91.214.145:38606.service: Deactivated successfully.
Feb 13 05:26:52.806303 systemd[1]: Started sshd@38-147.75.49.59:22-75.51.10.234:53030.service.
Feb 13 05:26:53.129600 sshd[4355]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:26:53.129837 sshd[4355]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:26:54.956705 sshd[4355]: Failed password for root from 75.51.10.234 port 53030 ssh2
Feb 13 05:26:55.738747 sshd[4355]: Received disconnect from 75.51.10.234 port 53030:11: Bye Bye [preauth]
Feb 13 05:26:55.738747 sshd[4355]: Disconnected from authenticating user root 75.51.10.234 port 53030 [preauth]
Feb 13 05:26:55.741272 systemd[1]: sshd@38-147.75.49.59:22-75.51.10.234:53030.service: Deactivated successfully.
Feb 13 05:27:02.685823 systemd[1]: Started sshd@39-147.75.49.59:22-119.91.214.145:49442.service.
Feb 13 05:27:08.965681 sshd[4324]: Timeout before authentication for 103.194.88.187 port 43250
Feb 13 05:27:08.967252 systemd[1]: sshd@33-147.75.49.59:22-103.194.88.187:43250.service: Deactivated successfully.
Feb 13 05:27:11.594007 systemd[1]: Started sshd@40-147.75.49.59:22-159.223.87.202:54614.service.
Feb 13 05:27:12.604992 sshd[4363]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:27:14.372027 sshd[4363]: Failed password for root from 159.223.87.202 port 54614 ssh2
Feb 13 05:27:15.352253 sshd[4363]: Received disconnect from 159.223.87.202 port 54614:11: Bye Bye [preauth]
Feb 13 05:27:15.352253 sshd[4363]: Disconnected from authenticating user root 159.223.87.202 port 54614 [preauth]
Feb 13 05:27:15.354790 systemd[1]: sshd@40-147.75.49.59:22-159.223.87.202:54614.service: Deactivated successfully.
Feb 13 05:27:26.658638 systemd[1]: Started sshd@41-147.75.49.59:22-104.248.146.70:34890.service.
Feb 13 05:27:28.054406 sshd[4370]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root
Feb 13 05:27:28.054651 sshd[4370]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:27:30.353202 sshd[4370]: Failed password for root from 104.248.146.70 port 34890 ssh2
Feb 13 05:27:30.908472 sshd[4370]: Received disconnect from 104.248.146.70 port 34890:11: Bye Bye [preauth]
Feb 13 05:27:30.908472 sshd[4370]: Disconnected from authenticating user root 104.248.146.70 port 34890 [preauth]
Feb 13 05:27:30.910960 systemd[1]: sshd@41-147.75.49.59:22-104.248.146.70:34890.service: Deactivated successfully.
Feb 13 05:27:33.281028 systemd[1]: Starting systemd-tmpfiles-clean.service...
Feb 13 05:27:33.286965 systemd-tmpfiles[4375]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 13 05:27:33.287192 systemd-tmpfiles[4375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 05:27:33.287936 systemd-tmpfiles[4375]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 05:27:33.298531 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 13 05:27:33.298628 systemd[1]: Finished systemd-tmpfiles-clean.service.
Feb 13 05:27:33.299712 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 13 05:27:42.385611 sshd[4335]: Timeout before authentication for 119.91.214.145 port 35020
Feb 13 05:27:42.387135 systemd[1]: sshd@35-147.75.49.59:22-119.91.214.145:35020.service: Deactivated successfully.
Feb 13 05:27:45.812746 systemd[1]: Started sshd@42-147.75.49.59:22-75.51.10.234:43492.service.
Feb 13 05:27:46.137348 sshd[4379]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:27:48.571866 sshd[4379]: Failed password for root from 75.51.10.234 port 43492 ssh2
Feb 13 05:27:48.746713 sshd[4379]: Received disconnect from 75.51.10.234 port 43492:11: Bye Bye [preauth]
Feb 13 05:27:48.746713 sshd[4379]: Disconnected from authenticating user root 75.51.10.234 port 43492 [preauth]
Feb 13 05:27:48.749196 systemd[1]: sshd@42-147.75.49.59:22-75.51.10.234:43492.service: Deactivated successfully.
Feb 13 05:28:18.949054 systemd[1]: Started sshd@43-147.75.49.59:22-159.223.87.202:45604.service.
Feb 13 05:28:19.962408 sshd[4387]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:28:19.962662 sshd[4387]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 05:28:21.594347 sshd[4387]: Failed password for root from 159.223.87.202 port 45604 ssh2
Feb 13 05:28:22.709661 sshd[4387]: Received disconnect from 159.223.87.202 port 45604:11: Bye Bye [preauth]
Feb 13 05:28:22.709661 sshd[4387]: Disconnected from authenticating user root 159.223.87.202 port 45604 [preauth]
Feb 13 05:28:22.712189 systemd[1]: sshd@43-147.75.49.59:22-159.223.87.202:45604.service: Deactivated successfully.
Feb 13 05:28:26.146201 systemd[1]: Started sshd@44-147.75.49.59:22-119.91.214.145:33518.service.
Feb 13 05:28:40.352439 systemd[1]: Started sshd@45-147.75.49.59:22-75.51.10.234:33958.service.
Feb 13 05:28:40.677518 sshd[4398]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:28:43.191843 sshd[4398]: Failed password for root from 75.51.10.234 port 33958 ssh2
Feb 13 05:28:43.286843 sshd[4398]: Received disconnect from 75.51.10.234 port 33958:11: Bye Bye [preauth]
Feb 13 05:28:43.286843 sshd[4398]: Disconnected from authenticating user root 75.51.10.234 port 33958 [preauth]
Feb 13 05:28:43.289305 systemd[1]: sshd@45-147.75.49.59:22-75.51.10.234:33958.service: Deactivated successfully.
Feb 13 05:28:44.850188 systemd[1]: Started sshd@46-147.75.49.59:22-61.177.172.179:40786.service.
Feb 13 05:28:45.978210 sshd[4402]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root
Feb 13 05:28:48.180998 sshd[4402]: Failed password for root from 61.177.172.179 port 40786 ssh2
Feb 13 05:28:50.659177 sshd[4402]: Failed password for root from 61.177.172.179 port 40786 ssh2
Feb 13 05:28:53.818933 sshd[4402]: Failed password for root from 61.177.172.179 port 40786 ssh2
Feb 13 05:28:54.193383 sshd[4402]: Received disconnect from 61.177.172.179 port 40786:11:  [preauth]
Feb 13 05:28:54.193383 sshd[4402]: Disconnected from authenticating user root 61.177.172.179 port 40786 [preauth]
Feb 13 05:28:54.193956 sshd[4402]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root
Feb 13 05:28:54.195955 systemd[1]: sshd@46-147.75.49.59:22-61.177.172.179:40786.service: Deactivated successfully.
Feb 13 05:28:54.359239 systemd[1]: Started sshd@47-147.75.49.59:22-61.177.172.179:48379.service.
Feb 13 05:28:55.469544 sshd[4408]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root
Feb 13 05:28:57.712893 sshd[4408]: Failed password for root from 61.177.172.179 port 48379 ssh2
Feb 13 05:29:00.855986 sshd[4408]: Failed password for root from 61.177.172.179 port 48379 ssh2
Feb 13 05:29:02.691027 sshd[4360]: Timeout before authentication for 119.91.214.145 port 49442
Feb 13 05:29:02.692444 systemd[1]: sshd@39-147.75.49.59:22-119.91.214.145:49442.service: Deactivated successfully.
Feb 13 05:29:05.859994 sshd[4408]: Failed password for root from 61.177.172.179 port 48379 ssh2
Feb 13 05:29:05.970878 systemd[1]: Started sshd@48-147.75.49.59:22-218.92.0.112:48278.service.
Feb 13 05:29:06.252984 sshd[4408]: Received disconnect from 61.177.172.179 port 48379:11:  [preauth]
Feb 13 05:29:06.252984 sshd[4408]: Disconnected from authenticating user root 61.177.172.179 port 48379 [preauth]
Feb 13 05:29:06.253410 sshd[4408]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root
Feb 13 05:29:06.255487 systemd[1]: sshd@47-147.75.49.59:22-61.177.172.179:48379.service: Deactivated successfully.
Feb 13 05:29:06.423207 systemd[1]: Started sshd@49-147.75.49.59:22-61.177.172.179:14455.service.
Feb 13 05:29:06.909490 sshd[4412]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root
Feb 13 05:29:07.539854 sshd[4416]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root
Feb 13 05:29:08.661273 sshd[4412]: Failed password for root from 218.92.0.112 port 48278 ssh2
Feb 13 05:29:09.763022 sshd[4416]: Failed password for root from 61.177.172.179 port 14455 ssh2
Feb 13 05:29:11.784948 sshd[4412]: Failed password for root from 218.92.0.112 port 48278 ssh2
Feb 13 05:29:12.907631 sshd[4416]: Failed password for root from 61.177.172.179 port 14455 ssh2
Feb 13 05:29:14.242435 sshd[4412]: Failed password for root from 218.92.0.112 port 48278 ssh2
Feb 13 05:29:15.047570 sshd[4412]: Received disconnect from 218.92.0.112 port 48278:11:  [preauth]
Feb 13 05:29:15.047570 sshd[4412]: Disconnected from authenticating user root 218.92.0.112 port 48278 [preauth]
Feb 13 05:29:15.048134 sshd[4412]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root
Feb 13 05:29:15.050193 systemd[1]: sshd@48-147.75.49.59:22-218.92.0.112:48278.service: Deactivated successfully.
Feb 13 05:29:15.184367 systemd[1]: Started sshd@50-147.75.49.59:22-218.92.0.112:47461.service.
Feb 13 05:29:16.082498 sshd[4421]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root
Feb 13 05:29:17.912745 sshd[4416]: Failed password for root from 61.177.172.179 port 14455 ssh2
Feb 13 05:29:18.205812 sshd[4421]: Failed password for root from 218.92.0.112 port 47461 ssh2
Feb 13 05:29:18.325446 sshd[4416]: Received disconnect from 61.177.172.179 port 14455:11:  [preauth]
Feb 13 05:29:18.325446 sshd[4416]: Disconnected from authenticating user root 61.177.172.179 port 14455 [preauth]
Feb 13 05:29:18.326008 sshd[4416]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root
Feb 13 05:29:18.328060 systemd[1]: sshd@49-147.75.49.59:22-61.177.172.179:14455.service: Deactivated successfully.
Feb 13 05:29:20.520384 sshd[4421]: Failed password for root from 218.92.0.112 port 47461 ssh2
Feb 13 05:29:23.638341 sshd[4421]: Failed password for root from 218.92.0.112 port 47461 ssh2
Feb 13 05:29:24.200843 sshd[4421]: Received disconnect from 218.92.0.112 port 47461:11:  [preauth]
Feb 13 05:29:24.200843 sshd[4421]: Disconnected from authenticating user root 218.92.0.112 port 47461 [preauth]
Feb 13 05:29:24.201357 sshd[4421]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root
Feb 13 05:29:24.203441 systemd[1]: sshd@50-147.75.49.59:22-218.92.0.112:47461.service: Deactivated successfully.
Feb 13 05:29:24.370186 systemd[1]: Started sshd@51-147.75.49.59:22-218.92.0.112:50018.service.
Feb 13 05:29:25.340537 sshd[4428]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root
Feb 13 05:29:27.032808 sshd[4428]: Failed password for root from 218.92.0.112 port 50018 ssh2
Feb 13 05:29:27.566200 systemd[1]: Started sshd@52-147.75.49.59:22-159.223.87.202:36596.service.
Feb 13 05:29:28.542215 sshd[4431]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:29:29.830851 sshd[4428]: Failed password for root from 218.92.0.112 port 50018 ssh2
Feb 13 05:29:30.314021 sshd[4431]: Failed password for root from 159.223.87.202 port 36596 ssh2
Feb 13 05:29:31.282913 sshd[4431]: Received disconnect from 159.223.87.202 port 36596:11: Bye Bye [preauth]
Feb 13 05:29:31.282913 sshd[4431]: Disconnected from authenticating user root 159.223.87.202 port 36596 [preauth]
Feb 13 05:29:31.285385 systemd[1]: sshd@52-147.75.49.59:22-159.223.87.202:36596.service: Deactivated successfully.
Feb 13 05:29:32.826714 sshd[4428]: Failed password for root from 218.92.0.112 port 50018 ssh2
Feb 13 05:29:33.497900 sshd[4428]: Received disconnect from 218.92.0.112 port 50018:11:  [preauth]
Feb 13 05:29:33.497900 sshd[4428]: Disconnected from authenticating user root 218.92.0.112 port 50018 [preauth]
Feb 13 05:29:33.498446 sshd[4428]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root
Feb 13 05:29:33.500418 systemd[1]: sshd@51-147.75.49.59:22-218.92.0.112:50018.service: Deactivated successfully.
Feb 13 05:29:36.812621 systemd[1]: Started sshd@53-147.75.49.59:22-75.51.10.234:52672.service.
Feb 13 05:29:37.144374 sshd[4439]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:29:39.483795 sshd[4439]: Failed password for root from 75.51.10.234 port 52672 ssh2
Feb 13 05:29:39.755144 sshd[4439]: Received disconnect from 75.51.10.234 port 52672:11: Bye Bye [preauth]
Feb 13 05:29:39.755144 sshd[4439]: Disconnected from authenticating user root 75.51.10.234 port 52672 [preauth]
Feb 13 05:29:39.757027 systemd[1]: sshd@53-147.75.49.59:22-75.51.10.234:52672.service: Deactivated successfully.
Feb 13 05:29:49.973463 systemd[1]: Started sshd@54-147.75.49.59:22-119.91.214.145:58130.service.
Feb 13 05:30:08.912765 systemd[1]: Started sshd@55-147.75.49.59:22-218.92.0.25:22021.service.
Feb 13 05:30:09.981804 sshd[4448]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 13 05:30:12.381128 sshd[4448]: Failed password for root from 218.92.0.25 port 22021 ssh2
Feb 13 05:30:14.534969 sshd[4448]: Failed password for root from 218.92.0.25 port 22021 ssh2
Feb 13 05:30:17.015312 sshd[4448]: Failed password for root from 218.92.0.25 port 22021 ssh2
Feb 13 05:30:18.193120 sshd[4448]: Received disconnect from 218.92.0.25 port 22021:11: [preauth]
Feb 13 05:30:18.193120 sshd[4448]: Disconnected from authenticating user root 218.92.0.25 port 22021 [preauth]
Feb 13 05:30:18.193673 sshd[4448]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 13 05:30:18.195646 systemd[1]: sshd@55-147.75.49.59:22-218.92.0.25:22021.service: Deactivated successfully.
Feb 13 05:30:18.352729 systemd[1]: Started sshd@56-147.75.49.59:22-218.92.0.25:32578.service.
Feb 13 05:30:19.389392 sshd[4454]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 13 05:30:21.828798 sshd[4454]: Failed password for root from 218.92.0.25 port 32578 ssh2
Feb 13 05:30:24.300395 sshd[4454]: Failed password for root from 218.92.0.25 port 32578 ssh2
Feb 13 05:30:26.151420 sshd[4394]: Timeout before authentication for 119.91.214.145 port 33518
Feb 13 05:30:26.151874 systemd[1]: sshd@44-147.75.49.59:22-119.91.214.145:33518.service: Deactivated successfully.
Feb 13 05:30:26.302974 sshd[4454]: Failed password for root from 218.92.0.25 port 32578 ssh2
Feb 13 05:30:27.570366 sshd[4454]: Received disconnect from 218.92.0.25 port 32578:11: [preauth]
Feb 13 05:30:27.570366 sshd[4454]: Disconnected from authenticating user root 218.92.0.25 port 32578 [preauth]
Feb 13 05:30:27.570924 sshd[4454]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 13 05:30:27.572906 systemd[1]: sshd@56-147.75.49.59:22-218.92.0.25:32578.service: Deactivated successfully.
Feb 13 05:30:27.741603 systemd[1]: Started sshd@57-147.75.49.59:22-218.92.0.25:37529.service.
Feb 13 05:30:28.785399 sshd[4459]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 13 05:30:30.793499 sshd[4459]: Failed password for root from 218.92.0.25 port 37529 ssh2
Feb 13 05:30:31.495453 systemd[1]: Started sshd@58-147.75.49.59:22-75.51.10.234:43136.service.
Feb 13 05:30:31.824885 sshd[4462]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:30:33.934972 sshd[4459]: Failed password for root from 218.92.0.25 port 37529 ssh2
Feb 13 05:30:34.244392 sshd[4462]: Failed password for root from 75.51.10.234 port 43136 ssh2
Feb 13 05:30:34.434463 sshd[4462]: Received disconnect from 75.51.10.234 port 43136:11: Bye Bye [preauth]
Feb 13 05:30:34.434463 sshd[4462]: Disconnected from authenticating user root 75.51.10.234 port 43136 [preauth]
Feb 13 05:30:34.436994 systemd[1]: sshd@58-147.75.49.59:22-75.51.10.234:43136.service: Deactivated successfully.
Feb 13 05:30:35.222054 systemd[1]: Started sshd@59-147.75.49.59:22-159.223.87.202:55816.service.
Feb 13 05:30:36.078979 sshd[4459]: Failed password for root from 218.92.0.25 port 37529 ssh2
Feb 13 05:30:36.201260 sshd[4468]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:30:36.977927 sshd[4459]: Received disconnect from 218.92.0.25 port 37529:11: [preauth]
Feb 13 05:30:36.977927 sshd[4459]: Disconnected from authenticating user root 218.92.0.25 port 37529 [preauth]
Feb 13 05:30:36.978466 sshd[4459]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 13 05:30:36.980442 systemd[1]: sshd@57-147.75.49.59:22-218.92.0.25:37529.service: Deactivated successfully.
Feb 13 05:30:37.641663 sshd[4468]: Failed password for root from 159.223.87.202 port 55816 ssh2
Feb 13 05:30:38.945300 sshd[4468]: Received disconnect from 159.223.87.202 port 55816:11: Bye Bye [preauth]
Feb 13 05:30:38.945300 sshd[4468]: Disconnected from authenticating user root 159.223.87.202 port 55816 [preauth]
Feb 13 05:30:38.947958 systemd[1]: sshd@59-147.75.49.59:22-159.223.87.202:55816.service: Deactivated successfully.
Feb 13 05:30:41.745059 systemd[1]: Started sshd@60-147.75.49.59:22-218.92.0.76:55612.service.
Feb 13 05:30:43.637260 sshd[4473]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:30:45.705248 sshd[4473]: Failed password for root from 218.92.0.76 port 55612 ssh2
Feb 13 05:30:48.517985 sshd[4473]: Failed password for root from 218.92.0.76 port 55612 ssh2
Feb 13 05:30:50.993931 sshd[4473]: Failed password for root from 218.92.0.76 port 55612 ssh2
Feb 13 05:30:51.830157 sshd[4473]: Received disconnect from 218.92.0.76 port 55612:11: [preauth]
Feb 13 05:30:51.830157 sshd[4473]: Disconnected from authenticating user root 218.92.0.76 port 55612 [preauth]
Feb 13 05:30:51.830717 sshd[4473]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:30:51.832760 systemd[1]: sshd@60-147.75.49.59:22-218.92.0.76:55612.service: Deactivated successfully.
Feb 13 05:30:51.999910 systemd[1]: Started sshd@61-147.75.49.59:22-218.92.0.76:10662.service.
Feb 13 05:30:53.549273 sshd[4479]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:30:55.988759 sshd[4479]: Failed password for root from 218.92.0.76 port 10662 ssh2
Feb 13 05:30:58.467049 sshd[4479]: Failed password for root from 218.92.0.76 port 10662 ssh2
Feb 13 05:30:58.631488 sshd[4445]: Connection closed by 119.91.214.145 port 58130 [preauth]
Feb 13 05:30:58.633225 systemd[1]: sshd@54-147.75.49.59:22-119.91.214.145:58130.service: Deactivated successfully.
Feb 13 05:31:00.946184 sshd[4479]: Failed password for root from 218.92.0.76 port 10662 ssh2
Feb 13 05:31:01.759254 sshd[4479]: Received disconnect from 218.92.0.76 port 10662:11: [preauth]
Feb 13 05:31:01.759254 sshd[4479]: Disconnected from authenticating user root 218.92.0.76 port 10662 [preauth]
Feb 13 05:31:01.759781 sshd[4479]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:31:01.761821 systemd[1]: sshd@61-147.75.49.59:22-218.92.0.76:10662.service: Deactivated successfully.
Feb 13 05:31:01.952710 systemd[1]: Started sshd@62-147.75.49.59:22-218.92.0.76:13735.service.
Feb 13 05:31:02.258158 systemd[1]: Started sshd@63-147.75.49.59:22-119.91.214.145:39036.service.
Feb 13 05:31:03.071072 sshd[4485]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:31:03.186417 sshd[4488]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:31:04.883341 sshd[4485]: Failed password for root from 218.92.0.76 port 13735 ssh2
Feb 13 05:31:04.998643 sshd[4488]: Failed password for root from 119.91.214.145 port 39036 ssh2
Feb 13 05:31:05.916775 sshd[4488]: Received disconnect from 119.91.214.145 port 39036:11: Bye Bye [preauth]
Feb 13 05:31:05.916775 sshd[4488]: Disconnected from authenticating user root 119.91.214.145 port 39036 [preauth]
Feb 13 05:31:05.919293 systemd[1]: sshd@63-147.75.49.59:22-119.91.214.145:39036.service: Deactivated successfully.
Feb 13 05:31:08.579987 sshd[4485]: Failed password for root from 218.92.0.76 port 13735 ssh2
Feb 13 05:31:13.047081 sshd[4485]: Failed password for root from 218.92.0.76 port 13735 ssh2
Feb 13 05:31:13.885088 sshd[4485]: Received disconnect from 218.92.0.76 port 13735:11: [preauth]
Feb 13 05:31:13.885088 sshd[4485]: Disconnected from authenticating user root 218.92.0.76 port 13735 [preauth]
Feb 13 05:31:13.885822 sshd[4485]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:31:13.887877 systemd[1]: sshd@62-147.75.49.59:22-218.92.0.76:13735.service: Deactivated successfully.
Feb 13 05:31:24.099071 systemd[1]: Started sshd@64-147.75.49.59:22-75.51.10.234:33600.service.
Feb 13 05:31:24.425011 sshd[4495]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:31:26.121922 sshd[4495]: Failed password for root from 75.51.10.234 port 33600 ssh2
Feb 13 05:31:27.034995 sshd[4495]: Received disconnect from 75.51.10.234 port 33600:11: Bye Bye [preauth]
Feb 13 05:31:27.034995 sshd[4495]: Disconnected from authenticating user root 75.51.10.234 port 33600 [preauth]
Feb 13 05:31:27.037600 systemd[1]: sshd@64-147.75.49.59:22-75.51.10.234:33600.service: Deactivated successfully.
Feb 13 05:31:32.342133 systemd[1]: Started sshd@65-147.75.49.59:22-104.248.146.70:54812.service.
Feb 13 05:31:33.373433 sshd[4501]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root
Feb 13 05:31:35.305827 sshd[4501]: Failed password for root from 104.248.146.70 port 54812 ssh2
Feb 13 05:31:36.121067 sshd[4501]: Received disconnect from 104.248.146.70 port 54812:11: Bye Bye [preauth]
Feb 13 05:31:36.121067 sshd[4501]: Disconnected from authenticating user root 104.248.146.70 port 54812 [preauth]
Feb 13 05:31:36.123626 systemd[1]: sshd@65-147.75.49.59:22-104.248.146.70:54812.service: Deactivated successfully.
Feb 13 05:31:42.605156 systemd[1]: Started sshd@66-147.75.49.59:22-159.223.87.202:46806.service.
Feb 13 05:31:43.648839 sshd[4507]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:31:45.621547 sshd[4507]: Failed password for root from 159.223.87.202 port 46806 ssh2
Feb 13 05:31:46.402705 sshd[4507]: Received disconnect from 159.223.87.202 port 46806:11: Bye Bye [preauth]
Feb 13 05:31:46.402705 sshd[4507]: Disconnected from authenticating user root 159.223.87.202 port 46806 [preauth]
Feb 13 05:31:46.405245 systemd[1]: sshd@66-147.75.49.59:22-159.223.87.202:46806.service: Deactivated successfully.
Feb 13 05:31:59.831108 systemd[1]: Started sshd@67-147.75.49.59:22-119.91.214.145:51144.service.
Feb 13 05:32:00.803163 sshd[4513]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:32:02.438846 sshd[4513]: Failed password for root from 119.91.214.145 port 51144 ssh2
Feb 13 05:32:03.541831 sshd[4513]: Received disconnect from 119.91.214.145 port 51144:11: Bye Bye [preauth]
Feb 13 05:32:03.541831 sshd[4513]: Disconnected from authenticating user root 119.91.214.145 port 51144 [preauth]
Feb 13 05:32:03.544370 systemd[1]: sshd@67-147.75.49.59:22-119.91.214.145:51144.service: Deactivated successfully.
Feb 13 05:32:15.634052 systemd[1]: Started sshd@68-147.75.49.59:22-75.51.10.234:52288.service.
Feb 13 05:32:15.969639 sshd[4517]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:32:18.002248 sshd[4517]: Failed password for root from 75.51.10.234 port 52288 ssh2
Feb 13 05:32:18.581071 sshd[4517]: Received disconnect from 75.51.10.234 port 52288:11: Bye Bye [preauth]
Feb 13 05:32:18.581071 sshd[4517]: Disconnected from authenticating user root 75.51.10.234 port 52288 [preauth]
Feb 13 05:32:18.583598 systemd[1]: sshd@68-147.75.49.59:22-75.51.10.234:52288.service: Deactivated successfully.
Feb 13 05:32:47.226523 systemd[1]: Started sshd@69-147.75.49.59:22-159.223.87.202:37786.service.
Feb 13 05:32:48.272748 sshd[4525]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:32:49.833712 sshd[4525]: Failed password for root from 159.223.87.202 port 37786 ssh2
Feb 13 05:32:51.028894 sshd[4525]: Received disconnect from 159.223.87.202 port 37786:11: Bye Bye [preauth]
Feb 13 05:32:51.028894 sshd[4525]: Disconnected from authenticating user root 159.223.87.202 port 37786 [preauth]
Feb 13 05:32:51.031369 systemd[1]: sshd@69-147.75.49.59:22-159.223.87.202:37786.service: Deactivated successfully.
Feb 13 05:32:57.261862 systemd[1]: Started sshd@70-147.75.49.59:22-119.91.214.145:48942.service.
Feb 13 05:32:58.181297 sshd[4532]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:33:00.449863 sshd[4532]: Failed password for root from 119.91.214.145 port 48942 ssh2
Feb 13 05:33:00.917255 sshd[4532]: Received disconnect from 119.91.214.145 port 48942:11: Bye Bye [preauth]
Feb 13 05:33:00.917255 sshd[4532]: Disconnected from authenticating user root 119.91.214.145 port 48942 [preauth]
Feb 13 05:33:00.919888 systemd[1]: sshd@70-147.75.49.59:22-119.91.214.145:48942.service: Deactivated successfully.
Feb 13 05:33:09.046409 systemd[1]: Started sshd@71-147.75.49.59:22-75.51.10.234:42756.service.
Feb 13 05:33:09.380285 sshd[4536]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:33:11.493132 sshd[4536]: Failed password for root from 75.51.10.234 port 42756 ssh2
Feb 13 05:33:11.991444 sshd[4536]: Received disconnect from 75.51.10.234 port 42756:11: Bye Bye [preauth]
Feb 13 05:33:11.991444 sshd[4536]: Disconnected from authenticating user root 75.51.10.234 port 42756 [preauth]
Feb 13 05:33:11.994006 systemd[1]: sshd@71-147.75.49.59:22-75.51.10.234:42756.service: Deactivated successfully.
Feb 13 05:33:53.088052 systemd[1]: Started sshd@72-147.75.49.59:22-159.223.87.202:56982.service.
Feb 13 05:33:54.097374 sshd[4549]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:33:55.549118 systemd[1]: Started sshd@73-147.75.49.59:22-119.91.214.145:44310.service.
Feb 13 05:33:55.717849 sshd[4549]: Failed password for root from 159.223.87.202 port 56982 ssh2
Feb 13 05:33:56.468706 sshd[4552]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:33:56.844236 sshd[4549]: Received disconnect from 159.223.87.202 port 56982:11: Bye Bye [preauth]
Feb 13 05:33:56.844236 sshd[4549]: Disconnected from authenticating user root 159.223.87.202 port 56982 [preauth]
Feb 13 05:33:56.846930 systemd[1]: sshd@72-147.75.49.59:22-159.223.87.202:56982.service: Deactivated successfully.
Feb 13 05:33:58.365928 sshd[4552]: Failed password for root from 119.91.214.145 port 44310 ssh2
Feb 13 05:33:59.197614 sshd[4552]: Received disconnect from 119.91.214.145 port 44310:11: Bye Bye [preauth]
Feb 13 05:33:59.197614 sshd[4552]: Disconnected from authenticating user root 119.91.214.145 port 44310 [preauth]
Feb 13 05:33:59.200134 systemd[1]: sshd@73-147.75.49.59:22-119.91.214.145:44310.service: Deactivated successfully.
Feb 13 05:34:03.694015 systemd[1]: Started sshd@74-147.75.49.59:22-75.51.10.234:33222.service.
Feb 13 05:34:04.018108 sshd[4557]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:34:06.346910 sshd[4557]: Failed password for root from 75.51.10.234 port 33222 ssh2
Feb 13 05:34:06.627940 sshd[4557]: Received disconnect from 75.51.10.234 port 33222:11: Bye Bye [preauth]
Feb 13 05:34:06.627940 sshd[4557]: Disconnected from authenticating user root 75.51.10.234 port 33222 [preauth]
Feb 13 05:34:06.630401 systemd[1]: sshd@74-147.75.49.59:22-75.51.10.234:33222.service: Deactivated successfully.
Feb 13 05:34:11.770087 systemd[1]: Started sshd@75-147.75.49.59:22-141.98.11.90:47696.service.
Feb 13 05:34:13.208243 sshd[4563]: Invalid user teste from 141.98.11.90 port 47696
Feb 13 05:34:13.439690 sshd[4563]: pam_faillock(sshd:auth): User unknown
Feb 13 05:34:13.440838 sshd[4563]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 05:34:13.440930 sshd[4563]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.90
Feb 13 05:34:13.441946 sshd[4563]: pam_faillock(sshd:auth): User unknown
Feb 13 05:34:15.003299 sshd[4563]: Failed password for invalid user teste from 141.98.11.90 port 47696 ssh2
Feb 13 05:34:15.740876 sshd[4563]: Connection closed by invalid user teste 141.98.11.90 port 47696 [preauth]
Feb 13 05:34:15.743435 systemd[1]: sshd@75-147.75.49.59:22-141.98.11.90:47696.service: Deactivated successfully.
Feb 13 05:34:52.525106 systemd[1]: Started sshd@76-147.75.49.59:22-104.248.146.70:47938.service.
Feb 13 05:34:54.249947 systemd[1]: Started sshd@77-147.75.49.59:22-119.91.214.145:53674.service.
Feb 13 05:34:55.171684 sshd[4576]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:34:56.833023 sshd[4576]: Failed password for root from 119.91.214.145 port 53674 ssh2
Feb 13 05:34:57.908264 sshd[4576]: Received disconnect from 119.91.214.145 port 53674:11: Bye Bye [preauth]
Feb 13 05:34:57.908264 sshd[4576]: Disconnected from authenticating user root 119.91.214.145 port 53674 [preauth]
Feb 13 05:34:57.910720 systemd[1]: sshd@77-147.75.49.59:22-119.91.214.145:53674.service: Deactivated successfully.
Feb 13 05:34:59.470578 sshd[4573]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root
Feb 13 05:35:01.283578 systemd[1]: Started sshd@78-147.75.49.59:22-75.51.10.234:51936.service.
Feb 13 05:35:01.611283 sshd[4580]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:35:02.351222 sshd[4573]: Failed password for root from 104.248.146.70 port 47938 ssh2
Feb 13 05:35:03.764658 sshd[4580]: Failed password for root from 75.51.10.234 port 51936 ssh2
Feb 13 05:35:04.221145 sshd[4580]: Received disconnect from 75.51.10.234 port 51936:11: Bye Bye [preauth]
Feb 13 05:35:04.221145 sshd[4580]: Disconnected from authenticating user root 75.51.10.234 port 51936 [preauth]
Feb 13 05:35:04.223848 systemd[1]: sshd@78-147.75.49.59:22-75.51.10.234:51936.service: Deactivated successfully.
Feb 13 05:35:04.796224 sshd[4573]: Received disconnect from 104.248.146.70 port 47938:11: Bye Bye [preauth]
Feb 13 05:35:04.796224 sshd[4573]: Disconnected from authenticating user root 104.248.146.70 port 47938 [preauth]
Feb 13 05:35:04.798756 systemd[1]: sshd@76-147.75.49.59:22-104.248.146.70:47938.service: Deactivated successfully.
Feb 13 05:35:08.436180 systemd[1]: Started sshd@79-147.75.49.59:22-159.223.87.202:47960.service.
Feb 13 05:35:09.447797 sshd[4585]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:35:11.033797 sshd[4585]: Failed password for root from 159.223.87.202 port 47960 ssh2
Feb 13 05:35:12.193919 sshd[4585]: Received disconnect from 159.223.87.202 port 47960:11: Bye Bye [preauth]
Feb 13 05:35:12.193919 sshd[4585]: Disconnected from authenticating user root 159.223.87.202 port 47960 [preauth]
Feb 13 05:35:12.194836 systemd[1]: sshd@79-147.75.49.59:22-159.223.87.202:47960.service: Deactivated successfully.
Feb 13 05:35:53.415708 systemd[1]: Started sshd@80-147.75.49.59:22-119.91.214.145:46164.service.
Feb 13 05:35:54.348539 sshd[4597]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:35:55.778991 sshd[4597]: Failed password for root from 119.91.214.145 port 46164 ssh2
Feb 13 05:35:57.079962 sshd[4597]: Received disconnect from 119.91.214.145 port 46164:11: Bye Bye [preauth]
Feb 13 05:35:57.079962 sshd[4597]: Disconnected from authenticating user root 119.91.214.145 port 46164 [preauth]
Feb 13 05:35:57.082556 systemd[1]: sshd@80-147.75.49.59:22-119.91.214.145:46164.service: Deactivated successfully.
Feb 13 05:36:02.270680 systemd[1]: Started sshd@81-147.75.49.59:22-75.51.10.234:42428.service.
Feb 13 05:36:02.595837 sshd[4601]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:36:04.789554 sshd[4601]: Failed password for root from 75.51.10.234 port 42428 ssh2
Feb 13 05:36:05.205024 sshd[4601]: Received disconnect from 75.51.10.234 port 42428:11: Bye Bye [preauth]
Feb 13 05:36:05.205024 sshd[4601]: Disconnected from authenticating user root 75.51.10.234 port 42428 [preauth]
Feb 13 05:36:05.207665 systemd[1]: sshd@81-147.75.49.59:22-75.51.10.234:42428.service: Deactivated successfully.
Feb 13 05:36:16.279774 systemd[1]: Started sshd@82-147.75.49.59:22-159.223.87.202:38948.service.
Feb 13 05:36:17.589124 sshd[4605]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:36:19.842768 sshd[4605]: Failed password for root from 159.223.87.202 port 38948 ssh2
Feb 13 05:36:20.395988 sshd[4605]: Received disconnect from 159.223.87.202 port 38948:11: Bye Bye [preauth]
Feb 13 05:36:20.395988 sshd[4605]: Disconnected from authenticating user root 159.223.87.202 port 38948 [preauth]
Feb 13 05:36:20.398524 systemd[1]: sshd@82-147.75.49.59:22-159.223.87.202:38948.service: Deactivated successfully.
Feb 13 05:36:51.446839 systemd[1]: Started sshd@83-147.75.49.59:22-119.91.214.145:57340.service.
Feb 13 05:36:52.373632 sshd[4615]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:36:54.431553 sshd[4615]: Failed password for root from 119.91.214.145 port 57340 ssh2
Feb 13 05:36:55.103693 sshd[4615]: Received disconnect from 119.91.214.145 port 57340:11: Bye Bye [preauth]
Feb 13 05:36:55.103693 sshd[4615]: Disconnected from authenticating user root 119.91.214.145 port 57340 [preauth]
Feb 13 05:36:55.106285 systemd[1]: sshd@83-147.75.49.59:22-119.91.214.145:57340.service: Deactivated successfully.
Feb 13 05:37:00.542891 systemd[1]: Started sshd@84-147.75.49.59:22-75.51.10.234:32906.service.
Feb 13 05:37:00.873338 sshd[4619]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:37:02.364050 sshd[4619]: Failed password for root from 75.51.10.234 port 32906 ssh2
Feb 13 05:37:03.483940 sshd[4619]: Received disconnect from 75.51.10.234 port 32906:11: Bye Bye [preauth]
Feb 13 05:37:03.483940 sshd[4619]: Disconnected from authenticating user root 75.51.10.234 port 32906 [preauth]
Feb 13 05:37:03.486454 systemd[1]: sshd@84-147.75.49.59:22-75.51.10.234:32906.service: Deactivated successfully.
Feb 13 05:37:22.385093 systemd[1]: Started sshd@85-147.75.49.59:22-159.223.87.202:58168.service.
Feb 13 05:37:23.714322 sshd[4625]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:37:25.696769 sshd[4625]: Failed password for root from 159.223.87.202 port 58168 ssh2
Feb 13 05:37:26.525013 sshd[4625]: Received disconnect from 159.223.87.202 port 58168:11: Bye Bye [preauth]
Feb 13 05:37:26.525013 sshd[4625]: Disconnected from authenticating user root 159.223.87.202 port 58168 [preauth]
Feb 13 05:37:26.527542 systemd[1]: sshd@85-147.75.49.59:22-159.223.87.202:58168.service: Deactivated successfully.
Feb 13 05:37:48.924152 systemd[1]: Started sshd@86-147.75.49.59:22-119.91.214.145:48180.service.
Feb 13 05:37:49.838129 sshd[4633]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:37:51.719985 sshd[4633]: Failed password for root from 119.91.214.145 port 48180 ssh2
Feb 13 05:37:52.566371 sshd[4633]: Received disconnect from 119.91.214.145 port 48180:11: Bye Bye [preauth]
Feb 13 05:37:52.566371 sshd[4633]: Disconnected from authenticating user root 119.91.214.145 port 48180 [preauth]
Feb 13 05:37:52.569021 systemd[1]: sshd@86-147.75.49.59:22-119.91.214.145:48180.service: Deactivated successfully.
Feb 13 05:37:54.389110 systemd[1]: Started sshd@87-147.75.49.59:22-75.51.10.234:51606.service.
Feb 13 05:37:54.716052 sshd[4637]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:37:56.617965 sshd[4637]: Failed password for root from 75.51.10.234 port 51606 ssh2
Feb 13 05:37:57.325225 sshd[4637]: Received disconnect from 75.51.10.234 port 51606:11: Bye Bye [preauth]
Feb 13 05:37:57.325225 sshd[4637]: Disconnected from authenticating user root 75.51.10.234 port 51606 [preauth]
Feb 13 05:37:57.327827 systemd[1]: sshd@87-147.75.49.59:22-75.51.10.234:51606.service: Deactivated successfully.
Feb 13 05:38:07.247484 systemd[1]: Started sshd@88-147.75.49.59:22-104.248.146.70:57350.service.
Feb 13 05:38:08.447237 sshd[4641]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root
Feb 13 05:38:09.938199 sshd[4641]: Failed password for root from 104.248.146.70 port 57350 ssh2
Feb 13 05:38:11.195800 sshd[4641]: Received disconnect from 104.248.146.70 port 57350:11: Bye Bye [preauth]
Feb 13 05:38:11.195800 sshd[4641]: Disconnected from authenticating user root 104.248.146.70 port 57350 [preauth]
Feb 13 05:38:11.196898 systemd[1]: sshd@88-147.75.49.59:22-104.248.146.70:57350.service: Deactivated successfully.
Feb 13 05:38:27.421545 systemd[1]: Started sshd@89-147.75.49.59:22-159.223.87.202:49156.service.
Feb 13 05:38:28.429682 sshd[4649]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:38:30.666205 sshd[4649]: Failed password for root from 159.223.87.202 port 49156 ssh2
Feb 13 05:38:31.176136 sshd[4649]: Received disconnect from 159.223.87.202 port 49156:11: Bye Bye [preauth]
Feb 13 05:38:31.176136 sshd[4649]: Disconnected from authenticating user root 159.223.87.202 port 49156 [preauth]
Feb 13 05:38:31.178832 systemd[1]: sshd@89-147.75.49.59:22-159.223.87.202:49156.service: Deactivated successfully.
Feb 13 05:38:45.978774 systemd[1]: Started sshd@90-147.75.49.59:22-119.91.214.145:57696.service.
Feb 13 05:38:47.006541 sshd[4655]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:38:47.273874 systemd[1]: Started sshd@91-147.75.49.59:22-75.51.10.234:42066.service.
Feb 13 05:38:47.604302 sshd[4658]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:38:49.185032 sshd[4655]: Failed password for root from 119.91.214.145 port 57696 ssh2
Feb 13 05:38:49.757032 sshd[4655]: Received disconnect from 119.91.214.145 port 57696:11: Bye Bye [preauth]
Feb 13 05:38:49.757032 sshd[4655]: Disconnected from authenticating user root 119.91.214.145 port 57696 [preauth]
Feb 13 05:38:49.757787 systemd[1]: sshd@90-147.75.49.59:22-119.91.214.145:57696.service: Deactivated successfully.
Feb 13 05:38:49.782661 sshd[4658]: Failed password for root from 75.51.10.234 port 42066 ssh2
Feb 13 05:38:50.214508 sshd[4658]: Received disconnect from 75.51.10.234 port 42066:11: Bye Bye [preauth]
Feb 13 05:38:50.214508 sshd[4658]: Disconnected from authenticating user root 75.51.10.234 port 42066 [preauth]
Feb 13 05:38:50.217223 systemd[1]: sshd@91-147.75.49.59:22-75.51.10.234:42066.service: Deactivated successfully.
Feb 13 05:39:35.899187 systemd[1]: Started sshd@92-147.75.49.59:22-159.223.87.202:40148.service.
Feb 13 05:39:37.230867 sshd[4672]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:39:38.751099 systemd[1]: Started sshd@93-147.75.49.59:22-61.239.150.206:47779.service.
Feb 13 05:39:39.274043 sshd[4672]: Failed password for root from 159.223.87.202 port 40148 ssh2
Feb 13 05:39:39.458132 systemd[1]: Started sshd@94-147.75.49.59:22-75.51.10.234:60762.service.
Feb 13 05:39:39.789171 sshd[4678]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:39:40.041854 sshd[4672]: Received disconnect from 159.223.87.202 port 40148:11: Bye Bye [preauth]
Feb 13 05:39:40.041854 sshd[4672]: Disconnected from authenticating user root 159.223.87.202 port 40148 [preauth]
Feb 13 05:39:40.044641 systemd[1]: sshd@92-147.75.49.59:22-159.223.87.202:40148.service: Deactivated successfully.
Feb 13 05:39:40.341995 sshd[4675]: Invalid user admin from 61.239.150.206 port 47779
Feb 13 05:39:40.348094 sshd[4675]: pam_faillock(sshd:auth): User unknown
Feb 13 05:39:40.349241 sshd[4675]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 05:39:40.349331 sshd[4675]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.239.150.206
Feb 13 05:39:40.350233 sshd[4675]: pam_faillock(sshd:auth): User unknown
Feb 13 05:39:42.107702 sshd[4678]: Failed password for root from 75.51.10.234 port 60762 ssh2
Feb 13 05:39:42.137431 sshd[4675]: Failed password for invalid user admin from 61.239.150.206 port 47779 ssh2
Feb 13 05:39:42.405756 sshd[4678]: Received disconnect from 75.51.10.234 port 60762:11: Bye Bye [preauth]
Feb 13 05:39:42.405756 sshd[4678]: Disconnected from authenticating user root 75.51.10.234 port 60762 [preauth]
Feb 13 05:39:42.408137 systemd[1]: sshd@94-147.75.49.59:22-75.51.10.234:60762.service: Deactivated successfully.
Feb 13 05:39:44.299652 sshd[4682]: pam_faillock(sshd:auth): User unknown
Feb 13 05:39:44.304323 sshd[4675]: Postponed keyboard-interactive for invalid user admin from 61.239.150.206 port 47779 ssh2 [preauth]
Feb 13 05:39:44.322591 systemd[1]: Started sshd@95-147.75.49.59:22-119.91.214.145:43724.service.
Feb 13 05:39:44.777453 sshd[4682]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 05:39:44.778175 sshd[4682]: pam_faillock(sshd:auth): User unknown
Feb 13 05:39:45.269925 sshd[4684]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:39:46.781015 sshd[4675]: PAM: Permission denied for illegal user admin from 61.239.150.206
Feb 13 05:39:46.781854 sshd[4675]: Failed keyboard-interactive/pam for invalid user admin from 61.239.150.206 port 47779 ssh2
Feb 13 05:39:47.181486 sshd[4675]: Connection closed by invalid user admin 61.239.150.206 port 47779 [preauth]
Feb 13 05:39:47.182377 systemd[1]: sshd@93-147.75.49.59:22-61.239.150.206:47779.service: Deactivated successfully.
Feb 13 05:39:47.412987 sshd[4684]: Failed password for root from 119.91.214.145 port 43724 ssh2
Feb 13 05:39:48.004436 sshd[4684]: Received disconnect from 119.91.214.145 port 43724:11: Bye Bye [preauth]
Feb 13 05:39:48.004436 sshd[4684]: Disconnected from authenticating user root 119.91.214.145 port 43724 [preauth]
Feb 13 05:39:48.007044 systemd[1]: sshd@95-147.75.49.59:22-119.91.214.145:43724.service: Deactivated successfully.
Feb 13 05:40:34.455788 systemd[1]: Started sshd@96-147.75.49.59:22-75.51.10.234:51230.service.
Feb 13 05:40:34.788494 sshd[4696]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:40:36.655614 sshd[4696]: Failed password for root from 75.51.10.234 port 51230 ssh2
Feb 13 05:40:37.399564 sshd[4696]: Received disconnect from 75.51.10.234 port 51230:11: Bye Bye [preauth]
Feb 13 05:40:37.399564 sshd[4696]: Disconnected from authenticating user root 75.51.10.234 port 51230 [preauth]
Feb 13 05:40:37.402141 systemd[1]: sshd@96-147.75.49.59:22-75.51.10.234:51230.service: Deactivated successfully.
Feb 13 05:40:41.691957 systemd[1]: Started sshd@97-147.75.49.59:22-159.223.87.202:59368.service.
Feb 13 05:40:42.479224 systemd[1]: Started sshd@98-147.75.49.59:22-119.91.214.145:37970.service.
Feb 13 05:40:42.717396 sshd[4701]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:40:43.437145 sshd[4704]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:40:44.685152 sshd[4701]: Failed password for root from 159.223.87.202 port 59368 ssh2
Feb 13 05:40:45.464260 sshd[4701]: Received disconnect from 159.223.87.202 port 59368:11: Bye Bye [preauth]
Feb 13 05:40:45.464260 sshd[4701]: Disconnected from authenticating user root 159.223.87.202 port 59368 [preauth]
Feb 13 05:40:45.466767 systemd[1]: sshd@97-147.75.49.59:22-159.223.87.202:59368.service: Deactivated successfully.
Feb 13 05:40:45.540279 sshd[4704]: Failed password for root from 119.91.214.145 port 37970 ssh2
Feb 13 05:40:46.173976 sshd[4704]: Received disconnect from 119.91.214.145 port 37970:11: Bye Bye [preauth]
Feb 13 05:40:46.173976 sshd[4704]: Disconnected from authenticating user root 119.91.214.145 port 37970 [preauth]
Feb 13 05:40:46.174712 systemd[1]: sshd@98-147.75.49.59:22-119.91.214.145:37970.service: Deactivated successfully.
Feb 13 05:41:28.979301 systemd[1]: Started sshd@99-147.75.49.59:22-75.51.10.234:41700.service.
Feb 13 05:41:29.305951 sshd[4714]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:41:31.725193 sshd[4714]: Failed password for root from 75.51.10.234 port 41700 ssh2
Feb 13 05:41:31.916070 sshd[4714]: Received disconnect from 75.51.10.234 port 41700:11: Bye Bye [preauth]
Feb 13 05:41:31.916070 sshd[4714]: Disconnected from authenticating user root 75.51.10.234 port 41700 [preauth]
Feb 13 05:41:31.918613 systemd[1]: sshd@99-147.75.49.59:22-75.51.10.234:41700.service: Deactivated successfully.
Feb 13 05:41:41.934529 systemd[1]: Started sshd@100-147.75.49.59:22-119.91.214.145:47908.service.
Feb 13 05:41:42.861082 sshd[4720]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:41:45.063820 sshd[4720]: Failed password for root from 119.91.214.145 port 47908 ssh2
Feb 13 05:41:45.591314 sshd[4720]: Received disconnect from 119.91.214.145 port 47908:11: Bye Bye [preauth]
Feb 13 05:41:45.591314 sshd[4720]: Disconnected from authenticating user root 119.91.214.145 port 47908 [preauth]
Feb 13 05:41:45.593810 systemd[1]: sshd@100-147.75.49.59:22-119.91.214.145:47908.service: Deactivated successfully.
Feb 13 05:41:48.016354 systemd[1]: Started sshd@101-147.75.49.59:22-159.223.87.202:50356.service.
Feb 13 05:41:49.037214 sshd[4726]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:41:50.869232 sshd[4726]: Failed password for root from 159.223.87.202 port 50356 ssh2
Feb 13 05:41:51.783910 sshd[4726]: Received disconnect from 159.223.87.202 port 50356:11: Bye Bye [preauth]
Feb 13 05:41:51.783910 sshd[4726]: Disconnected from authenticating user root 159.223.87.202 port 50356 [preauth]
Feb 13 05:41:51.786424 systemd[1]: sshd@101-147.75.49.59:22-159.223.87.202:50356.service: Deactivated successfully.
Feb 13 05:42:27.499869 systemd[1]: Started sshd@102-147.75.49.59:22-75.51.10.234:60412.service.
Feb 13 05:42:27.823250 sshd[4732]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:42:29.110054 systemd[1]: Started sshd@103-147.75.49.59:22-104.248.146.70:39748.service.
Feb 13 05:42:29.871082 sshd[4732]: Failed password for root from 75.51.10.234 port 60412 ssh2
Feb 13 05:42:30.219066 sshd[4735]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root
Feb 13 05:42:30.432568 sshd[4732]: Received disconnect from 75.51.10.234 port 60412:11: Bye Bye [preauth]
Feb 13 05:42:30.432568 sshd[4732]: Disconnected from authenticating user root 75.51.10.234 port 60412 [preauth]
Feb 13 05:42:30.435094 systemd[1]: sshd@102-147.75.49.59:22-75.51.10.234:60412.service: Deactivated successfully.
Feb 13 05:42:32.346842 sshd[4735]: Failed password for root from 104.248.146.70 port 39748 ssh2
Feb 13 05:42:32.957980 sshd[4735]: Received disconnect from 104.248.146.70 port 39748:11: Bye Bye [preauth]
Feb 13 05:42:32.957980 sshd[4735]: Disconnected from authenticating user root 104.248.146.70 port 39748 [preauth]
Feb 13 05:42:32.960530 systemd[1]: sshd@103-147.75.49.59:22-104.248.146.70:39748.service: Deactivated successfully.
Feb 13 05:42:40.699718 systemd[1]: Started sshd@104-147.75.49.59:22-119.91.214.145:42738.service.
Feb 13 05:42:41.640412 sshd[4742]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:42:44.275629 sshd[4742]: Failed password for root from 119.91.214.145 port 42738 ssh2
Feb 13 05:42:46.943242 sshd[4742]: Received disconnect from 119.91.214.145 port 42738:11: Bye Bye [preauth]
Feb 13 05:42:46.943242 sshd[4742]: Disconnected from authenticating user root 119.91.214.145 port 42738 [preauth]
Feb 13 05:42:46.945760 systemd[1]: sshd@104-147.75.49.59:22-119.91.214.145:42738.service: Deactivated successfully.
Feb 13 05:42:53.548879 systemd[1]: Started sshd@105-147.75.49.59:22-159.223.87.202:41344.service.
Feb 13 05:42:54.567909 sshd[4748]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:42:56.655870 sshd[4748]: Failed password for root from 159.223.87.202 port 41344 ssh2
Feb 13 05:42:57.314647 sshd[4748]: Received disconnect from 159.223.87.202 port 41344:11: Bye Bye [preauth]
Feb 13 05:42:57.314647 sshd[4748]: Disconnected from authenticating user root 159.223.87.202 port 41344 [preauth]
Feb 13 05:42:57.317172 systemd[1]: sshd@105-147.75.49.59:22-159.223.87.202:41344.service: Deactivated successfully.
Feb 13 05:43:04.444717 systemd[1]: Started sshd@106-147.75.49.59:22-139.178.68.195:44158.service.
Feb 13 05:43:04.476058 sshd[4753]: Accepted publickey for core from 139.178.68.195 port 44158 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:04.476963 sshd[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:04.480358 systemd-logind[1458]: New session 8 of user core.
Feb 13 05:43:04.481106 systemd[1]: Started session-8.scope.
Feb 13 05:43:04.614792 sshd[4753]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:04.616242 systemd[1]: sshd@106-147.75.49.59:22-139.178.68.195:44158.service: Deactivated successfully.
Feb 13 05:43:04.616676 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 05:43:04.617088 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit.
Feb 13 05:43:04.617543 systemd-logind[1458]: Removed session 8.
Feb 13 05:43:09.625796 systemd[1]: Started sshd@107-147.75.49.59:22-139.178.68.195:38164.service.
Feb 13 05:43:09.657639 sshd[4788]: Accepted publickey for core from 139.178.68.195 port 38164 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:09.658540 sshd[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:09.661806 systemd-logind[1458]: New session 9 of user core.
Feb 13 05:43:09.662619 systemd[1]: Started session-9.scope.
Feb 13 05:43:09.754371 sshd[4788]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:09.755834 systemd[1]: sshd@107-147.75.49.59:22-139.178.68.195:38164.service: Deactivated successfully.
Feb 13 05:43:09.756259 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 05:43:09.756552 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit.
Feb 13 05:43:09.757195 systemd-logind[1458]: Removed session 9.
Feb 13 05:43:14.764269 systemd[1]: Started sshd@108-147.75.49.59:22-139.178.68.195:38168.service.
Feb 13 05:43:14.795878 sshd[4815]: Accepted publickey for core from 139.178.68.195 port 38168 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:14.796801 sshd[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:14.800245 systemd-logind[1458]: New session 10 of user core.
Feb 13 05:43:14.800975 systemd[1]: Started session-10.scope.
Feb 13 05:43:14.892757 sshd[4815]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:14.894234 systemd[1]: sshd@108-147.75.49.59:22-139.178.68.195:38168.service: Deactivated successfully.
Feb 13 05:43:14.894665 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 05:43:14.895096 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit.
Feb 13 05:43:14.895555 systemd-logind[1458]: Removed session 10.
Feb 13 05:43:19.901978 systemd[1]: Started sshd@109-147.75.49.59:22-139.178.68.195:52676.service.
Feb 13 05:43:19.933446 sshd[4844]: Accepted publickey for core from 139.178.68.195 port 52676 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:19.934375 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:19.937532 systemd-logind[1458]: New session 11 of user core.
Feb 13 05:43:19.938301 systemd[1]: Started session-11.scope.
Feb 13 05:43:20.027666 sshd[4844]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:20.029605 systemd[1]: sshd@109-147.75.49.59:22-139.178.68.195:52676.service: Deactivated successfully.
Feb 13 05:43:20.029975 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 05:43:20.030371 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit.
Feb 13 05:43:20.030959 systemd[1]: Started sshd@110-147.75.49.59:22-139.178.68.195:52688.service.
Feb 13 05:43:20.031360 systemd-logind[1458]: Removed session 11.
Feb 13 05:43:20.063063 sshd[4870]: Accepted publickey for core from 139.178.68.195 port 52688 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:20.063852 sshd[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:20.066841 systemd-logind[1458]: New session 12 of user core.
Feb 13 05:43:20.067378 systemd[1]: Started session-12.scope.
Feb 13 05:43:20.477742 sshd[4870]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:20.479905 systemd[1]: sshd@110-147.75.49.59:22-139.178.68.195:52688.service: Deactivated successfully.
Feb 13 05:43:20.480241 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 05:43:20.480530 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit.
Feb 13 05:43:20.481128 systemd[1]: Started sshd@111-147.75.49.59:22-139.178.68.195:52702.service.
Feb 13 05:43:20.481441 systemd-logind[1458]: Removed session 12.
Feb 13 05:43:20.513689 sshd[4895]: Accepted publickey for core from 139.178.68.195 port 52702 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:20.517103 sshd[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:20.527680 systemd-logind[1458]: New session 13 of user core.
Feb 13 05:43:20.530414 systemd[1]: Started session-13.scope.
Feb 13 05:43:20.694264 sshd[4895]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:20.695750 systemd[1]: sshd@111-147.75.49.59:22-139.178.68.195:52702.service: Deactivated successfully.
Feb 13 05:43:20.696210 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 05:43:20.696516 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit.
Feb 13 05:43:20.696996 systemd-logind[1458]: Removed session 13.
Feb 13 05:43:25.237622 systemd[1]: Started sshd@112-147.75.49.59:22-75.51.10.234:50890.service.
Feb 13 05:43:25.580503 sshd[4922]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:43:25.703927 systemd[1]: Started sshd@113-147.75.49.59:22-139.178.68.195:52714.service.
Feb 13 05:43:25.735084 sshd[4925]: Accepted publickey for core from 139.178.68.195 port 52714 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:25.735996 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:25.739467 systemd-logind[1458]: New session 14 of user core.
Feb 13 05:43:25.740220 systemd[1]: Started session-14.scope.
Feb 13 05:43:25.831684 sshd[4925]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:25.833180 systemd[1]: sshd@113-147.75.49.59:22-139.178.68.195:52714.service: Deactivated successfully.
Feb 13 05:43:25.833613 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 05:43:25.833968 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
Feb 13 05:43:25.834408 systemd-logind[1458]: Removed session 14.
Feb 13 05:43:27.587903 sshd[4922]: Failed password for root from 75.51.10.234 port 50890 ssh2
Feb 13 05:43:28.190065 sshd[4922]: Received disconnect from 75.51.10.234 port 50890:11: Bye Bye [preauth]
Feb 13 05:43:28.190065 sshd[4922]: Disconnected from authenticating user root 75.51.10.234 port 50890 [preauth]
Feb 13 05:43:28.191034 systemd[1]: sshd@112-147.75.49.59:22-75.51.10.234:50890.service: Deactivated successfully.
Feb 13 05:43:30.841138 systemd[1]: Started sshd@114-147.75.49.59:22-139.178.68.195:32842.service.
Feb 13 05:43:30.872595 sshd[4952]: Accepted publickey for core from 139.178.68.195 port 32842 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:30.873563 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:30.876962 systemd-logind[1458]: New session 15 of user core.
Feb 13 05:43:30.877689 systemd[1]: Started session-15.scope.
Feb 13 05:43:30.958959 sshd[4952]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:30.960336 systemd[1]: sshd@114-147.75.49.59:22-139.178.68.195:32842.service: Deactivated successfully.
Feb 13 05:43:30.960778 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 05:43:30.961190 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
Feb 13 05:43:30.961680 systemd-logind[1458]: Removed session 15.
Feb 13 05:43:35.969244 systemd[1]: Started sshd@115-147.75.49.59:22-139.178.68.195:32846.service.
Feb 13 05:43:36.001441 sshd[4979]: Accepted publickey for core from 139.178.68.195 port 32846 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:36.004757 sshd[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:36.015449 systemd-logind[1458]: New session 16 of user core.
Feb 13 05:43:36.018433 systemd[1]: Started session-16.scope.
Feb 13 05:43:36.123431 sshd[4979]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:36.124900 systemd[1]: sshd@115-147.75.49.59:22-139.178.68.195:32846.service: Deactivated successfully.
Feb 13 05:43:36.125326 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 05:43:36.125669 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
Feb 13 05:43:36.126242 systemd-logind[1458]: Removed session 16.
Feb 13 05:43:39.890733 systemd[1]: Started sshd@116-147.75.49.59:22-119.91.214.145:56086.service.
Feb 13 05:43:40.809871 sshd[5004]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:43:41.134079 systemd[1]: Started sshd@117-147.75.49.59:22-139.178.68.195:47522.service.
Feb 13 05:43:41.165918 sshd[5007]: Accepted publickey for core from 139.178.68.195 port 47522 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:41.166840 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:41.169940 systemd-logind[1458]: New session 17 of user core.
Feb 13 05:43:41.170639 systemd[1]: Started session-17.scope.
Feb 13 05:43:41.262655 sshd[5007]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:41.264369 systemd[1]: sshd@117-147.75.49.59:22-139.178.68.195:47522.service: Deactivated successfully.
Feb 13 05:43:41.264872 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 05:43:41.265332 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
Feb 13 05:43:41.265958 systemd-logind[1458]: Removed session 17.
Feb 13 05:43:42.545961 sshd[5004]: Failed password for root from 119.91.214.145 port 56086 ssh2
Feb 13 05:43:43.538757 sshd[5004]: Received disconnect from 119.91.214.145 port 56086:11: Bye Bye [preauth]
Feb 13 05:43:43.538757 sshd[5004]: Disconnected from authenticating user root 119.91.214.145 port 56086 [preauth]
Feb 13 05:43:43.541389 systemd[1]: sshd@116-147.75.49.59:22-119.91.214.145:56086.service: Deactivated successfully.
Feb 13 05:43:46.272683 systemd[1]: Started sshd@118-147.75.49.59:22-139.178.68.195:43006.service.
Feb 13 05:43:46.303974 sshd[5033]: Accepted publickey for core from 139.178.68.195 port 43006 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:46.304897 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:46.308176 systemd-logind[1458]: New session 18 of user core.
Feb 13 05:43:46.308897 systemd[1]: Started session-18.scope.
Feb 13 05:43:46.410543 sshd[5033]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:46.416573 systemd[1]: sshd@118-147.75.49.59:22-139.178.68.195:43006.service: Deactivated successfully.
Feb 13 05:43:46.418399 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 05:43:46.420193 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
Feb 13 05:43:46.422532 systemd-logind[1458]: Removed session 18.
Feb 13 05:43:51.421518 systemd[1]: Started sshd@119-147.75.49.59:22-139.178.68.195:43008.service.
Feb 13 05:43:51.457483 sshd[5060]: Accepted publickey for core from 139.178.68.195 port 43008 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:51.458186 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:51.460669 systemd-logind[1458]: New session 19 of user core.
Feb 13 05:43:51.461236 systemd[1]: Started session-19.scope.
Feb 13 05:43:51.551017 sshd[5060]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:51.552340 systemd[1]: sshd@119-147.75.49.59:22-139.178.68.195:43008.service: Deactivated successfully.
Feb 13 05:43:51.552774 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 05:43:51.553121 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
Feb 13 05:43:51.553545 systemd-logind[1458]: Removed session 19.
Feb 13 05:43:56.560552 systemd[1]: Started sshd@120-147.75.49.59:22-139.178.68.195:51722.service.
Feb 13 05:43:56.592295 sshd[5089]: Accepted publickey for core from 139.178.68.195 port 51722 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:43:56.593219 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:43:56.596531 systemd-logind[1458]: New session 20 of user core.
Feb 13 05:43:56.597315 systemd[1]: Started session-20.scope.
Feb 13 05:43:56.687670 sshd[5089]: pam_unix(sshd:session): session closed for user core
Feb 13 05:43:56.689414 systemd[1]: sshd@120-147.75.49.59:22-139.178.68.195:51722.service: Deactivated successfully.
Feb 13 05:43:56.689929 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 05:43:56.690392 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
Feb 13 05:43:56.691163 systemd-logind[1458]: Removed session 20.
Feb 13 05:43:57.879390 systemd[1]: Started sshd@121-147.75.49.59:22-159.223.87.202:60564.service.
Feb 13 05:43:58.892000 sshd[5115]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:44:01.432014 sshd[5115]: Failed password for root from 159.223.87.202 port 60564 ssh2
Feb 13 05:44:01.639929 sshd[5115]: Received disconnect from 159.223.87.202 port 60564:11: Bye Bye [preauth]
Feb 13 05:44:01.639929 sshd[5115]: Disconnected from authenticating user root 159.223.87.202 port 60564 [preauth]
Feb 13 05:44:01.642513 systemd[1]: sshd@121-147.75.49.59:22-159.223.87.202:60564.service: Deactivated successfully.
Feb 13 05:44:01.698829 systemd[1]: Started sshd@122-147.75.49.59:22-139.178.68.195:51728.service.
Feb 13 05:44:01.734457 sshd[5119]: Accepted publickey for core from 139.178.68.195 port 51728 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:01.737502 sshd[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:01.747754 systemd-logind[1458]: New session 21 of user core.
Feb 13 05:44:01.751023 systemd[1]: Started session-21.scope.
Feb 13 05:44:01.857202 sshd[5119]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:01.858627 systemd[1]: sshd@122-147.75.49.59:22-139.178.68.195:51728.service: Deactivated successfully.
Feb 13 05:44:01.859050 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 05:44:01.859438 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
Feb 13 05:44:01.860108 systemd-logind[1458]: Removed session 21.
Feb 13 05:44:06.867479 systemd[1]: Started sshd@123-147.75.49.59:22-139.178.68.195:51584.service.
Feb 13 05:44:06.898961 sshd[5144]: Accepted publickey for core from 139.178.68.195 port 51584 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:06.899882 sshd[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:06.903217 systemd-logind[1458]: New session 22 of user core.
Feb 13 05:44:06.903933 systemd[1]: Started session-22.scope.
Feb 13 05:44:06.993160 sshd[5144]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:06.994731 systemd[1]: sshd@123-147.75.49.59:22-139.178.68.195:51584.service: Deactivated successfully.
Feb 13 05:44:06.995201 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 05:44:06.995510 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
Feb 13 05:44:06.996184 systemd-logind[1458]: Removed session 22.
Feb 13 05:44:12.002068 systemd[1]: Started sshd@124-147.75.49.59:22-139.178.68.195:51598.service.
Feb 13 05:44:12.033072 sshd[5169]: Accepted publickey for core from 139.178.68.195 port 51598 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:12.034011 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:12.037529 systemd-logind[1458]: New session 23 of user core.
Feb 13 05:44:12.038299 systemd[1]: Started session-23.scope.
Feb 13 05:44:12.131025 sshd[5169]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:12.132458 systemd[1]: sshd@124-147.75.49.59:22-139.178.68.195:51598.service: Deactivated successfully.
Feb 13 05:44:12.132890 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 05:44:12.133255 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit.
Feb 13 05:44:12.133765 systemd-logind[1458]: Removed session 23.
Feb 13 05:44:17.140526 systemd[1]: Started sshd@125-147.75.49.59:22-139.178.68.195:57298.service.
Feb 13 05:44:17.171607 sshd[5192]: Accepted publickey for core from 139.178.68.195 port 57298 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:17.172613 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:17.176032 systemd-logind[1458]: New session 24 of user core.
Feb 13 05:44:17.176998 systemd[1]: Started session-24.scope.
Feb 13 05:44:17.264931 sshd[5192]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:17.266324 systemd[1]: sshd@125-147.75.49.59:22-139.178.68.195:57298.service: Deactivated successfully.
Feb 13 05:44:17.266762 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 05:44:17.267125 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit.
Feb 13 05:44:17.267500 systemd-logind[1458]: Removed session 24.
Feb 13 05:44:18.594764 systemd[1]: Started sshd@126-147.75.49.59:22-75.51.10.234:41354.service.
Feb 13 05:44:18.918800 sshd[5219]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:44:20.870860 sshd[5219]: Failed password for root from 75.51.10.234 port 41354 ssh2
Feb 13 05:44:21.528062 sshd[5219]: Received disconnect from 75.51.10.234 port 41354:11: Bye Bye [preauth]
Feb 13 05:44:21.528062 sshd[5219]: Disconnected from authenticating user root 75.51.10.234 port 41354 [preauth]
Feb 13 05:44:21.530600 systemd[1]: sshd@126-147.75.49.59:22-75.51.10.234:41354.service: Deactivated successfully.
Feb 13 05:44:22.277726 systemd[1]: Started sshd@127-147.75.49.59:22-139.178.68.195:57302.service.
Feb 13 05:44:22.312774 sshd[5223]: Accepted publickey for core from 139.178.68.195 port 57302 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:22.313603 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:22.316432 systemd-logind[1458]: New session 25 of user core.
Feb 13 05:44:22.317301 systemd[1]: Started session-25.scope.
Feb 13 05:44:22.408108 sshd[5223]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:22.409567 systemd[1]: sshd@127-147.75.49.59:22-139.178.68.195:57302.service: Deactivated successfully.
Feb 13 05:44:22.410003 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 05:44:22.410409 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit.
Feb 13 05:44:22.411025 systemd-logind[1458]: Removed session 25.
Feb 13 05:44:27.418768 systemd[1]: Started sshd@128-147.75.49.59:22-139.178.68.195:34086.service.
Feb 13 05:44:27.449955 sshd[5249]: Accepted publickey for core from 139.178.68.195 port 34086 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:27.451058 sshd[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:27.454319 systemd-logind[1458]: New session 26 of user core.
Feb 13 05:44:27.455009 systemd[1]: Started session-26.scope.
Feb 13 05:44:27.541926 sshd[5249]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:27.543367 systemd[1]: sshd@128-147.75.49.59:22-139.178.68.195:34086.service: Deactivated successfully.
Feb 13 05:44:27.543840 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 05:44:27.544255 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit.
Feb 13 05:44:27.544810 systemd-logind[1458]: Removed session 26.
Feb 13 05:44:32.540202 systemd[1]: Started sshd@129-147.75.49.59:22-180.101.88.197:60478.service.
Feb 13 05:44:32.546083 systemd[1]: Started sshd@130-147.75.49.59:22-139.178.68.195:34090.service.
Feb 13 05:44:32.577070 sshd[5276]: Accepted publickey for core from 139.178.68.195 port 34090 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:32.577968 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:32.581215 systemd-logind[1458]: New session 27 of user core.
Feb 13 05:44:32.581948 systemd[1]: Started session-27.scope.
Feb 13 05:44:32.671741 sshd[5276]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:32.673211 systemd[1]: sshd@130-147.75.49.59:22-139.178.68.195:34090.service: Deactivated successfully.
Feb 13 05:44:32.673684 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 05:44:32.674087 systemd-logind[1458]: Session 27 logged out. Waiting for processes to exit.
Feb 13 05:44:32.674521 systemd-logind[1458]: Removed session 27.
Feb 13 05:44:33.509749 sshd[5274]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:44:35.522574 sshd[5274]: Failed password for root from 180.101.88.197 port 60478 ssh2
Feb 13 05:44:37.682054 systemd[1]: Started sshd@131-147.75.49.59:22-139.178.68.195:44914.service.
Feb 13 05:44:37.713407 sshd[5304]: Accepted publickey for core from 139.178.68.195 port 44914 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:37.714453 sshd[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:37.718025 systemd-logind[1458]: New session 28 of user core.
Feb 13 05:44:37.718921 systemd[1]: Started session-28.scope.
Feb 13 05:44:37.810652 sshd[5304]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:37.812172 systemd[1]: sshd@131-147.75.49.59:22-139.178.68.195:44914.service: Deactivated successfully.
Feb 13 05:44:37.812656 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 05:44:37.813115 systemd-logind[1458]: Session 28 logged out. Waiting for processes to exit.
Feb 13 05:44:37.813579 systemd-logind[1458]: Removed session 28.
Feb 13 05:44:38.651103 sshd[5274]: Failed password for root from 180.101.88.197 port 60478 ssh2
Feb 13 05:44:39.324484 systemd[1]: Started sshd@132-147.75.49.59:22-119.91.214.145:56236.service.
Feb 13 05:44:40.249833 sshd[5330]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:44:40.643615 sshd[5274]: Failed password for root from 180.101.88.197 port 60478 ssh2
Feb 13 05:44:41.664133 sshd[5274]: Received disconnect from 180.101.88.197 port 60478:11: [preauth]
Feb 13 05:44:41.664133 sshd[5274]: Disconnected from authenticating user root 180.101.88.197 port 60478 [preauth]
Feb 13 05:44:41.664698 sshd[5274]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:44:41.666733 systemd[1]: sshd@129-147.75.49.59:22-180.101.88.197:60478.service: Deactivated successfully.
Feb 13 05:44:41.805818 systemd[1]: Started sshd@133-147.75.49.59:22-180.101.88.197:11162.service.
Feb 13 05:44:42.222553 sshd[5330]: Failed password for root from 119.91.214.145 port 56236 ssh2
Feb 13 05:44:42.820138 systemd[1]: Started sshd@134-147.75.49.59:22-139.178.68.195:44930.service.
Feb 13 05:44:42.851658 sshd[5337]: Accepted publickey for core from 139.178.68.195 port 44930 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:42.852689 sshd[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:42.856349 systemd-logind[1458]: New session 29 of user core.
Feb 13 05:44:42.857329 systemd[1]: Started session-29.scope.
Feb 13 05:44:42.945616 sshd[5337]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:42.947114 systemd[1]: sshd@134-147.75.49.59:22-139.178.68.195:44930.service: Deactivated successfully.
Feb 13 05:44:42.947541 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 05:44:42.947932 systemd-logind[1458]: Session 29 logged out. Waiting for processes to exit.
Feb 13 05:44:42.948427 systemd-logind[1458]: Removed session 29.
Feb 13 05:44:42.979035 sshd[5330]: Received disconnect from 119.91.214.145 port 56236:11: Bye Bye [preauth]
Feb 13 05:44:42.979035 sshd[5330]: Disconnected from authenticating user root 119.91.214.145 port 56236 [preauth]
Feb 13 05:44:42.979615 systemd[1]: sshd@132-147.75.49.59:22-119.91.214.145:56236.service: Deactivated successfully.
Feb 13 05:44:43.116213 sshd[5334]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:44:45.168758 sshd[5334]: Failed password for root from 180.101.88.197 port 11162 ssh2
Feb 13 05:44:47.820652 sshd[5334]: Failed password for root from 180.101.88.197 port 11162 ssh2
Feb 13 05:44:47.955281 systemd[1]: Started sshd@135-147.75.49.59:22-139.178.68.195:48846.service.
Feb 13 05:44:47.986757 sshd[5363]: Accepted publickey for core from 139.178.68.195 port 48846 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:47.987701 sshd[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:47.991103 systemd-logind[1458]: New session 30 of user core.
Feb 13 05:44:47.991882 systemd[1]: Started session-30.scope.
Feb 13 05:44:48.080124 sshd[5363]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:48.081711 systemd[1]: sshd@135-147.75.49.59:22-139.178.68.195:48846.service: Deactivated successfully.
Feb 13 05:44:48.082175 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 05:44:48.082520 systemd-logind[1458]: Session 30 logged out. Waiting for processes to exit.
Feb 13 05:44:48.083183 systemd-logind[1458]: Removed session 30.
Feb 13 05:44:50.943638 sshd[5334]: Failed password for root from 180.101.88.197 port 11162 ssh2
Feb 13 05:44:51.250740 sshd[5334]: Received disconnect from 180.101.88.197 port 11162:11: [preauth]
Feb 13 05:44:51.250740 sshd[5334]: Disconnected from authenticating user root 180.101.88.197 port 11162 [preauth]
Feb 13 05:44:51.251216 sshd[5334]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:44:51.253274 systemd[1]: sshd@133-147.75.49.59:22-180.101.88.197:11162.service: Deactivated successfully.
Feb 13 05:44:51.438513 systemd[1]: Started sshd@136-147.75.49.59:22-180.101.88.197:12425.service.
Feb 13 05:44:52.487846 sshd[5389]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:44:53.090517 systemd[1]: Started sshd@137-147.75.49.59:22-139.178.68.195:48860.service.
Feb 13 05:44:53.121785 sshd[5392]: Accepted publickey for core from 139.178.68.195 port 48860 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:53.122804 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:53.126365 systemd-logind[1458]: New session 31 of user core.
Feb 13 05:44:53.127376 systemd[1]: Started session-31.scope.
Feb 13 05:44:53.222951 sshd[5392]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:53.224928 systemd[1]: sshd@137-147.75.49.59:22-139.178.68.195:48860.service: Deactivated successfully.
Feb 13 05:44:53.225520 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 05:44:53.226050 systemd-logind[1458]: Session 31 logged out. Waiting for processes to exit.
Feb 13 05:44:53.226573 systemd-logind[1458]: Removed session 31.
Feb 13 05:44:54.440491 sshd[5389]: Failed password for root from 180.101.88.197 port 12425 ssh2
Feb 13 05:44:57.251895 sshd[5389]: Failed password for root from 180.101.88.197 port 12425 ssh2
Feb 13 05:44:58.232265 systemd[1]: Started sshd@138-147.75.49.59:22-139.178.68.195:47988.service.
Feb 13 05:44:58.264321 sshd[5417]: Accepted publickey for core from 139.178.68.195 port 47988 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:44:58.267501 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:44:58.278088 systemd-logind[1458]: New session 32 of user core.
Feb 13 05:44:58.281076 systemd[1]: Started session-32.scope.
Feb 13 05:44:58.390857 sshd[5417]: pam_unix(sshd:session): session closed for user core
Feb 13 05:44:58.392361 systemd[1]: sshd@138-147.75.49.59:22-139.178.68.195:47988.service: Deactivated successfully.
Feb 13 05:44:58.392811 systemd[1]: session-32.scope: Deactivated successfully.
Feb 13 05:44:58.393264 systemd-logind[1458]: Session 32 logged out. Waiting for processes to exit.
Feb 13 05:44:58.393821 systemd-logind[1458]: Removed session 32.
Feb 13 05:44:59.923857 sshd[5389]: Failed password for root from 180.101.88.197 port 12425 ssh2
Feb 13 05:45:00.682560 sshd[5389]: Received disconnect from 180.101.88.197 port 12425:11: [preauth]
Feb 13 05:45:00.682560 sshd[5389]: Disconnected from authenticating user root 180.101.88.197 port 12425 [preauth]
Feb 13 05:45:00.683161 sshd[5389]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 13 05:45:00.685224 systemd[1]: sshd@136-147.75.49.59:22-180.101.88.197:12425.service: Deactivated successfully.
Feb 13 05:45:01.254237 systemd[1]: Started sshd@139-147.75.49.59:22-159.223.87.202:51550.service.
Feb 13 05:45:02.266930 sshd[5443]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:45:03.396269 systemd[1]: Started sshd@140-147.75.49.59:22-139.178.68.195:47992.service.
Feb 13 05:45:03.430699 sshd[5446]: Accepted publickey for core from 139.178.68.195 port 47992 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:03.431675 sshd[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:03.435020 systemd-logind[1458]: New session 33 of user core.
Feb 13 05:45:03.435847 systemd[1]: Started session-33.scope.
Feb 13 05:45:03.528403 sshd[5446]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:03.529846 systemd[1]: sshd@140-147.75.49.59:22-139.178.68.195:47992.service: Deactivated successfully.
Feb 13 05:45:03.530291 systemd[1]: session-33.scope: Deactivated successfully.
Feb 13 05:45:03.530667 systemd-logind[1458]: Session 33 logged out. Waiting for processes to exit.
Feb 13 05:45:03.531293 systemd-logind[1458]: Removed session 33.
Feb 13 05:45:03.592262 sshd[5443]: Failed password for root from 159.223.87.202 port 51550 ssh2
Feb 13 05:45:05.013771 sshd[5443]: Received disconnect from 159.223.87.202 port 51550:11: Bye Bye [preauth]
Feb 13 05:45:05.013771 sshd[5443]: Disconnected from authenticating user root 159.223.87.202 port 51550 [preauth]
Feb 13 05:45:05.016330 systemd[1]: sshd@139-147.75.49.59:22-159.223.87.202:51550.service: Deactivated successfully.
Feb 13 05:45:08.538416 systemd[1]: Started sshd@141-147.75.49.59:22-139.178.68.195:40580.service.
Feb 13 05:45:08.569858 sshd[5471]: Accepted publickey for core from 139.178.68.195 port 40580 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:08.570805 sshd[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:08.574269 systemd-logind[1458]: New session 34 of user core.
Feb 13 05:45:08.575079 systemd[1]: Started session-34.scope.
Feb 13 05:45:08.667977 sshd[5471]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:08.669393 systemd[1]: sshd@141-147.75.49.59:22-139.178.68.195:40580.service: Deactivated successfully.
Feb 13 05:45:08.669831 systemd[1]: session-34.scope: Deactivated successfully.
Feb 13 05:45:08.670251 systemd-logind[1458]: Session 34 logged out. Waiting for processes to exit.
Feb 13 05:45:08.670807 systemd-logind[1458]: Removed session 34.
Feb 13 05:45:12.208302 systemd[1]: Started sshd@142-147.75.49.59:22-75.51.10.234:60054.service.
Feb 13 05:45:12.547667 sshd[5496]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:45:13.679017 systemd[1]: Started sshd@143-147.75.49.59:22-139.178.68.195:40592.service.
Feb 13 05:45:13.710574 sshd[5499]: Accepted publickey for core from 139.178.68.195 port 40592 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:13.711550 sshd[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:13.715024 systemd-logind[1458]: New session 35 of user core.
Feb 13 05:45:13.715915 systemd[1]: Started session-35.scope.
Feb 13 05:45:13.807536 sshd[5499]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:13.809057 systemd[1]: sshd@143-147.75.49.59:22-139.178.68.195:40592.service: Deactivated successfully.
Feb 13 05:45:13.809505 systemd[1]: session-35.scope: Deactivated successfully.
Feb 13 05:45:13.809926 systemd-logind[1458]: Session 35 logged out. Waiting for processes to exit.
Feb 13 05:45:13.810436 systemd-logind[1458]: Removed session 35.
Feb 13 05:45:14.580722 sshd[5496]: Failed password for root from 75.51.10.234 port 60054 ssh2
Feb 13 05:45:15.159303 sshd[5496]: Received disconnect from 75.51.10.234 port 60054:11: Bye Bye [preauth]
Feb 13 05:45:15.159303 sshd[5496]: Disconnected from authenticating user root 75.51.10.234 port 60054 [preauth]
Feb 13 05:45:15.161900 systemd[1]: sshd@142-147.75.49.59:22-75.51.10.234:60054.service: Deactivated successfully.
Feb 13 05:45:18.817126 systemd[1]: Started sshd@144-147.75.49.59:22-139.178.68.195:58490.service.
Feb 13 05:45:18.847947 sshd[5528]: Accepted publickey for core from 139.178.68.195 port 58490 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:18.848914 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:18.852195 systemd-logind[1458]: New session 36 of user core.
Feb 13 05:45:18.853088 systemd[1]: Started session-36.scope.
Feb 13 05:45:18.943993 sshd[5528]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:18.945356 systemd[1]: sshd@144-147.75.49.59:22-139.178.68.195:58490.service: Deactivated successfully.
Feb 13 05:45:18.945784 systemd[1]: session-36.scope: Deactivated successfully.
Feb 13 05:45:18.946189 systemd-logind[1458]: Session 36 logged out. Waiting for processes to exit.
Feb 13 05:45:18.946736 systemd-logind[1458]: Removed session 36.
Feb 13 05:45:23.953416 systemd[1]: Started sshd@145-147.75.49.59:22-139.178.68.195:58502.service.
Feb 13 05:45:23.984795 sshd[5552]: Accepted publickey for core from 139.178.68.195 port 58502 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:23.985821 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:23.989200 systemd-logind[1458]: New session 37 of user core.
Feb 13 05:45:23.989954 systemd[1]: Started session-37.scope.
Feb 13 05:45:24.080913 sshd[5552]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:24.082396 systemd[1]: sshd@145-147.75.49.59:22-139.178.68.195:58502.service: Deactivated successfully.
Feb 13 05:45:24.082849 systemd[1]: session-37.scope: Deactivated successfully.
Feb 13 05:45:24.083302 systemd-logind[1458]: Session 37 logged out. Waiting for processes to exit.
Feb 13 05:45:24.083956 systemd-logind[1458]: Removed session 37.
Feb 13 05:45:29.090722 systemd[1]: Started sshd@146-147.75.49.59:22-139.178.68.195:37602.service.
Feb 13 05:45:29.122643 sshd[5577]: Accepted publickey for core from 139.178.68.195 port 37602 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:29.125840 sshd[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:29.136316 systemd-logind[1458]: New session 38 of user core.
Feb 13 05:45:29.138848 systemd[1]: Started session-38.scope.
Feb 13 05:45:29.239502 sshd[5577]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:29.241012 systemd[1]: sshd@146-147.75.49.59:22-139.178.68.195:37602.service: Deactivated successfully.
Feb 13 05:45:29.241441 systemd[1]: session-38.scope: Deactivated successfully.
Feb 13 05:45:29.241837 systemd-logind[1458]: Session 38 logged out. Waiting for processes to exit.
Feb 13 05:45:29.242296 systemd-logind[1458]: Removed session 38.
Feb 13 05:45:33.957956 systemd[1]: Started sshd@147-147.75.49.59:22-218.92.0.76:38590.service.
Feb 13 05:45:34.251062 systemd[1]: Started sshd@148-147.75.49.59:22-139.178.68.195:37606.service.
Feb 13 05:45:34.286543 sshd[5607]: Accepted publickey for core from 139.178.68.195 port 37606 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:34.289744 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:34.300136 systemd-logind[1458]: New session 39 of user core.
Feb 13 05:45:34.302651 systemd[1]: Started session-39.scope.
Feb 13 05:45:34.410618 sshd[5607]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:34.412238 systemd[1]: sshd@148-147.75.49.59:22-139.178.68.195:37606.service: Deactivated successfully.
Feb 13 05:45:34.412700 systemd[1]: session-39.scope: Deactivated successfully.
Feb 13 05:45:34.413114 systemd-logind[1458]: Session 39 logged out. Waiting for processes to exit.
Feb 13 05:45:34.413559 systemd-logind[1458]: Removed session 39.
Feb 13 05:45:35.020891 sshd[5604]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:45:37.209479 sshd[5604]: Failed password for root from 218.92.0.76 port 38590 ssh2
Feb 13 05:45:38.733331 systemd[1]: Started sshd@149-147.75.49.59:22-119.91.214.145:43920.service.
Feb 13 05:45:39.421099 systemd[1]: Started sshd@150-147.75.49.59:22-139.178.68.195:51834.service.
Feb 13 05:45:39.451946 sshd[5636]: Accepted publickey for core from 139.178.68.195 port 51834 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:39.452863 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:39.456180 systemd-logind[1458]: New session 40 of user core.
Feb 13 05:45:39.456962 systemd[1]: Started session-40.scope.
Feb 13 05:45:39.545490 sshd[5604]: Failed password for root from 218.92.0.76 port 38590 ssh2
Feb 13 05:45:39.548199 sshd[5636]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:39.549624 systemd[1]: sshd@150-147.75.49.59:22-139.178.68.195:51834.service: Deactivated successfully.
Feb 13 05:45:39.550046 systemd[1]: session-40.scope: Deactivated successfully.
Feb 13 05:45:39.550435 systemd-logind[1458]: Session 40 logged out. Waiting for processes to exit.
Feb 13 05:45:39.551045 systemd-logind[1458]: Removed session 40.
Feb 13 05:45:39.653557 sshd[5633]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:45:41.726450 sshd[5633]: Failed password for root from 119.91.214.145 port 43920 ssh2
Feb 13 05:45:42.354580 sshd[5604]: Failed password for root from 218.92.0.76 port 38590 ssh2
Feb 13 05:45:42.382607 sshd[5633]: Received disconnect from 119.91.214.145 port 43920:11: Bye Bye [preauth]
Feb 13 05:45:42.382607 sshd[5633]: Disconnected from authenticating user root 119.91.214.145 port 43920 [preauth]
Feb 13 05:45:42.385114 systemd[1]: sshd@149-147.75.49.59:22-119.91.214.145:43920.service: Deactivated successfully.
Feb 13 05:45:43.212574 sshd[5604]: Received disconnect from 218.92.0.76 port 38590:11: [preauth]
Feb 13 05:45:43.212574 sshd[5604]: Disconnected from authenticating user root 218.92.0.76 port 38590 [preauth]
Feb 13 05:45:43.213012 sshd[5604]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:45:43.214413 systemd[1]: sshd@147-147.75.49.59:22-218.92.0.76:38590.service: Deactivated successfully.
Feb 13 05:45:43.398671 systemd[1]: Started sshd@151-147.75.49.59:22-218.92.0.76:43141.service.
Feb 13 05:45:44.529747 sshd[5662]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:45:44.557883 systemd[1]: Started sshd@152-147.75.49.59:22-139.178.68.195:51844.service.
Feb 13 05:45:44.589054 sshd[5665]: Accepted publickey for core from 139.178.68.195 port 51844 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:44.589995 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:44.593230 systemd-logind[1458]: New session 41 of user core.
Feb 13 05:45:44.594067 systemd[1]: Started session-41.scope.
Feb 13 05:45:44.686755 sshd[5665]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:44.688198 systemd[1]: sshd@152-147.75.49.59:22-139.178.68.195:51844.service: Deactivated successfully.
Feb 13 05:45:44.688632 systemd[1]: session-41.scope: Deactivated successfully.
Feb 13 05:45:44.689001 systemd-logind[1458]: Session 41 logged out. Waiting for processes to exit.
Feb 13 05:45:44.689483 systemd-logind[1458]: Removed session 41.
Feb 13 05:45:46.622836 sshd[5662]: Failed password for root from 218.92.0.76 port 43141 ssh2
Feb 13 05:45:49.112663 sshd[5662]: Failed password for root from 218.92.0.76 port 43141 ssh2
Feb 13 05:45:49.696492 systemd[1]: Started sshd@153-147.75.49.59:22-139.178.68.195:38928.service.
Feb 13 05:45:49.727903 sshd[5693]: Accepted publickey for core from 139.178.68.195 port 38928 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:49.728874 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:49.732329 systemd-logind[1458]: New session 42 of user core.
Feb 13 05:45:49.733138 systemd[1]: Started session-42.scope.
Feb 13 05:45:49.824136 sshd[5693]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:49.825647 systemd[1]: sshd@153-147.75.49.59:22-139.178.68.195:38928.service: Deactivated successfully.
Feb 13 05:45:49.826069 systemd[1]: session-42.scope: Deactivated successfully.
Feb 13 05:45:49.826423 systemd-logind[1458]: Session 42 logged out. Waiting for processes to exit.
Feb 13 05:45:49.826944 systemd-logind[1458]: Removed session 42.
Feb 13 05:45:51.931882 sshd[5662]: Failed password for root from 218.92.0.76 port 43141 ssh2
Feb 13 05:45:52.753685 sshd[5662]: Received disconnect from 218.92.0.76 port 43141:11: [preauth]
Feb 13 05:45:52.753685 sshd[5662]: Disconnected from authenticating user root 218.92.0.76 port 43141 [preauth]
Feb 13 05:45:52.754295 sshd[5662]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:45:52.756344 systemd[1]: sshd@151-147.75.49.59:22-218.92.0.76:43141.service: Deactivated successfully.
Feb 13 05:45:52.926826 systemd[1]: Started sshd@154-147.75.49.59:22-218.92.0.76:42278.service.
Feb 13 05:45:54.038291 sshd[5719]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:45:54.834084 systemd[1]: Started sshd@155-147.75.49.59:22-139.178.68.195:38942.service.
Feb 13 05:45:54.866211 sshd[5722]: Accepted publickey for core from 139.178.68.195 port 38942 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:45:54.869465 sshd[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:45:54.880316 systemd-logind[1458]: New session 43 of user core.
Feb 13 05:45:54.883753 systemd[1]: Started session-43.scope.
Feb 13 05:45:54.986796 sshd[5722]: pam_unix(sshd:session): session closed for user core
Feb 13 05:45:54.988379 systemd[1]: sshd@155-147.75.49.59:22-139.178.68.195:38942.service: Deactivated successfully.
Feb 13 05:45:54.988865 systemd[1]: session-43.scope: Deactivated successfully.
Feb 13 05:45:54.989350 systemd-logind[1458]: Session 43 logged out. Waiting for processes to exit.
Feb 13 05:45:54.989970 systemd-logind[1458]: Removed session 43.
Feb 13 05:45:56.502766 sshd[5719]: Failed password for root from 218.92.0.76 port 42278 ssh2
Feb 13 05:45:58.842700 sshd[5719]: Failed password for root from 218.92.0.76 port 42278 ssh2
Feb 13 05:45:59.996296 systemd[1]: Started sshd@156-147.75.49.59:22-139.178.68.195:48010.service.
Feb 13 05:46:00.027459 sshd[5747]: Accepted publickey for core from 139.178.68.195 port 48010 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:00.028376 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:00.031624 systemd-logind[1458]: New session 44 of user core.
Feb 13 05:46:00.032446 systemd[1]: Started session-44.scope.
Feb 13 05:46:00.125230 sshd[5747]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:00.126683 systemd[1]: sshd@156-147.75.49.59:22-139.178.68.195:48010.service: Deactivated successfully.
Feb 13 05:46:00.127104 systemd[1]: session-44.scope: Deactivated successfully.
Feb 13 05:46:00.127462 systemd-logind[1458]: Session 44 logged out. Waiting for processes to exit.
Feb 13 05:46:00.128086 systemd-logind[1458]: Removed session 44.
Feb 13 05:46:01.330839 sshd[5719]: Failed password for root from 218.92.0.76 port 42278 ssh2
Feb 13 05:46:02.255217 sshd[5719]: Received disconnect from 218.92.0.76 port 42278:11: [preauth]
Feb 13 05:46:02.255217 sshd[5719]: Disconnected from authenticating user root 218.92.0.76 port 42278 [preauth]
Feb 13 05:46:02.255771 sshd[5719]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 13 05:46:02.257835 systemd[1]: sshd@154-147.75.49.59:22-218.92.0.76:42278.service: Deactivated successfully.
Feb 13 05:46:05.135866 systemd[1]: Started sshd@157-147.75.49.59:22-139.178.68.195:48016.service.
Feb 13 05:46:05.167440 sshd[5773]: Accepted publickey for core from 139.178.68.195 port 48016 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:05.170685 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:05.181536 systemd-logind[1458]: New session 45 of user core.
Feb 13 05:46:05.184603 systemd[1]: Started session-45.scope.
Feb 13 05:46:05.290020 sshd[5773]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:05.291483 systemd[1]: sshd@157-147.75.49.59:22-139.178.68.195:48016.service: Deactivated successfully.
Feb 13 05:46:05.291919 systemd[1]: session-45.scope: Deactivated successfully.
Feb 13 05:46:05.292341 systemd-logind[1458]: Session 45 logged out. Waiting for processes to exit.
Feb 13 05:46:05.292941 systemd-logind[1458]: Removed session 45.
Feb 13 05:46:05.425885 systemd[1]: Started sshd@158-147.75.49.59:22-159.223.87.202:42536.service.
Feb 13 05:46:06.812975 sshd[5798]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:46:08.719126 systemd[1]: Started sshd@159-147.75.49.59:22-75.51.10.234:50530.service.
Feb 13 05:46:08.926184 sshd[5798]: Failed password for root from 159.223.87.202 port 42536 ssh2
Feb 13 05:46:09.044107 sshd[5801]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=75.51.10.234 user=root
Feb 13 05:46:09.635291 sshd[5798]: Received disconnect from 159.223.87.202 port 42536:11: Bye Bye [preauth]
Feb 13 05:46:09.635291 sshd[5798]: Disconnected from authenticating user root 159.223.87.202 port 42536 [preauth]
Feb 13 05:46:09.637808 systemd[1]: sshd@158-147.75.49.59:22-159.223.87.202:42536.service: Deactivated successfully.
Feb 13 05:46:10.299717 systemd[1]: Started sshd@160-147.75.49.59:22-139.178.68.195:56252.service.
Feb 13 05:46:10.330636 sshd[5805]: Accepted publickey for core from 139.178.68.195 port 56252 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:10.331613 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:10.334847 systemd-logind[1458]: New session 46 of user core.
Feb 13 05:46:10.335573 systemd[1]: Started session-46.scope.
Feb 13 05:46:10.427755 sshd[5805]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:10.429250 systemd[1]: sshd@160-147.75.49.59:22-139.178.68.195:56252.service: Deactivated successfully.
Feb 13 05:46:10.429687 systemd[1]: session-46.scope: Deactivated successfully.
Feb 13 05:46:10.430118 systemd-logind[1458]: Session 46 logged out. Waiting for processes to exit.
Feb 13 05:46:10.430544 systemd-logind[1458]: Removed session 46.
Feb 13 05:46:10.901623 sshd[5801]: Failed password for root from 75.51.10.234 port 50530 ssh2
Feb 13 05:46:11.653708 sshd[5801]: Received disconnect from 75.51.10.234 port 50530:11: Bye Bye [preauth]
Feb 13 05:46:11.653708 sshd[5801]: Disconnected from authenticating user root 75.51.10.234 port 50530 [preauth]
Feb 13 05:46:11.656266 systemd[1]: sshd@159-147.75.49.59:22-75.51.10.234:50530.service: Deactivated successfully.
Feb 13 05:46:15.437470 systemd[1]: Started sshd@161-147.75.49.59:22-139.178.68.195:56254.service.
Feb 13 05:46:15.468713 sshd[5831]: Accepted publickey for core from 139.178.68.195 port 56254 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:15.469620 sshd[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:15.472903 systemd-logind[1458]: New session 47 of user core.
Feb 13 05:46:15.473655 systemd[1]: Started session-47.scope.
Feb 13 05:46:15.566148 sshd[5831]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:15.567686 systemd[1]: sshd@161-147.75.49.59:22-139.178.68.195:56254.service: Deactivated successfully.
Feb 13 05:46:15.568122 systemd[1]: session-47.scope: Deactivated successfully.
Feb 13 05:46:15.568468 systemd-logind[1458]: Session 47 logged out. Waiting for processes to exit.
Feb 13 05:46:15.569068 systemd-logind[1458]: Removed session 47.
Feb 13 05:46:20.575487 systemd[1]: Started sshd@162-147.75.49.59:22-139.178.68.195:44336.service.
Feb 13 05:46:20.606485 sshd[5857]: Accepted publickey for core from 139.178.68.195 port 44336 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:20.607463 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:20.610547 systemd-logind[1458]: New session 48 of user core.
Feb 13 05:46:20.611309 systemd[1]: Started session-48.scope.
Feb 13 05:46:20.699960 sshd[5857]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:20.701857 systemd[1]: sshd@162-147.75.49.59:22-139.178.68.195:44336.service: Deactivated successfully.
Feb 13 05:46:20.702216 systemd[1]: session-48.scope: Deactivated successfully.
Feb 13 05:46:20.702671 systemd-logind[1458]: Session 48 logged out. Waiting for processes to exit.
Feb 13 05:46:20.703269 systemd[1]: Started sshd@163-147.75.49.59:22-139.178.68.195:44352.service.
Feb 13 05:46:20.703691 systemd-logind[1458]: Removed session 48.
Feb 13 05:46:20.734695 sshd[5882]: Accepted publickey for core from 139.178.68.195 port 44352 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:20.735718 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:20.739534 systemd-logind[1458]: New session 49 of user core.
Feb 13 05:46:20.740550 systemd[1]: Started session-49.scope.
Feb 13 05:46:21.880623 sshd[5882]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:21.887758 systemd[1]: sshd@163-147.75.49.59:22-139.178.68.195:44352.service: Deactivated successfully.
Feb 13 05:46:21.888784 systemd[1]: session-49.scope: Deactivated successfully.
Feb 13 05:46:21.889222 systemd-logind[1458]: Session 49 logged out. Waiting for processes to exit.
Feb 13 05:46:21.889871 systemd[1]: Started sshd@164-147.75.49.59:22-139.178.68.195:44354.service.
Feb 13 05:46:21.890364 systemd-logind[1458]: Removed session 49.
Feb 13 05:46:21.920531 sshd[5905]: Accepted publickey for core from 139.178.68.195 port 44354 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:21.921417 sshd[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:21.924322 systemd-logind[1458]: New session 50 of user core.
Feb 13 05:46:21.925410 systemd[1]: Started session-50.scope.
Feb 13 05:46:22.493831 systemd[1]: Started sshd@165-147.75.49.59:22-104.248.146.70:60360.service.
Feb 13 05:46:22.763949 sshd[5905]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:22.766332 systemd[1]: sshd@164-147.75.49.59:22-139.178.68.195:44354.service: Deactivated successfully.
Feb 13 05:46:22.766863 systemd[1]: session-50.scope: Deactivated successfully.
Feb 13 05:46:22.767352 systemd-logind[1458]: Session 50 logged out. Waiting for processes to exit.
Feb 13 05:46:22.768548 systemd[1]: Started sshd@166-147.75.49.59:22-139.178.68.195:44362.service.
Feb 13 05:46:22.769140 systemd-logind[1458]: Removed session 50.
Feb 13 05:46:22.802696 sshd[5936]: Accepted publickey for core from 139.178.68.195 port 44362 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:22.806034 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:22.816667 systemd-logind[1458]: New session 51 of user core.
Feb 13 05:46:22.819647 systemd[1]: Started session-51.scope.
Feb 13 05:46:23.066067 sshd[5936]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:23.067704 systemd[1]: sshd@166-147.75.49.59:22-139.178.68.195:44362.service: Deactivated successfully.
Feb 13 05:46:23.068052 systemd[1]: session-51.scope: Deactivated successfully.
Feb 13 05:46:23.068353 systemd-logind[1458]: Session 51 logged out. Waiting for processes to exit.
Feb 13 05:46:23.068960 systemd[1]: Started sshd@167-147.75.49.59:22-139.178.68.195:44372.service.
Feb 13 05:46:23.069351 systemd-logind[1458]: Removed session 51.
Feb 13 05:46:23.100333 sshd[5964]: Accepted publickey for core from 139.178.68.195 port 44372 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:23.104118 sshd[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:23.115778 systemd-logind[1458]: New session 52 of user core.
Feb 13 05:46:23.118432 systemd[1]: Started session-52.scope.
Feb 13 05:46:23.259053 sshd[5964]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:23.260468 systemd[1]: sshd@167-147.75.49.59:22-139.178.68.195:44372.service: Deactivated successfully.
Feb 13 05:46:23.260917 systemd[1]: session-52.scope: Deactivated successfully.
Feb 13 05:46:23.261337 systemd-logind[1458]: Session 52 logged out. Waiting for processes to exit.
Feb 13 05:46:23.261884 systemd-logind[1458]: Removed session 52.
Feb 13 05:46:28.269403 systemd[1]: Started sshd@168-147.75.49.59:22-139.178.68.195:38692.service.
Feb 13 05:46:28.300633 sshd[5989]: Accepted publickey for core from 139.178.68.195 port 38692 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:28.301579 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:28.304602 systemd-logind[1458]: New session 53 of user core.
Feb 13 05:46:28.305346 systemd[1]: Started session-53.scope.
Feb 13 05:46:28.349623 sshd[5925]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root
Feb 13 05:46:28.390760 sshd[5989]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:28.392396 systemd[1]: sshd@168-147.75.49.59:22-139.178.68.195:38692.service: Deactivated successfully.
Feb 13 05:46:28.392890 systemd[1]: session-53.scope: Deactivated successfully.
Feb 13 05:46:28.393330 systemd-logind[1458]: Session 53 logged out. Waiting for processes to exit.
Feb 13 05:46:28.393973 systemd-logind[1458]: Removed session 53.
Feb 13 05:46:30.482848 sshd[5925]: Failed password for root from 104.248.146.70 port 60360 ssh2
Feb 13 05:46:31.133932 sshd[5925]: Received disconnect from 104.248.146.70 port 60360:11: Bye Bye [preauth]
Feb 13 05:46:31.133932 sshd[5925]: Disconnected from authenticating user root 104.248.146.70 port 60360 [preauth]
Feb 13 05:46:31.136550 systemd[1]: sshd@165-147.75.49.59:22-104.248.146.70:60360.service: Deactivated successfully.
Feb 13 05:46:33.400421 systemd[1]: Started sshd@169-147.75.49.59:22-139.178.68.195:38698.service.
Feb 13 05:46:33.431389 sshd[6017]: Accepted publickey for core from 139.178.68.195 port 38698 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:33.432372 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:33.435558 systemd-logind[1458]: New session 54 of user core.
Feb 13 05:46:33.436459 systemd[1]: Started session-54.scope.
Feb 13 05:46:33.528771 sshd[6017]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:33.530296 systemd[1]: sshd@169-147.75.49.59:22-139.178.68.195:38698.service: Deactivated successfully.
Feb 13 05:46:33.530745 systemd[1]: session-54.scope: Deactivated successfully.
Feb 13 05:46:33.531162 systemd-logind[1458]: Session 54 logged out. Waiting for processes to exit.
Feb 13 05:46:33.531572 systemd-logind[1458]: Removed session 54.
Feb 13 05:46:37.600058 systemd[1]: Started sshd@170-147.75.49.59:22-119.91.214.145:42042.service.
Feb 13 05:46:38.539725 systemd[1]: Started sshd@171-147.75.49.59:22-139.178.68.195:46348.service.
Feb 13 05:46:38.560359 sshd[6040]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:46:38.570564 sshd[6043]: Accepted publickey for core from 139.178.68.195 port 46348 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:38.571480 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:38.574800 systemd-logind[1458]: New session 55 of user core.
Feb 13 05:46:38.575670 systemd[1]: Started session-55.scope.
Feb 13 05:46:38.664169 sshd[6043]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:38.665715 systemd[1]: sshd@171-147.75.49.59:22-139.178.68.195:46348.service: Deactivated successfully.
Feb 13 05:46:38.666178 systemd[1]: session-55.scope: Deactivated successfully.
Feb 13 05:46:38.666543 systemd-logind[1458]: Session 55 logged out. Waiting for processes to exit.
Feb 13 05:46:38.667233 systemd-logind[1458]: Removed session 55.
Feb 13 05:46:40.733348 sshd[6040]: Failed password for root from 119.91.214.145 port 42042 ssh2
Feb 13 05:46:41.304213 sshd[6040]: Received disconnect from 119.91.214.145 port 42042:11: Bye Bye [preauth]
Feb 13 05:46:41.304213 sshd[6040]: Disconnected from authenticating user root 119.91.214.145 port 42042 [preauth]
Feb 13 05:46:41.306812 systemd[1]: sshd@170-147.75.49.59:22-119.91.214.145:42042.service: Deactivated successfully.
Feb 13 05:46:43.673853 systemd[1]: Started sshd@172-147.75.49.59:22-139.178.68.195:46362.service.
Feb 13 05:46:43.705124 sshd[6069]: Accepted publickey for core from 139.178.68.195 port 46362 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:43.708389 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:43.718732 systemd-logind[1458]: New session 56 of user core.
Feb 13 05:46:43.721333 systemd[1]: Started session-56.scope.
Feb 13 05:46:43.825017 sshd[6069]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:43.826390 systemd[1]: sshd@172-147.75.49.59:22-139.178.68.195:46362.service: Deactivated successfully.
Feb 13 05:46:43.826835 systemd[1]: session-56.scope: Deactivated successfully.
Feb 13 05:46:43.827235 systemd-logind[1458]: Session 56 logged out. Waiting for processes to exit.
Feb 13 05:46:43.827763 systemd-logind[1458]: Removed session 56.
Feb 13 05:46:48.834932 systemd[1]: Started sshd@173-147.75.49.59:22-139.178.68.195:36076.service.
Feb 13 05:46:48.865887 sshd[6097]: Accepted publickey for core from 139.178.68.195 port 36076 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:48.866808 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:48.870135 systemd-logind[1458]: New session 57 of user core.
Feb 13 05:46:48.870871 systemd[1]: Started session-57.scope.
Feb 13 05:46:48.959762 sshd[6097]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:48.961337 systemd[1]: sshd@173-147.75.49.59:22-139.178.68.195:36076.service: Deactivated successfully.
Feb 13 05:46:48.961796 systemd[1]: session-57.scope: Deactivated successfully.
Feb 13 05:46:48.962240 systemd-logind[1458]: Session 57 logged out. Waiting for processes to exit.
Feb 13 05:46:48.962821 systemd-logind[1458]: Removed session 57.
Feb 13 05:46:53.969443 systemd[1]: Started sshd@174-147.75.49.59:22-139.178.68.195:36092.service.
Feb 13 05:46:54.000306 sshd[6122]: Accepted publickey for core from 139.178.68.195 port 36092 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:54.001344 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:54.004434 systemd-logind[1458]: New session 58 of user core.
Feb 13 05:46:54.005099 systemd[1]: Started session-58.scope.
Feb 13 05:46:54.092777 sshd[6122]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:54.094354 systemd[1]: sshd@174-147.75.49.59:22-139.178.68.195:36092.service: Deactivated successfully.
Feb 13 05:46:54.094839 systemd[1]: session-58.scope: Deactivated successfully.
Feb 13 05:46:54.095316 systemd-logind[1458]: Session 58 logged out. Waiting for processes to exit.
Feb 13 05:46:54.095922 systemd-logind[1458]: Removed session 58.
Feb 13 05:46:59.102734 systemd[1]: Started sshd@175-147.75.49.59:22-139.178.68.195:50994.service.
Feb 13 05:46:59.133872 sshd[6146]: Accepted publickey for core from 139.178.68.195 port 50994 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:46:59.135039 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:46:59.138526 systemd-logind[1458]: New session 59 of user core.
Feb 13 05:46:59.139393 systemd[1]: Started session-59.scope.
Feb 13 05:46:59.229137 sshd[6146]: pam_unix(sshd:session): session closed for user core
Feb 13 05:46:59.230581 systemd[1]: sshd@175-147.75.49.59:22-139.178.68.195:50994.service: Deactivated successfully.
Feb 13 05:46:59.231017 systemd[1]: session-59.scope: Deactivated successfully.
Feb 13 05:46:59.231380 systemd-logind[1458]: Session 59 logged out. Waiting for processes to exit.
Feb 13 05:46:59.231887 systemd-logind[1458]: Removed session 59.
Feb 13 05:47:04.238377 systemd[1]: Started sshd@176-147.75.49.59:22-139.178.68.195:51008.service.
Feb 13 05:47:04.268922 sshd[6170]: Accepted publickey for core from 139.178.68.195 port 51008 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:04.269762 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:04.272801 systemd-logind[1458]: New session 60 of user core.
Feb 13 05:47:04.273458 systemd[1]: Started session-60.scope.
Feb 13 05:47:04.361541 sshd[6170]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:04.362938 systemd[1]: sshd@176-147.75.49.59:22-139.178.68.195:51008.service: Deactivated successfully.
Feb 13 05:47:04.363394 systemd[1]: session-60.scope: Deactivated successfully.
Feb 13 05:47:04.363828 systemd-logind[1458]: Session 60 logged out. Waiting for processes to exit.
Feb 13 05:47:04.364346 systemd-logind[1458]: Removed session 60.
Feb 13 05:47:09.371162 systemd[1]: Started sshd@177-147.75.49.59:22-139.178.68.195:33876.service.
Feb 13 05:47:09.402371 sshd[6195]: Accepted publickey for core from 139.178.68.195 port 33876 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:09.403250 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:09.406271 systemd-logind[1458]: New session 61 of user core.
Feb 13 05:47:09.406934 systemd[1]: Started session-61.scope.
Feb 13 05:47:09.495275 sshd[6195]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:09.496854 systemd[1]: sshd@177-147.75.49.59:22-139.178.68.195:33876.service: Deactivated successfully.
Feb 13 05:47:09.497354 systemd[1]: session-61.scope: Deactivated successfully.
Feb 13 05:47:09.497820 systemd-logind[1458]: Session 61 logged out. Waiting for processes to exit.
Feb 13 05:47:09.498463 systemd-logind[1458]: Removed session 61.
Feb 13 05:47:13.091990 systemd[1]: Started sshd@178-147.75.49.59:22-159.223.87.202:33528.service.
Feb 13 05:47:14.100613 sshd[6219]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:47:14.505046 systemd[1]: Started sshd@179-147.75.49.59:22-139.178.68.195:33882.service.
Feb 13 05:47:14.535545 sshd[6222]: Accepted publickey for core from 139.178.68.195 port 33882 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:14.536482 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:14.539670 systemd-logind[1458]: New session 62 of user core.
Feb 13 05:47:14.540558 systemd[1]: Started session-62.scope.
Feb 13 05:47:14.629011 sshd[6222]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:14.630635 systemd[1]: sshd@179-147.75.49.59:22-139.178.68.195:33882.service: Deactivated successfully.
Feb 13 05:47:14.631126 systemd[1]: session-62.scope: Deactivated successfully.
Feb 13 05:47:14.631531 systemd-logind[1458]: Session 62 logged out. Waiting for processes to exit.
Feb 13 05:47:14.632233 systemd-logind[1458]: Removed session 62.
Feb 13 05:47:16.213848 sshd[6219]: Failed password for root from 159.223.87.202 port 33528 ssh2
Feb 13 05:47:16.847895 sshd[6219]: Received disconnect from 159.223.87.202 port 33528:11: Bye Bye [preauth]
Feb 13 05:47:16.847895 sshd[6219]: Disconnected from authenticating user root 159.223.87.202 port 33528 [preauth]
Feb 13 05:47:16.850444 systemd[1]: sshd@178-147.75.49.59:22-159.223.87.202:33528.service: Deactivated successfully.
Feb 13 05:47:19.637669 systemd[1]: Started sshd@180-147.75.49.59:22-139.178.68.195:35696.service.
Feb 13 05:47:19.668571 sshd[6250]: Accepted publickey for core from 139.178.68.195 port 35696 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:19.669479 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:19.672388 systemd-logind[1458]: New session 63 of user core.
Feb 13 05:47:19.673119 systemd[1]: Started session-63.scope.
Feb 13 05:47:19.761643 sshd[6250]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:19.763143 systemd[1]: sshd@180-147.75.49.59:22-139.178.68.195:35696.service: Deactivated successfully.
Feb 13 05:47:19.763558 systemd[1]: session-63.scope: Deactivated successfully.
Feb 13 05:47:19.763974 systemd-logind[1458]: Session 63 logged out. Waiting for processes to exit.
Feb 13 05:47:19.764471 systemd-logind[1458]: Removed session 63.
Feb 13 05:47:23.544530 systemd[1]: Started sshd@181-147.75.49.59:22-218.92.0.27:52082.service.
Feb 13 05:47:24.504187 sshd[6275]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 05:47:24.771522 systemd[1]: Started sshd@182-147.75.49.59:22-139.178.68.195:35698.service.
Feb 13 05:47:24.802760 sshd[6278]: Accepted publickey for core from 139.178.68.195 port 35698 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:24.806030 sshd[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:24.816322 systemd-logind[1458]: New session 64 of user core.
Feb 13 05:47:24.819199 systemd[1]: Started session-64.scope.
Feb 13 05:47:24.920950 sshd[6278]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:24.922410 systemd[1]: sshd@182-147.75.49.59:22-139.178.68.195:35698.service: Deactivated successfully.
Feb 13 05:47:24.922876 systemd[1]: session-64.scope: Deactivated successfully.
Feb 13 05:47:24.923324 systemd-logind[1458]: Session 64 logged out. Waiting for processes to exit.
Feb 13 05:47:24.923953 systemd-logind[1458]: Removed session 64.
Feb 13 05:47:26.657441 sshd[6275]: Failed password for root from 218.92.0.27 port 52082 ssh2
Feb 13 05:47:29.118803 sshd[6275]: Failed password for root from 218.92.0.27 port 52082 ssh2
Feb 13 05:47:29.930576 systemd[1]: Started sshd@183-147.75.49.59:22-139.178.68.195:56042.service.
Feb 13 05:47:29.962240 sshd[6302]: Accepted publickey for core from 139.178.68.195 port 56042 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:29.965463 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:29.975875 systemd-logind[1458]: New session 65 of user core.
Feb 13 05:47:29.978521 systemd[1]: Started session-65.scope.
Feb 13 05:47:30.096758 sshd[6302]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:30.098396 systemd[1]: sshd@183-147.75.49.59:22-139.178.68.195:56042.service: Deactivated successfully.
Feb 13 05:47:30.098886 systemd[1]: session-65.scope: Deactivated successfully.
Feb 13 05:47:30.099321 systemd-logind[1458]: Session 65 logged out. Waiting for processes to exit.
Feb 13 05:47:30.099973 systemd-logind[1458]: Removed session 65.
Feb 13 05:47:32.110705 sshd[6275]: Failed password for root from 218.92.0.27 port 52082 ssh2
Feb 13 05:47:32.655061 sshd[6275]: Received disconnect from 218.92.0.27 port 52082:11: [preauth]
Feb 13 05:47:32.655061 sshd[6275]: Disconnected from authenticating user root 218.92.0.27 port 52082 [preauth]
Feb 13 05:47:32.655630 sshd[6275]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 05:47:32.657708 systemd[1]: sshd@181-147.75.49.59:22-218.92.0.27:52082.service: Deactivated successfully.
Feb 13 05:47:32.825393 systemd[1]: Started sshd@184-147.75.49.59:22-218.92.0.27:48153.service.
Feb 13 05:47:33.838220 sshd[6327]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 05:47:35.105684 systemd[1]: Started sshd@185-147.75.49.59:22-139.178.68.195:56058.service.
Feb 13 05:47:35.136474 sshd[6332]: Accepted publickey for core from 139.178.68.195 port 56058 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:35.137424 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:35.140570 systemd-logind[1458]: New session 66 of user core.
Feb 13 05:47:35.141440 systemd[1]: Started session-66.scope.
Feb 13 05:47:35.229304 sshd[6332]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:35.230646 systemd[1]: sshd@185-147.75.49.59:22-139.178.68.195:56058.service: Deactivated successfully.
Feb 13 05:47:35.231068 systemd[1]: session-66.scope: Deactivated successfully.
Feb 13 05:47:35.231405 systemd-logind[1458]: Session 66 logged out. Waiting for processes to exit.
Feb 13 05:47:35.232001 systemd-logind[1458]: Removed session 66.
Feb 13 05:47:36.227437 sshd[6327]: Failed password for root from 218.92.0.27 port 48153 ssh2
Feb 13 05:47:37.229354 systemd[1]: Started sshd@186-147.75.49.59:22-119.91.214.145:54352.service.
Feb 13 05:47:38.161362 sshd[6356]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:47:38.697359 sshd[6327]: Failed password for root from 218.92.0.27 port 48153 ssh2
Feb 13 05:47:39.902967 sshd[6356]: Failed password for root from 119.91.214.145 port 54352 ssh2
Feb 13 05:47:40.238376 systemd[1]: Started sshd@187-147.75.49.59:22-139.178.68.195:33610.service.
Feb 13 05:47:40.269319 sshd[6359]: Accepted publickey for core from 139.178.68.195 port 33610 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:40.270459 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:40.273621 systemd-logind[1458]: New session 67 of user core.
Feb 13 05:47:40.274576 systemd[1]: Started session-67.scope.
Feb 13 05:47:40.360169 sshd[6359]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:40.361471 systemd[1]: sshd@187-147.75.49.59:22-139.178.68.195:33610.service: Deactivated successfully.
Feb 13 05:47:40.361931 systemd[1]: session-67.scope: Deactivated successfully.
Feb 13 05:47:40.362325 systemd-logind[1458]: Session 67 logged out. Waiting for processes to exit.
Feb 13 05:47:40.362759 systemd-logind[1458]: Removed session 67.
Feb 13 05:47:40.900331 sshd[6356]: Received disconnect from 119.91.214.145 port 54352:11: Bye Bye [preauth]
Feb 13 05:47:40.900331 sshd[6356]: Disconnected from authenticating user root 119.91.214.145 port 54352 [preauth]
Feb 13 05:47:40.902884 systemd[1]: sshd@186-147.75.49.59:22-119.91.214.145:54352.service: Deactivated successfully.
Feb 13 05:47:41.503923 sshd[6327]: Failed password for root from 218.92.0.27 port 48153 ssh2
Feb 13 05:47:42.016843 sshd[6327]: Received disconnect from 218.92.0.27 port 48153:11: [preauth]
Feb 13 05:47:42.016843 sshd[6327]: Disconnected from authenticating user root 218.92.0.27 port 48153 [preauth]
Feb 13 05:47:42.017422 sshd[6327]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 05:47:42.019743 systemd[1]: sshd@184-147.75.49.59:22-218.92.0.27:48153.service: Deactivated successfully.
Feb 13 05:47:42.168814 systemd[1]: Started sshd@188-147.75.49.59:22-218.92.0.27:42418.service.
Feb 13 05:47:43.175698 sshd[6385]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 05:47:44.937346 sshd[6385]: Failed password for root from 218.92.0.27 port 42418 ssh2
Feb 13 05:47:45.370241 systemd[1]: Started sshd@189-147.75.49.59:22-139.178.68.195:33616.service.
Feb 13 05:47:45.401457 sshd[6388]: Accepted publickey for core from 139.178.68.195 port 33616 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:45.402483 sshd[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:45.406152 systemd-logind[1458]: New session 68 of user core.
Feb 13 05:47:45.407162 systemd[1]: Started session-68.scope.
Feb 13 05:47:45.494909 sshd[6388]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:45.496436 systemd[1]: sshd@189-147.75.49.59:22-139.178.68.195:33616.service: Deactivated successfully.
Feb 13 05:47:45.496911 systemd[1]: session-68.scope: Deactivated successfully.
Feb 13 05:47:45.497316 systemd-logind[1458]: Session 68 logged out. Waiting for processes to exit.
Feb 13 05:47:45.497869 systemd-logind[1458]: Removed session 68.
Feb 13 05:47:48.267835 sshd[6385]: Failed password for root from 218.92.0.27 port 42418 ssh2
Feb 13 05:47:50.504792 systemd[1]: Started sshd@190-147.75.49.59:22-139.178.68.195:52192.service.
Feb 13 05:47:50.535626 sshd[6415]: Accepted publickey for core from 139.178.68.195 port 52192 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:50.536504 sshd[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:50.539855 systemd-logind[1458]: New session 69 of user core.
Feb 13 05:47:50.540800 systemd[1]: Started session-69.scope.
Feb 13 05:47:50.625976 sshd[6415]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:50.627372 systemd[1]: sshd@190-147.75.49.59:22-139.178.68.195:52192.service: Deactivated successfully.
Feb 13 05:47:50.627828 systemd[1]: session-69.scope: Deactivated successfully.
Feb 13 05:47:50.628285 systemd-logind[1458]: Session 69 logged out. Waiting for processes to exit.
Feb 13 05:47:50.628781 systemd-logind[1458]: Removed session 69.
Feb 13 05:47:50.738941 sshd[6385]: Failed password for root from 218.92.0.27 port 42418 ssh2
Feb 13 05:47:51.350000 sshd[6385]: Received disconnect from 218.92.0.27 port 42418:11: [preauth]
Feb 13 05:47:51.350000 sshd[6385]: Disconnected from authenticating user root 218.92.0.27 port 42418 [preauth]
Feb 13 05:47:51.350545 sshd[6385]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 05:47:51.352677 systemd[1]: sshd@188-147.75.49.59:22-218.92.0.27:42418.service: Deactivated successfully.
Feb 13 05:47:55.635787 systemd[1]: Started sshd@191-147.75.49.59:22-139.178.68.195:52204.service.
Feb 13 05:47:55.666875 sshd[6441]: Accepted publickey for core from 139.178.68.195 port 52204 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:47:55.667790 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:47:55.671249 systemd-logind[1458]: New session 70 of user core.
Feb 13 05:47:55.671993 systemd[1]: Started session-70.scope.
Feb 13 05:47:55.762171 sshd[6441]: pam_unix(sshd:session): session closed for user core
Feb 13 05:47:55.763697 systemd[1]: sshd@191-147.75.49.59:22-139.178.68.195:52204.service: Deactivated successfully.
Feb 13 05:47:55.764138 systemd[1]: session-70.scope: Deactivated successfully.
Feb 13 05:47:55.764457 systemd-logind[1458]: Session 70 logged out. Waiting for processes to exit.
Feb 13 05:47:55.765074 systemd-logind[1458]: Removed session 70.
Feb 13 05:48:00.771535 systemd[1]: Started sshd@192-147.75.49.59:22-139.178.68.195:41908.service.
Feb 13 05:48:00.802557 sshd[6465]: Accepted publickey for core from 139.178.68.195 port 41908 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:00.803411 sshd[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:00.806449 systemd-logind[1458]: New session 71 of user core.
Feb 13 05:48:00.807162 systemd[1]: Started session-71.scope.
Feb 13 05:48:00.891314 sshd[6465]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:00.892744 systemd[1]: sshd@192-147.75.49.59:22-139.178.68.195:41908.service: Deactivated successfully.
Feb 13 05:48:00.893150 systemd[1]: session-71.scope: Deactivated successfully.
Feb 13 05:48:00.893485 systemd-logind[1458]: Session 71 logged out. Waiting for processes to exit.
Feb 13 05:48:00.894130 systemd-logind[1458]: Removed session 71.
Feb 13 05:48:05.900383 systemd[1]: Started sshd@193-147.75.49.59:22-139.178.68.195:41924.service.
Feb 13 05:48:05.931620 sshd[6490]: Accepted publickey for core from 139.178.68.195 port 41924 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:05.932670 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:05.935972 systemd-logind[1458]: New session 72 of user core.
Feb 13 05:48:05.936737 systemd[1]: Started session-72.scope.
Feb 13 05:48:06.024813 sshd[6490]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:06.026248 systemd[1]: sshd@193-147.75.49.59:22-139.178.68.195:41924.service: Deactivated successfully.
Feb 13 05:48:06.026694 systemd[1]: session-72.scope: Deactivated successfully.
Feb 13 05:48:06.027067 systemd-logind[1458]: Session 72 logged out. Waiting for processes to exit.
Feb 13 05:48:06.027483 systemd-logind[1458]: Removed session 72.
Feb 13 05:48:11.035213 systemd[1]: Started sshd@194-147.75.49.59:22-139.178.68.195:54004.service.
Feb 13 05:48:11.066577 sshd[6515]: Accepted publickey for core from 139.178.68.195 port 54004 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:11.067794 sshd[6515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:11.071216 systemd-logind[1458]: New session 73 of user core.
Feb 13 05:48:11.072041 systemd[1]: Started session-73.scope.
Feb 13 05:48:11.159404 sshd[6515]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:11.160929 systemd[1]: sshd@194-147.75.49.59:22-139.178.68.195:54004.service: Deactivated successfully.
Feb 13 05:48:11.161365 systemd[1]: session-73.scope: Deactivated successfully.
Feb 13 05:48:11.161791 systemd-logind[1458]: Session 73 logged out. Waiting for processes to exit.
Feb 13 05:48:11.162402 systemd-logind[1458]: Removed session 73.
Feb 13 05:48:16.162774 systemd[1]: Started sshd@195-147.75.49.59:22-139.178.68.195:52850.service.
Feb 13 05:48:16.194550 sshd[6541]: Accepted publickey for core from 139.178.68.195 port 52850 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:16.197844 sshd[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:16.208367 systemd-logind[1458]: New session 74 of user core.
Feb 13 05:48:16.210869 systemd[1]: Started session-74.scope.
Feb 13 05:48:16.338010 sshd[6541]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:16.339395 systemd[1]: sshd@195-147.75.49.59:22-139.178.68.195:52850.service: Deactivated successfully.
Feb 13 05:48:16.339854 systemd[1]: session-74.scope: Deactivated successfully.
Feb 13 05:48:16.340268 systemd-logind[1458]: Session 74 logged out. Waiting for processes to exit.
Feb 13 05:48:16.340750 systemd-logind[1458]: Removed session 74.
Feb 13 05:48:18.907493 systemd[1]: Started sshd@196-147.75.49.59:22-159.223.87.202:52748.service.
Feb 13 05:48:19.915517 sshd[6570]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:48:21.347922 systemd[1]: Started sshd@197-147.75.49.59:22-139.178.68.195:52852.service.
Feb 13 05:48:21.378831 sshd[6573]: Accepted publickey for core from 139.178.68.195 port 52852 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:21.379749 sshd[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:21.383093 systemd-logind[1458]: New session 75 of user core.
Feb 13 05:48:21.383856 systemd[1]: Started session-75.scope.
Feb 13 05:48:21.471755 sshd[6573]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:21.473322 systemd[1]: sshd@197-147.75.49.59:22-139.178.68.195:52852.service: Deactivated successfully.
Feb 13 05:48:21.473800 systemd[1]: session-75.scope: Deactivated successfully.
Feb 13 05:48:21.474163 systemd-logind[1458]: Session 75 logged out. Waiting for processes to exit.
Feb 13 05:48:21.474613 systemd-logind[1458]: Removed session 75.
Feb 13 05:48:21.953644 sshd[6570]: Failed password for root from 159.223.87.202 port 52748 ssh2
Feb 13 05:48:22.662302 sshd[6570]: Received disconnect from 159.223.87.202 port 52748:11: Bye Bye [preauth]
Feb 13 05:48:22.662302 sshd[6570]: Disconnected from authenticating user root 159.223.87.202 port 52748 [preauth]
Feb 13 05:48:22.664899 systemd[1]: sshd@196-147.75.49.59:22-159.223.87.202:52748.service: Deactivated successfully.
Feb 13 05:48:26.480752 systemd[1]: Started sshd@198-147.75.49.59:22-139.178.68.195:49148.service.
Feb 13 05:48:26.511838 sshd[6600]: Accepted publickey for core from 139.178.68.195 port 49148 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:26.512867 sshd[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:26.516376 systemd-logind[1458]: New session 76 of user core.
Feb 13 05:48:26.517255 systemd[1]: Started session-76.scope.
Feb 13 05:48:26.607654 sshd[6600]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:26.609157 systemd[1]: sshd@198-147.75.49.59:22-139.178.68.195:49148.service: Deactivated successfully.
Feb 13 05:48:26.609577 systemd[1]: session-76.scope: Deactivated successfully.
Feb 13 05:48:26.610015 systemd-logind[1458]: Session 76 logged out. Waiting for processes to exit.
Feb 13 05:48:26.610509 systemd-logind[1458]: Removed session 76.
Feb 13 05:48:31.616745 systemd[1]: Started sshd@199-147.75.49.59:22-139.178.68.195:49162.service.
Feb 13 05:48:31.648241 sshd[6625]: Accepted publickey for core from 139.178.68.195 port 49162 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:31.651467 sshd[6625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:31.661890 systemd-logind[1458]: New session 77 of user core.
Feb 13 05:48:31.664455 systemd[1]: Started session-77.scope.
Feb 13 05:48:31.769760 sshd[6625]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:31.771264 systemd[1]: sshd@199-147.75.49.59:22-139.178.68.195:49162.service: Deactivated successfully.
Feb 13 05:48:31.771716 systemd[1]: session-77.scope: Deactivated successfully.
Feb 13 05:48:31.772154 systemd-logind[1458]: Session 77 logged out. Waiting for processes to exit.
Feb 13 05:48:31.772678 systemd-logind[1458]: Removed session 77.
Feb 13 05:48:36.179187 systemd[1]: Started sshd@200-147.75.49.59:22-119.91.214.145:47062.service.
Feb 13 05:48:36.781071 systemd[1]: Started sshd@201-147.75.49.59:22-139.178.68.195:41912.service.
Feb 13 05:48:36.815985 sshd[6655]: Accepted publickey for core from 139.178.68.195 port 41912 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:36.816764 sshd[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:36.819830 systemd-logind[1458]: New session 78 of user core.
Feb 13 05:48:36.820489 systemd[1]: Started session-78.scope.
Feb 13 05:48:36.949411 sshd[6655]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:36.950913 systemd[1]: sshd@201-147.75.49.59:22-139.178.68.195:41912.service: Deactivated successfully.
Feb 13 05:48:36.951352 systemd[1]: session-78.scope: Deactivated successfully.
Feb 13 05:48:36.951773 systemd-logind[1458]: Session 78 logged out. Waiting for processes to exit.
Feb 13 05:48:36.952364 systemd-logind[1458]: Removed session 78.
Feb 13 05:48:37.108438 sshd[6652]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:48:38.950859 sshd[6652]: Failed password for root from 119.91.214.145 port 47062 ssh2
Feb 13 05:48:39.838916 sshd[6652]: Received disconnect from 119.91.214.145 port 47062:11: Bye Bye [preauth]
Feb 13 05:48:39.838916 sshd[6652]: Disconnected from authenticating user root 119.91.214.145 port 47062 [preauth]
Feb 13 05:48:39.839823 systemd[1]: sshd@200-147.75.49.59:22-119.91.214.145:47062.service: Deactivated successfully.
Feb 13 05:48:41.958445 systemd[1]: Started sshd@202-147.75.49.59:22-139.178.68.195:41922.service.
Feb 13 05:48:41.989550 sshd[6681]: Accepted publickey for core from 139.178.68.195 port 41922 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:41.990488 sshd[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:41.993880 systemd-logind[1458]: New session 79 of user core.
Feb 13 05:48:41.994714 systemd[1]: Started session-79.scope.
Feb 13 05:48:42.084592 sshd[6681]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:42.086171 systemd[1]: sshd@202-147.75.49.59:22-139.178.68.195:41922.service: Deactivated successfully.
Feb 13 05:48:42.086622 systemd[1]: session-79.scope: Deactivated successfully.
Feb 13 05:48:42.087047 systemd-logind[1458]: Session 79 logged out. Waiting for processes to exit.
Feb 13 05:48:42.087547 systemd-logind[1458]: Removed session 79.
Feb 13 05:48:47.094403 systemd[1]: Started sshd@203-147.75.49.59:22-139.178.68.195:40372.service.
Feb 13 05:48:47.125054 sshd[6705]: Accepted publickey for core from 139.178.68.195 port 40372 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:47.125958 sshd[6705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:47.129413 systemd-logind[1458]: New session 80 of user core.
Feb 13 05:48:47.130205 systemd[1]: Started session-80.scope.
Feb 13 05:48:47.219820 sshd[6705]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:47.225797 systemd[1]: sshd@203-147.75.49.59:22-139.178.68.195:40372.service: Deactivated successfully.
Feb 13 05:48:47.227693 systemd[1]: session-80.scope: Deactivated successfully.
Feb 13 05:48:47.229491 systemd-logind[1458]: Session 80 logged out. Waiting for processes to exit.
Feb 13 05:48:47.231702 systemd-logind[1458]: Removed session 80.
Feb 13 05:48:52.228329 systemd[1]: Started sshd@204-147.75.49.59:22-139.178.68.195:40384.service.
Feb 13 05:48:52.259509 sshd[6732]: Accepted publickey for core from 139.178.68.195 port 40384 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:52.260529 sshd[6732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:52.263852 systemd-logind[1458]: New session 81 of user core.
Feb 13 05:48:52.264661 systemd[1]: Started session-81.scope.
Feb 13 05:48:52.353562 sshd[6732]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:52.355169 systemd[1]: sshd@204-147.75.49.59:22-139.178.68.195:40384.service: Deactivated successfully.
Feb 13 05:48:52.355648 systemd[1]: session-81.scope: Deactivated successfully.
Feb 13 05:48:52.356087 systemd-logind[1458]: Session 81 logged out. Waiting for processes to exit.
Feb 13 05:48:52.356573 systemd-logind[1458]: Removed session 81.
Feb 13 05:48:57.362902 systemd[1]: Started sshd@205-147.75.49.59:22-139.178.68.195:52562.service.
Feb 13 05:48:57.393649 sshd[6757]: Accepted publickey for core from 139.178.68.195 port 52562 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:48:57.394576 sshd[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:48:57.397799 systemd-logind[1458]: New session 82 of user core.
Feb 13 05:48:57.398571 systemd[1]: Started session-82.scope.
Feb 13 05:48:57.484362 sshd[6757]: pam_unix(sshd:session): session closed for user core
Feb 13 05:48:57.485907 systemd[1]: sshd@205-147.75.49.59:22-139.178.68.195:52562.service: Deactivated successfully.
Feb 13 05:48:57.486367 systemd[1]: session-82.scope: Deactivated successfully.
Feb 13 05:48:57.486805 systemd-logind[1458]: Session 82 logged out. Waiting for processes to exit.
Feb 13 05:48:57.487412 systemd-logind[1458]: Removed session 82.
Feb 13 05:49:02.493308 systemd[1]: Started sshd@206-147.75.49.59:22-139.178.68.195:52566.service.
Feb 13 05:49:02.525103 sshd[6781]: Accepted publickey for core from 139.178.68.195 port 52566 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:49:02.528331 sshd[6781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:49:02.538874 systemd-logind[1458]: New session 83 of user core.
Feb 13 05:49:02.541274 systemd[1]: Started session-83.scope.
Feb 13 05:49:02.642361 sshd[6781]: pam_unix(sshd:session): session closed for user core
Feb 13 05:49:02.643893 systemd[1]: sshd@206-147.75.49.59:22-139.178.68.195:52566.service: Deactivated successfully.
Feb 13 05:49:02.644355 systemd[1]: session-83.scope: Deactivated successfully.
Feb 13 05:49:02.644790 systemd-logind[1458]: Session 83 logged out. Waiting for processes to exit.
Feb 13 05:49:02.645411 systemd-logind[1458]: Removed session 83.
Feb 13 05:49:07.651814 systemd[1]: Started sshd@207-147.75.49.59:22-139.178.68.195:42150.service.
Feb 13 05:49:07.682718 sshd[6804]: Accepted publickey for core from 139.178.68.195 port 42150 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:49:07.683694 sshd[6804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:49:07.687267 systemd-logind[1458]: New session 84 of user core.
Feb 13 05:49:07.688137 systemd[1]: Started session-84.scope.
Feb 13 05:49:07.777488 sshd[6804]: pam_unix(sshd:session): session closed for user core
Feb 13 05:49:07.778915 systemd[1]: sshd@207-147.75.49.59:22-139.178.68.195:42150.service: Deactivated successfully.
Feb 13 05:49:07.779341 systemd[1]: session-84.scope: Deactivated successfully.
Feb 13 05:49:07.779660 systemd-logind[1458]: Session 84 logged out. Waiting for processes to exit.
Feb 13 05:49:07.780251 systemd-logind[1458]: Removed session 84.
Feb 13 05:49:10.875708 systemd[1]: Started sshd@208-147.75.49.59:22-218.92.0.118:22271.service.
Feb 13 05:49:11.941320 sshd[6831]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root
Feb 13 05:49:12.786432 systemd[1]: Started sshd@209-147.75.49.59:22-139.178.68.195:42156.service.
Feb 13 05:49:12.817009 sshd[6834]: Accepted publickey for core from 139.178.68.195 port 42156 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:49:12.817906 sshd[6834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:49:12.821428 systemd-logind[1458]: New session 85 of user core.
Feb 13 05:49:12.822176 systemd[1]: Started session-85.scope.
Feb 13 05:49:12.911764 sshd[6834]: pam_unix(sshd:session): session closed for user core
Feb 13 05:49:12.913245 systemd[1]: sshd@209-147.75.49.59:22-139.178.68.195:42156.service: Deactivated successfully.
Feb 13 05:49:12.913680 systemd[1]: session-85.scope: Deactivated successfully.
Feb 13 05:49:12.914074 systemd-logind[1458]: Session 85 logged out. Waiting for processes to exit.
Feb 13 05:49:12.914511 systemd-logind[1458]: Removed session 85.
Feb 13 05:49:13.782809 sshd[6831]: Failed password for root from 218.92.0.118 port 22271 ssh2
Feb 13 05:49:16.936542 sshd[6831]: Failed password for root from 218.92.0.118 port 22271 ssh2
Feb 13 05:49:17.923186 systemd[1]: Started sshd@210-147.75.49.59:22-139.178.68.195:43738.service.
Feb 13 05:49:17.979484 sshd[6860]: Accepted publickey for core from 139.178.68.195 port 43738 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:49:17.980310 sshd[6860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:49:17.982829 systemd-logind[1458]: New session 86 of user core.
Feb 13 05:49:17.983321 systemd[1]: Started session-86.scope.
Feb 13 05:49:18.067297 sshd[6860]: pam_unix(sshd:session): session closed for user core
Feb 13 05:49:18.068625 systemd[1]: sshd@210-147.75.49.59:22-139.178.68.195:43738.service: Deactivated successfully.
Feb 13 05:49:18.069055 systemd[1]: session-86.scope: Deactivated successfully.
Feb 13 05:49:18.069435 systemd-logind[1458]: Session 86 logged out. Waiting for processes to exit.
Feb 13 05:49:18.070007 systemd-logind[1458]: Removed session 86.
Feb 13 05:49:20.480925 sshd[6831]: Failed password for root from 218.92.0.118 port 22271 ssh2
Feb 13 05:49:23.076444 systemd[1]: Started sshd@211-147.75.49.59:22-139.178.68.195:43742.service.
Feb 13 05:49:23.107612 sshd[6884]: Accepted publickey for core from 139.178.68.195 port 43742 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:49:23.108485 sshd[6884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:49:23.111404 systemd-logind[1458]: New session 87 of user core.
Feb 13 05:49:23.112149 systemd[1]: Started session-87.scope.
Feb 13 05:49:23.115589 sshd[6831]: Received disconnect from 218.92.0.118 port 22271:11: [preauth]
Feb 13 05:49:23.115589 sshd[6831]: Disconnected from authenticating user root 218.92.0.118 port 22271 [preauth]
Feb 13 05:49:23.115761 sshd[6831]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root
Feb 13 05:49:23.116444 systemd[1]: sshd@208-147.75.49.59:22-218.92.0.118:22271.service: Deactivated successfully.
Feb 13 05:49:23.189531 systemd[1]: Started sshd@212-147.75.49.59:22-159.223.87.202:43736.service.
Feb 13 05:49:23.195216 sshd[6884]: pam_unix(sshd:session): session closed for user core
Feb 13 05:49:23.196803 systemd[1]: sshd@211-147.75.49.59:22-139.178.68.195:43742.service: Deactivated successfully.
Feb 13 05:49:23.197119 systemd[1]: session-87.scope: Deactivated successfully.
Feb 13 05:49:23.197409 systemd-logind[1458]: Session 87 logged out. Waiting for processes to exit.
Feb 13 05:49:23.197970 systemd[1]: Started sshd@213-147.75.49.59:22-139.178.68.195:43744.service.
Feb 13 05:49:23.198385 systemd-logind[1458]: Removed session 87.
Feb 13 05:49:23.228102 sshd[6913]: Accepted publickey for core from 139.178.68.195 port 43744 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 05:49:23.228838 sshd[6913]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 05:49:23.231338 systemd-logind[1458]: New session 88 of user core.
Feb 13 05:49:23.231909 systemd[1]: Started session-88.scope.
Feb 13 05:49:23.282258 systemd[1]: Started sshd@214-147.75.49.59:22-218.92.0.118:48046.service.
Feb 13 05:49:24.186010 sshd[6909]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.223.87.202 user=root
Feb 13 05:49:24.364910 sshd[6918]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root
Feb 13 05:49:24.574376 env[1470]: time="2024-02-13T05:49:24.574136231Z" level=info msg="StopContainer for \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\" with timeout 30 (s)"
Feb 13 05:49:24.575266 env[1470]: time="2024-02-13T05:49:24.574908367Z" level=info msg="Stop container \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\" with signal terminated"
Feb 13 05:49:24.599124 systemd[1]: cri-containerd-e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc.scope: Deactivated successfully.
Feb 13 05:49:24.599605 systemd[1]: cri-containerd-e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc.scope: Consumed 3.989s CPU time.
Feb 13 05:49:24.616321 env[1470]: time="2024-02-13T05:49:24.616187676Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 05:49:24.621846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc-rootfs.mount: Deactivated successfully.
Feb 13 05:49:24.623834 env[1470]: time="2024-02-13T05:49:24.623782807Z" level=info msg="StopContainer for \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\" with timeout 1 (s)"
Feb 13 05:49:24.624361 env[1470]: time="2024-02-13T05:49:24.624111228Z" level=info msg="Stop container \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\" with signal terminated"
Feb 13 05:49:24.631398 systemd-networkd[1301]: lxc_health: Link DOWN
Feb 13 05:49:24.631408 systemd-networkd[1301]: lxc_health: Lost carrier
Feb 13 05:49:24.643153 env[1470]: time="2024-02-13T05:49:24.643102066Z" level=info msg="shim disconnected" id=e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc
Feb 13 05:49:24.643314 env[1470]: time="2024-02-13T05:49:24.643154578Z" level=warning msg="cleaning up after shim disconnected" id=e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc namespace=k8s.io
Feb 13 05:49:24.643314 env[1470]: time="2024-02-13T05:49:24.643171603Z" level=info msg="cleaning up dead shim"
Feb 13 05:49:24.652312 env[1470]: time="2024-02-13T05:49:24.652264329Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6980 runtime=io.containerd.runc.v2\n"
Feb 13 05:49:24.653753 env[1470]: time="2024-02-13T05:49:24.653680483Z" level=info msg="StopContainer for \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\" returns successfully"
Feb 13 05:49:24.654412 env[1470]: time="2024-02-13T05:49:24.654344385Z" level=info msg="StopPodSandbox for \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\""
Feb 13 05:49:24.654518 env[1470]: time="2024-02-13T05:49:24.654424759Z" level=info msg="Container to stop \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 05:49:24.657161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84-shm.mount: Deactivated successfully.
Feb 13 05:49:24.662794 systemd[1]: cri-containerd-1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84.scope: Deactivated successfully.
Feb 13 05:49:24.685643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84-rootfs.mount: Deactivated successfully.
Feb 13 05:49:24.707848 env[1470]: time="2024-02-13T05:49:24.707705856Z" level=info msg="shim disconnected" id=1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84
Feb 13 05:49:24.708193 env[1470]: time="2024-02-13T05:49:24.707853302Z" level=warning msg="cleaning up after shim disconnected" id=1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84 namespace=k8s.io
Feb 13 05:49:24.708193 env[1470]: time="2024-02-13T05:49:24.707912556Z" level=info msg="cleaning up dead shim"
Feb 13 05:49:24.723511 systemd[1]: cri-containerd-475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd.scope: Deactivated successfully.
Feb 13 05:49:24.724215 systemd[1]: cri-containerd-475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd.scope: Consumed 16.628s CPU time.
Feb 13 05:49:24.724693 env[1470]: time="2024-02-13T05:49:24.724605205Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7014 runtime=io.containerd.runc.v2\n"
Feb 13 05:49:24.725357 env[1470]: time="2024-02-13T05:49:24.725282995Z" level=info msg="TearDown network for sandbox \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\" successfully"
Feb 13 05:49:24.725357 env[1470]: time="2024-02-13T05:49:24.725339765Z" level=info msg="StopPodSandbox for \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\" returns successfully"
Feb 13 05:49:24.765454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd-rootfs.mount: Deactivated successfully.
Feb 13 05:49:24.765938 env[1470]: time="2024-02-13T05:49:24.765707917Z" level=info msg="shim disconnected" id=475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd
Feb 13 05:49:24.765938 env[1470]: time="2024-02-13T05:49:24.765911443Z" level=warning msg="cleaning up after shim disconnected" id=475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd namespace=k8s.io
Feb 13 05:49:24.766213 env[1470]: time="2024-02-13T05:49:24.765937566Z" level=info msg="cleaning up dead shim"
Feb 13 05:49:24.776025 env[1470]: time="2024-02-13T05:49:24.775978803Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7038 runtime=io.containerd.runc.v2\n"
Feb 13 05:49:24.777498 env[1470]: time="2024-02-13T05:49:24.777436077Z" level=info msg="StopContainer for \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\" returns successfully"
Feb 13 05:49:24.778049 env[1470]: time="2024-02-13T05:49:24.778000625Z" level=info msg="StopPodSandbox for \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\""
Feb 13 05:49:24.778166 env[1470]: time="2024-02-13T05:49:24.778081677Z" level=info msg="Container to stop \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 05:49:24.778166 env[1470]: time="2024-02-13T05:49:24.778105910Z" level=info msg="Container to stop \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 05:49:24.778166 env[1470]: time="2024-02-13T05:49:24.778123486Z" level=info msg="Container to stop \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 05:49:24.778166 env[1470]: time="2024-02-13T05:49:24.778139982Z" level=info msg="Container to stop \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 05:49:24.778166 env[1470]: time="2024-02-13T05:49:24.778155739Z" level=info msg="Container to stop \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 05:49:24.785505 systemd[1]: cri-containerd-7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39.scope: Deactivated successfully.
Feb 13 05:49:24.819648 kubelet[2531]: I0213 05:49:24.819602 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb-cilium-config-path\") pod \"f4a16a98-dda8-4c2a-bb92-0e54f199f0eb\" (UID: \"f4a16a98-dda8-4c2a-bb92-0e54f199f0eb\") "
Feb 13 05:49:24.820297 kubelet[2531]: I0213 05:49:24.819714 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shccd\" (UniqueName: \"kubernetes.io/projected/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb-kube-api-access-shccd\") pod \"f4a16a98-dda8-4c2a-bb92-0e54f199f0eb\" (UID: \"f4a16a98-dda8-4c2a-bb92-0e54f199f0eb\") "
Feb 13 05:49:24.820297 kubelet[2531]: W0213 05:49:24.820025 2531 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 13 05:49:24.822469 env[1470]: time="2024-02-13T05:49:24.822373838Z" level=info msg="shim disconnected" id=7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39
Feb 13 05:49:24.822739 env[1470]: time="2024-02-13T05:49:24.822479562Z" level=warning msg="cleaning up after shim disconnected" id=7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39 namespace=k8s.io
Feb 13 05:49:24.822739 env[1470]: time="2024-02-13T05:49:24.822511662Z" level=info msg="cleaning up dead shim"
Feb 13 05:49:24.824208 kubelet[2531]: I0213 05:49:24.824148 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4a16a98-dda8-4c2a-bb92-0e54f199f0eb" (UID: "f4a16a98-dda8-4c2a-bb92-0e54f199f0eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 05:49:24.825691 kubelet[2531]: I0213 05:49:24.825535 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb-kube-api-access-shccd" (OuterVolumeSpecName: "kube-api-access-shccd") pod "f4a16a98-dda8-4c2a-bb92-0e54f199f0eb" (UID: "f4a16a98-dda8-4c2a-bb92-0e54f199f0eb"). InnerVolumeSpecName "kube-api-access-shccd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 05:49:24.836868 env[1470]: time="2024-02-13T05:49:24.836794586Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7071 runtime=io.containerd.runc.v2\n"
Feb 13 05:49:24.837504 env[1470]: time="2024-02-13T05:49:24.837435974Z" level=info msg="TearDown network for sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" successfully"
Feb 13 05:49:24.837733 env[1470]: time="2024-02-13T05:49:24.837496805Z" level=info msg="StopPodSandbox for \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" returns successfully"
Feb 13 05:49:24.920359 kubelet[2531]: I0213 05:49:24.920253 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-bpf-maps\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.920359 kubelet[2531]: I0213 05:49:24.920378 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-clustermesh-secrets\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.920940 kubelet[2531]: I0213 05:49:24.920453 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2htk\" (UniqueName: \"kubernetes.io/projected/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-kube-api-access-r2htk\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.920940 kubelet[2531]: I0213 05:49:24.920431 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.920940 kubelet[2531]: I0213 05:49:24.920512 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-hostproc\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.920940 kubelet[2531]: I0213 05:49:24.920572 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-host-proc-sys-kernel\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.920940 kubelet[2531]: I0213 05:49:24.920619 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-hostproc" (OuterVolumeSpecName: "hostproc") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.920940 kubelet[2531]: I0213 05:49:24.920694 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-lib-modules\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.921632 kubelet[2531]: I0213 05:49:24.920710 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.921632 kubelet[2531]: I0213 05:49:24.920785 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-hubble-tls\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.921632 kubelet[2531]: I0213 05:49:24.920827 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.921632 kubelet[2531]: I0213 05:49:24.920883 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-cgroup\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.921632 kubelet[2531]: I0213 05:49:24.920942 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.922168 kubelet[2531]: I0213 05:49:24.921042 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-config-path\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.922168 kubelet[2531]: I0213 05:49:24.921157 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-etc-cni-netd\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.922168 kubelet[2531]: I0213 05:49:24.921245 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cni-path\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.922168 kubelet[2531]: I0213 05:49:24.921247 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.922168 kubelet[2531]: I0213 05:49:24.921343 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-run\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.922168 kubelet[2531]: I0213 05:49:24.921361 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cni-path" (OuterVolumeSpecName: "cni-path") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.922803 kubelet[2531]: I0213 05:49:24.921395 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.922803 kubelet[2531]: I0213 05:49:24.921443 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-host-proc-sys-net\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.922803 kubelet[2531]: I0213 05:49:24.921494 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.922803 kubelet[2531]: I0213 05:49:24.921555 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-xtables-lock\") pod \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\" (UID: \"bf32717c-6e57-49a3-aac5-0f3e83a5ac1a\") "
Feb 13 05:49:24.922803 kubelet[2531]: W0213 05:49:24.921578 2531 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 13 05:49:24.923309 kubelet[2531]: I0213 05:49:24.921638 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 05:49:24.923309 kubelet[2531]: I0213 05:49:24.921712 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-run\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.923309 kubelet[2531]: I0213 05:49:24.921782 2531 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-etc-cni-netd\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.923309 kubelet[2531]: I0213 05:49:24.921841 2531 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cni-path\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.923309 kubelet[2531]: I0213 05:49:24.921900 2531 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-host-proc-sys-net\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.923309 kubelet[2531]: I0213 05:49:24.921959 2531 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-shccd\" (UniqueName: \"kubernetes.io/projected/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb-kube-api-access-shccd\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.923309 kubelet[2531]: I0213 05:49:24.922015 2531 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-bpf-maps\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.924038 kubelet[2531]: I0213 05:49:24.922072 2531 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-hostproc\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.924038 kubelet[2531]: I0213 05:49:24.922133 2531 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.924038 kubelet[2531]: I0213 05:49:24.922203 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb-cilium-config-path\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.924038 kubelet[2531]: I0213 05:49:24.922264 2531 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-lib-modules\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.924038 kubelet[2531]: I0213 05:49:24.922324 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-cgroup\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:24.927091 kubelet[2531]: I0213 05:49:24.926988 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 05:49:24.927396 kubelet[2531]: I0213 05:49:24.927307 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 05:49:24.927579 kubelet[2531]: I0213 05:49:24.927464 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-kube-api-access-r2htk" (OuterVolumeSpecName: "kube-api-access-r2htk") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "kube-api-access-r2htk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 05:49:24.927800 kubelet[2531]: I0213 05:49:24.927740 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" (UID: "bf32717c-6e57-49a3-aac5-0f3e83a5ac1a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 05:49:25.023768 kubelet[2531]: I0213 05:49:25.023666 2531 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-xtables-lock\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:25.023768 kubelet[2531]: I0213 05:49:25.023745 2531 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-clustermesh-secrets\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:25.023768 kubelet[2531]: I0213 05:49:25.023792 2531 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r2htk\" (UniqueName: \"kubernetes.io/projected/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-kube-api-access-r2htk\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:25.024276 kubelet[2531]: I0213 05:49:25.023826 2531 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-hubble-tls\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:25.024276 kubelet[2531]: I0213 05:49:25.023858 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a-cilium-config-path\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\""
Feb 13 05:49:25.172135 systemd[1]: Removed slice kubepods-besteffort-podf4a16a98_dda8_4c2a_bb92_0e54f199f0eb.slice.
Feb 13 05:49:25.172198 systemd[1]: kubepods-besteffort-podf4a16a98_dda8_4c2a_bb92_0e54f199f0eb.slice: Consumed 4.019s CPU time.
Feb 13 05:49:25.172781 systemd[1]: Removed slice kubepods-burstable-podbf32717c_6e57_49a3_aac5_0f3e83a5ac1a.slice.
Feb 13 05:49:25.172858 systemd[1]: kubepods-burstable-podbf32717c_6e57_49a3_aac5_0f3e83a5ac1a.slice: Consumed 16.690s CPU time.
Feb 13 05:49:25.597267 systemd[1]: var-lib-kubelet-pods-f4a16a98\x2ddda8\x2d4c2a\x2dbb92\x2d0e54f199f0eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dshccd.mount: Deactivated successfully.
Feb 13 05:49:25.597333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39-rootfs.mount: Deactivated successfully.
Feb 13 05:49:25.597384 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39-shm.mount: Deactivated successfully.
Feb 13 05:49:25.597432 systemd[1]: var-lib-kubelet-pods-bf32717c\x2d6e57\x2d49a3\x2daac5\x2d0f3e83a5ac1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr2htk.mount: Deactivated successfully.
Feb 13 05:49:25.597484 systemd[1]: var-lib-kubelet-pods-bf32717c\x2d6e57\x2d49a3\x2daac5\x2d0f3e83a5ac1a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 05:49:25.597531 systemd[1]: var-lib-kubelet-pods-bf32717c\x2d6e57\x2d49a3\x2daac5\x2d0f3e83a5ac1a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 05:49:25.648059 kubelet[2531]: I0213 05:49:25.648017 2531 scope.go:115] "RemoveContainer" containerID="e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc"
Feb 13 05:49:25.649340 env[1470]: time="2024-02-13T05:49:25.649299216Z" level=info msg="RemoveContainer for \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\""
Feb 13 05:49:25.652448 env[1470]: time="2024-02-13T05:49:25.652414462Z" level=info msg="RemoveContainer for \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\" returns successfully"
Feb 13 05:49:25.652714 kubelet[2531]: I0213 05:49:25.652688 2531 scope.go:115] "RemoveContainer" containerID="e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc"
Feb 13 05:49:25.653020 env[1470]: time="2024-02-13T05:49:25.652933624Z" level=error msg="ContainerStatus for \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\": not found"
Feb 13 05:49:25.653174 kubelet[2531]: E0213 05:49:25.653156 2531 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\": not found" containerID="e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc"
Feb 13 05:49:25.653238 kubelet[2531]: I0213 05:49:25.653202 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc} err="failed to get container status \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0183071f90656c771c16313d1cc242714ea559a47c67a48ae7249eb268903fc\": not found"
Feb 13 05:49:25.653238 kubelet[2531]: I0213 05:49:25.653220 2531 scope.go:115] "RemoveContainer" containerID="475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd"
Feb 13 05:49:25.654657 env[1470]: time="2024-02-13T05:49:25.654614674Z" level=info msg="RemoveContainer for \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\""
Feb 13 05:49:25.657252 env[1470]: time="2024-02-13T05:49:25.657205424Z" level=info msg="RemoveContainer for \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\" returns successfully"
Feb 13 05:49:25.657478 kubelet[2531]: I0213 05:49:25.657452 2531 scope.go:115] "RemoveContainer" containerID="12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122"
Feb 13 05:49:25.659081 env[1470]: time="2024-02-13T05:49:25.659035519Z" level=info msg="RemoveContainer for \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\""
Feb 13 05:49:25.661705 env[1470]: time="2024-02-13T05:49:25.661628370Z" level=info msg="RemoveContainer for \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\" returns successfully"
Feb 13 05:49:25.661906 kubelet[2531]: I0213 05:49:25.661865 2531 scope.go:115] "RemoveContainer" containerID="a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e"
Feb 13 05:49:25.663787 env[1470]: time="2024-02-13T05:49:25.663680375Z" level=info msg="RemoveContainer for \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\""
Feb 13 05:49:25.667139 env[1470]: time="2024-02-13T05:49:25.667068463Z" level=info msg="RemoveContainer for \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\" returns successfully"
Feb 13 05:49:25.667467 kubelet[2531]: I0213 05:49:25.667425 2531 scope.go:115] "RemoveContainer" containerID="8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe"
Feb 13 05:49:25.670023 env[1470]: time="2024-02-13T05:49:25.669949136Z" level=info msg="RemoveContainer for \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\""
Feb 13 05:49:25.673853 env[1470]: time="2024-02-13T05:49:25.673756911Z" level=info msg="RemoveContainer for \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\" returns successfully"
Feb 13 05:49:25.674169 kubelet[2531]: I0213 05:49:25.674122 2531 scope.go:115] "RemoveContainer" containerID="2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f"
Feb 13 05:49:25.676422 env[1470]: time="2024-02-13T05:49:25.676361726Z" level=info msg="RemoveContainer for \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\""
Feb 13 05:49:25.679893 env[1470]: time="2024-02-13T05:49:25.679786601Z" level=info msg="RemoveContainer for \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\" returns successfully"
Feb 13 05:49:25.680221 kubelet[2531]: I0213 05:49:25.680143 2531 scope.go:115] "RemoveContainer" containerID="475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd"
Feb 13 05:49:25.680741 env[1470]: time="2024-02-13T05:49:25.680537347Z" level=error msg="ContainerStatus for \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\": not found"
Feb 13 05:49:25.680960 kubelet[2531]: E0213 05:49:25.680927 2531 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\": not found" containerID="475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd"
Feb 13 05:49:25.681103 kubelet[2531]: I0213 05:49:25.681017 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd} err="failed to get container status \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\": rpc error: code = NotFound desc
= an error occurred when try to find container \"475f147ea63e6a312835fea5de5d39bd62408121e6d0ee0333c5242ca45d27fd\": not found" Feb 13 05:49:25.681103 kubelet[2531]: I0213 05:49:25.681053 2531 scope.go:115] "RemoveContainer" containerID="12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122" Feb 13 05:49:25.681603 env[1470]: time="2024-02-13T05:49:25.681426257Z" level=error msg="ContainerStatus for \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\": not found" Feb 13 05:49:25.681900 kubelet[2531]: E0213 05:49:25.681820 2531 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\": not found" containerID="12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122" Feb 13 05:49:25.681900 kubelet[2531]: I0213 05:49:25.681899 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122} err="failed to get container status \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\": rpc error: code = NotFound desc = an error occurred when try to find container \"12c0c46322c3e929910a8416360449a6e3634d9cb2ecb67a7f609f2b6e326122\": not found" Feb 13 05:49:25.682289 kubelet[2531]: I0213 05:49:25.681928 2531 scope.go:115] "RemoveContainer" containerID="a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e" Feb 13 05:49:25.682396 env[1470]: time="2024-02-13T05:49:25.682260518Z" level=error msg="ContainerStatus for \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\": not found" Feb 13 05:49:25.682722 kubelet[2531]: E0213 05:49:25.682654 2531 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\": not found" containerID="a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e" Feb 13 05:49:25.682722 kubelet[2531]: I0213 05:49:25.682712 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e} err="failed to get container status \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4bcf9417d50078a1960f937818aa828c4527c03abc35c96bed24de10f99a11e\": not found" Feb 13 05:49:25.682722 kubelet[2531]: I0213 05:49:25.682736 2531 scope.go:115] "RemoveContainer" containerID="8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe" Feb 13 05:49:25.683264 env[1470]: time="2024-02-13T05:49:25.683085754Z" level=error msg="ContainerStatus for \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\": not found" Feb 13 05:49:25.683477 kubelet[2531]: E0213 05:49:25.683446 2531 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\": not found" containerID="8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe" Feb 13 05:49:25.683635 kubelet[2531]: I0213 05:49:25.683507 2531 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={Type:containerd ID:8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe} err="failed to get container status \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bd709d6e09d4b85b87495edcf16899fecb53d2c1fc0b49c5e2afb7eb1e282fe\": not found" Feb 13 05:49:25.683635 kubelet[2531]: I0213 05:49:25.683532 2531 scope.go:115] "RemoveContainer" containerID="2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f" Feb 13 05:49:25.683997 env[1470]: time="2024-02-13T05:49:25.683883584Z" level=error msg="ContainerStatus for \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\": not found" Feb 13 05:49:25.684255 kubelet[2531]: E0213 05:49:25.684202 2531 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\": not found" containerID="2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f" Feb 13 05:49:25.684402 kubelet[2531]: I0213 05:49:25.684271 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f} err="failed to get container status \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ce8fe142959de809152296a1c2a2865c1ed2caff396e2510403a112a426b30f\": not found" Feb 13 05:49:26.479776 sshd[6909]: Failed password for root from 159.223.87.202 port 43736 ssh2 Feb 13 05:49:26.514157 sshd[6913]: pam_unix(sshd:session): session closed for user core Feb 13 05:49:26.522453 systemd[1]: 
sshd@213-147.75.49.59:22-139.178.68.195:43744.service: Deactivated successfully. Feb 13 05:49:26.522869 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 05:49:26.523309 systemd-logind[1458]: Session 88 logged out. Waiting for processes to exit. Feb 13 05:49:26.523913 systemd[1]: Started sshd@215-147.75.49.59:22-139.178.68.195:45966.service. Feb 13 05:49:26.524442 systemd-logind[1458]: Removed session 88. Feb 13 05:49:26.554576 sshd[7088]: Accepted publickey for core from 139.178.68.195 port 45966 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:49:26.555496 sshd[7088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:49:26.558634 systemd-logind[1458]: New session 89 of user core. Feb 13 05:49:26.559432 systemd[1]: Started session-89.scope. Feb 13 05:49:26.657625 sshd[6918]: Failed password for root from 218.92.0.118 port 48046 ssh2 Feb 13 05:49:26.937786 sshd[6909]: Received disconnect from 159.223.87.202 port 43736:11: Bye Bye [preauth] Feb 13 05:49:26.937786 sshd[6909]: Disconnected from authenticating user root 159.223.87.202 port 43736 [preauth] Feb 13 05:49:26.938436 systemd[1]: sshd@212-147.75.49.59:22-159.223.87.202:43736.service: Deactivated successfully. Feb 13 05:49:27.100104 sshd[7088]: pam_unix(sshd:session): session closed for user core Feb 13 05:49:27.102784 systemd[1]: sshd@215-147.75.49.59:22-139.178.68.195:45966.service: Deactivated successfully. Feb 13 05:49:27.103407 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 05:49:27.103838 systemd-logind[1458]: Session 89 logged out. Waiting for processes to exit. Feb 13 05:49:27.104914 systemd[1]: Started sshd@216-147.75.49.59:22-139.178.68.195:45974.service. Feb 13 05:49:27.105451 systemd-logind[1458]: Removed session 89. 
Feb 13 05:49:27.108095 kubelet[2531]: I0213 05:49:27.108061 2531 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:49:27.108414 kubelet[2531]: E0213 05:49:27.108126 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" containerName="mount-cgroup" Feb 13 05:49:27.108414 kubelet[2531]: E0213 05:49:27.108137 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" containerName="apply-sysctl-overwrites" Feb 13 05:49:27.108414 kubelet[2531]: E0213 05:49:27.108145 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" containerName="mount-bpf-fs" Feb 13 05:49:27.108414 kubelet[2531]: E0213 05:49:27.108150 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" containerName="cilium-agent" Feb 13 05:49:27.108414 kubelet[2531]: E0213 05:49:27.108158 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4a16a98-dda8-4c2a-bb92-0e54f199f0eb" containerName="cilium-operator" Feb 13 05:49:27.108414 kubelet[2531]: E0213 05:49:27.108165 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" containerName="clean-cilium-state" Feb 13 05:49:27.108414 kubelet[2531]: I0213 05:49:27.108190 2531 memory_manager.go:346] "RemoveStaleState removing state" podUID="bf32717c-6e57-49a3-aac5-0f3e83a5ac1a" containerName="cilium-agent" Feb 13 05:49:27.108414 kubelet[2531]: I0213 05:49:27.108198 2531 memory_manager.go:346] "RemoveStaleState removing state" podUID="f4a16a98-dda8-4c2a-bb92-0e54f199f0eb" containerName="cilium-operator" Feb 13 05:49:27.111915 systemd[1]: Created slice kubepods-burstable-pod1c347c20_ffe7_4319_b09e_2f3f87d0f2c9.slice. 
Feb 13 05:49:27.137703 sshd[7114]: Accepted publickey for core from 139.178.68.195 port 45974 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:49:27.138488 sshd[7114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:49:27.140933 systemd-logind[1458]: New session 90 of user core. Feb 13 05:49:27.141370 systemd[1]: Started session-90.scope. Feb 13 05:49:27.173707 kubelet[2531]: I0213 05:49:27.173661 2531 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=bf32717c-6e57-49a3-aac5-0f3e83a5ac1a path="/var/lib/kubelet/pods/bf32717c-6e57-49a3-aac5-0f3e83a5ac1a/volumes" Feb 13 05:49:27.175311 kubelet[2531]: I0213 05:49:27.175275 2531 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f4a16a98-dda8-4c2a-bb92-0e54f199f0eb path="/var/lib/kubelet/pods/f4a16a98-dda8-4c2a-bb92-0e54f199f0eb/volumes" Feb 13 05:49:27.239404 kubelet[2531]: I0213 05:49:27.239297 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-cgroup\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.239404 kubelet[2531]: I0213 05:49:27.239346 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-lib-modules\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.239404 kubelet[2531]: I0213 05:49:27.239375 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cni-path\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 
05:49:27.239755 kubelet[2531]: I0213 05:49:27.239422 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-xtables-lock\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.239755 kubelet[2531]: I0213 05:49:27.239496 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-host-proc-sys-kernel\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.239755 kubelet[2531]: I0213 05:49:27.239560 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-clustermesh-secrets\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.239755 kubelet[2531]: I0213 05:49:27.239597 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-host-proc-sys-net\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.239755 kubelet[2531]: I0213 05:49:27.239667 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4nl9\" (UniqueName: \"kubernetes.io/projected/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-kube-api-access-v4nl9\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.240107 kubelet[2531]: I0213 05:49:27.239732 2531 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-bpf-maps\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.240107 kubelet[2531]: I0213 05:49:27.239773 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-run\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.240107 kubelet[2531]: I0213 05:49:27.239802 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-config-path\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.240107 kubelet[2531]: I0213 05:49:27.239851 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-hostproc\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.240107 kubelet[2531]: I0213 05:49:27.239899 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-hubble-tls\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.240107 kubelet[2531]: I0213 05:49:27.239956 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-etc-cni-netd\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.240507 kubelet[2531]: I0213 05:49:27.240027 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-ipsec-secrets\") pod \"cilium-889jz\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " pod="kube-system/cilium-889jz" Feb 13 05:49:27.278152 sshd[7114]: pam_unix(sshd:session): session closed for user core Feb 13 05:49:27.279958 systemd[1]: sshd@216-147.75.49.59:22-139.178.68.195:45974.service: Deactivated successfully. Feb 13 05:49:27.280308 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 05:49:27.280750 systemd-logind[1458]: Session 90 logged out. Waiting for processes to exit. Feb 13 05:49:27.281362 systemd[1]: Started sshd@217-147.75.49.59:22-139.178.68.195:45976.service. Feb 13 05:49:27.281772 systemd-logind[1458]: Removed session 90. Feb 13 05:49:27.312412 sshd[7139]: Accepted publickey for core from 139.178.68.195 port 45976 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:49:27.313222 sshd[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:49:27.315910 systemd-logind[1458]: New session 91 of user core. Feb 13 05:49:27.316538 systemd[1]: Started session-91.scope. Feb 13 05:49:27.413725 env[1470]: time="2024-02-13T05:49:27.413638135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-889jz,Uid:1c347c20-ffe7-4319-b09e-2f3f87d0f2c9,Namespace:kube-system,Attempt:0,}" Feb 13 05:49:27.422729 env[1470]: time="2024-02-13T05:49:27.422673470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 05:49:27.422729 env[1470]: time="2024-02-13T05:49:27.422709812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 05:49:27.422887 env[1470]: time="2024-02-13T05:49:27.422728129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 05:49:27.422945 env[1470]: time="2024-02-13T05:49:27.422900594Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd pid=7171 runtime=io.containerd.runc.v2 Feb 13 05:49:27.432027 systemd[1]: Started cri-containerd-f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd.scope. Feb 13 05:49:27.446494 env[1470]: time="2024-02-13T05:49:27.446464198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-889jz,Uid:1c347c20-ffe7-4319-b09e-2f3f87d0f2c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\"" Feb 13 05:49:27.448070 env[1470]: time="2024-02-13T05:49:27.448049583Z" level=info msg="CreateContainer within sandbox \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 05:49:27.453560 env[1470]: time="2024-02-13T05:49:27.453538159Z" level=info msg="CreateContainer within sandbox \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58\"" Feb 13 05:49:27.453903 env[1470]: time="2024-02-13T05:49:27.453841527Z" level=info msg="StartContainer for \"1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58\"" Feb 13 05:49:27.463259 systemd[1]: Started 
cri-containerd-1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58.scope. Feb 13 05:49:27.471141 systemd[1]: cri-containerd-1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58.scope: Deactivated successfully. Feb 13 05:49:27.471348 systemd[1]: Stopped cri-containerd-1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58.scope. Feb 13 05:49:27.481053 env[1470]: time="2024-02-13T05:49:27.480973915Z" level=info msg="shim disconnected" id=1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58 Feb 13 05:49:27.481053 env[1470]: time="2024-02-13T05:49:27.481027900Z" level=warning msg="cleaning up after shim disconnected" id=1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58 namespace=k8s.io Feb 13 05:49:27.481053 env[1470]: time="2024-02-13T05:49:27.481038181Z" level=info msg="cleaning up dead shim" Feb 13 05:49:27.486340 env[1470]: time="2024-02-13T05:49:27.486281180Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7232 runtime=io.containerd.runc.v2\ntime=\"2024-02-13T05:49:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 13 05:49:27.486570 env[1470]: time="2024-02-13T05:49:27.486487140Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Feb 13 05:49:27.486732 env[1470]: time="2024-02-13T05:49:27.486662760Z" level=error msg="Failed to pipe stdout of container \"1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58\"" error="reading from a closed fifo" Feb 13 05:49:27.486796 env[1470]: time="2024-02-13T05:49:27.486722606Z" level=error msg="Failed to pipe stderr of container \"1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58\"" error="reading from a closed fifo" Feb 
13 05:49:27.487496 env[1470]: time="2024-02-13T05:49:27.487458463Z" level=error msg="StartContainer for \"1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 13 05:49:27.487688 kubelet[2531]: E0213 05:49:27.487643 2531 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58" Feb 13 05:49:27.487779 kubelet[2531]: E0213 05:49:27.487741 2531 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 13 05:49:27.487779 kubelet[2531]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 13 05:49:27.487779 kubelet[2531]: rm /hostbin/cilium-mount Feb 13 05:49:27.487878 kubelet[2531]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v4nl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-889jz_kube-system(1c347c20-ffe7-4319-b09e-2f3f87d0f2c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 13 05:49:27.487878 kubelet[2531]: E0213 05:49:27.487777 2531 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: 
unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-889jz" podUID=1c347c20-ffe7-4319-b09e-2f3f87d0f2c9 Feb 13 05:49:27.659846 env[1470]: time="2024-02-13T05:49:27.659710789Z" level=info msg="StopPodSandbox for \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\"" Feb 13 05:49:27.660222 env[1470]: time="2024-02-13T05:49:27.659863074Z" level=info msg="Container to stop \"1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 05:49:27.673233 systemd[1]: cri-containerd-f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd.scope: Deactivated successfully. Feb 13 05:49:27.714676 env[1470]: time="2024-02-13T05:49:27.714511117Z" level=info msg="shim disconnected" id=f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd Feb 13 05:49:27.715086 env[1470]: time="2024-02-13T05:49:27.714675278Z" level=warning msg="cleaning up after shim disconnected" id=f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd namespace=k8s.io Feb 13 05:49:27.715086 env[1470]: time="2024-02-13T05:49:27.714725100Z" level=info msg="cleaning up dead shim" Feb 13 05:49:27.731098 env[1470]: time="2024-02-13T05:49:27.730978535Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7262 runtime=io.containerd.runc.v2\n" Feb 13 05:49:27.731774 env[1470]: time="2024-02-13T05:49:27.731673875Z" level=info msg="TearDown network for sandbox \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\" successfully" Feb 13 05:49:27.731774 env[1470]: time="2024-02-13T05:49:27.731732393Z" level=info msg="StopPodSandbox for \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\" returns successfully" Feb 13 05:49:27.843711 kubelet[2531]: I0213 05:49:27.843633 2531 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-hubble-tls\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844082 kubelet[2531]: I0213 05:49:27.843782 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-ipsec-secrets\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844082 kubelet[2531]: I0213 05:49:27.843857 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-host-proc-sys-kernel\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844082 kubelet[2531]: I0213 05:49:27.843912 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-hostproc\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844082 kubelet[2531]: I0213 05:49:27.843975 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-run\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844082 kubelet[2531]: I0213 05:49:27.843994 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: 
"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.844082 kubelet[2531]: I0213 05:49:27.844060 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-hostproc" (OuterVolumeSpecName: "hostproc") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844037 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-config-path\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844099 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844222 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-cgroup\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844298 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844332 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cni-path\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844415 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-xtables-lock\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844418 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cni-path" (OuterVolumeSpecName: "cni-path") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.844890 kubelet[2531]: W0213 05:49:27.844424 2531 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844509 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-clustermesh-secrets\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844507 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844600 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-lib-modules\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844649 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844699 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-bpf-maps\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844796 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-etc-cni-netd\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.844890 kubelet[2531]: I0213 05:49:27.844794 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.844864 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-host-proc-sys-net\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.844933 2531 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4nl9\" (UniqueName: \"kubernetes.io/projected/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-kube-api-access-v4nl9\") pod \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\" (UID: \"1c347c20-ffe7-4319-b09e-2f3f87d0f2c9\") " Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.844910 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.844968 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.845746 2531 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-xtables-lock\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.845842 2531 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-lib-modules\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.846018 2531 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-bpf-maps\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.846158 2531 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-etc-cni-netd\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.846245 2531 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.846353 2531 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-hostproc\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.846430 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-run\") on node 
\"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.846515 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-cgroup\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.847381 kubelet[2531]: I0213 05:49:27.846601 2531 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cni-path\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.852745 kubelet[2531]: I0213 05:49:27.852322 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 05:49:27.853028 kubelet[2531]: I0213 05:49:27.852752 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 05:49:27.853028 kubelet[2531]: I0213 05:49:27.852823 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 05:49:27.853525 kubelet[2531]: I0213 05:49:27.853441 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-kube-api-access-v4nl9" (OuterVolumeSpecName: "kube-api-access-v4nl9") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "kube-api-access-v4nl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 05:49:27.855095 kubelet[2531]: I0213 05:49:27.854979 2531 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" (UID: "1c347c20-ffe7-4319-b09e-2f3f87d0f2c9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 05:49:27.947097 kubelet[2531]: I0213 05:49:27.946865 2531 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-hubble-tls\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.947097 kubelet[2531]: I0213 05:49:27.946949 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.947097 kubelet[2531]: I0213 05:49:27.946990 2531 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-cilium-config-path\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.947097 kubelet[2531]: I0213 05:49:27.947027 2531 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-clustermesh-secrets\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.947097 kubelet[2531]: I0213 05:49:27.947061 2531 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v4nl9\" (UniqueName: \"kubernetes.io/projected/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-kube-api-access-v4nl9\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:27.947097 kubelet[2531]: I0213 05:49:27.947094 2531 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9-host-proc-sys-net\") on node \"ci-3510.3.2-a-69b5ddf616\" DevicePath \"\"" Feb 13 05:49:28.348121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd-rootfs.mount: Deactivated successfully. Feb 13 05:49:28.348391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd-shm.mount: Deactivated successfully. Feb 13 05:49:28.348620 systemd[1]: var-lib-kubelet-pods-1c347c20\x2dffe7\x2d4319\x2db09e\x2d2f3f87d0f2c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv4nl9.mount: Deactivated successfully. Feb 13 05:49:28.348816 systemd[1]: var-lib-kubelet-pods-1c347c20\x2dffe7\x2d4319\x2db09e\x2d2f3f87d0f2c9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 05:49:28.348940 systemd[1]: var-lib-kubelet-pods-1c347c20\x2dffe7\x2d4319\x2db09e\x2d2f3f87d0f2c9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 05:49:28.348969 systemd[1]: var-lib-kubelet-pods-1c347c20\x2dffe7\x2d4319\x2db09e\x2d2f3f87d0f2c9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 13 05:49:28.665984 kubelet[2531]: I0213 05:49:28.665804 2531 scope.go:115] "RemoveContainer" containerID="1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58" Feb 13 05:49:28.668573 env[1470]: time="2024-02-13T05:49:28.668482503Z" level=info msg="RemoveContainer for \"1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58\"" Feb 13 05:49:28.672411 env[1470]: time="2024-02-13T05:49:28.672339479Z" level=info msg="RemoveContainer for \"1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58\" returns successfully" Feb 13 05:49:28.675356 systemd[1]: Removed slice kubepods-burstable-pod1c347c20_ffe7_4319_b09e_2f3f87d0f2c9.slice. Feb 13 05:49:28.725516 kubelet[2531]: I0213 05:49:28.725475 2531 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:49:28.725765 kubelet[2531]: E0213 05:49:28.725577 2531 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" containerName="mount-cgroup" Feb 13 05:49:28.725765 kubelet[2531]: I0213 05:49:28.725640 2531 memory_manager.go:346] "RemoveStaleState removing state" podUID="1c347c20-ffe7-4319-b09e-2f3f87d0f2c9" containerName="mount-cgroup" Feb 13 05:49:28.733970 systemd[1]: Created slice kubepods-burstable-pod0d116b2c_aa8c_4146_a724_89abe41c4366.slice. 
Feb 13 05:49:28.855224 kubelet[2531]: I0213 05:49:28.855112 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-cilium-run\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.855889 kubelet[2531]: I0213 05:49:28.855271 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d116b2c-aa8c-4146-a724-89abe41c4366-cilium-ipsec-secrets\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.855889 kubelet[2531]: I0213 05:49:28.855344 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-hostproc\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.855889 kubelet[2531]: I0213 05:49:28.855411 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-xtables-lock\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.855889 kubelet[2531]: I0213 05:49:28.855566 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d116b2c-aa8c-4146-a724-89abe41c4366-clustermesh-secrets\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.855889 kubelet[2531]: I0213 05:49:28.855766 2531 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xttrf\" (UniqueName: \"kubernetes.io/projected/0d116b2c-aa8c-4146-a724-89abe41c4366-kube-api-access-xttrf\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.856457 kubelet[2531]: I0213 05:49:28.855895 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-etc-cni-netd\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.856457 kubelet[2531]: I0213 05:49:28.855975 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d116b2c-aa8c-4146-a724-89abe41c4366-cilium-config-path\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.856457 kubelet[2531]: E0213 05:49:28.856018 2531 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 05:49:28.856457 kubelet[2531]: I0213 05:49:28.856036 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-host-proc-sys-net\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.856457 kubelet[2531]: I0213 05:49:28.856207 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d116b2c-aa8c-4146-a724-89abe41c4366-hubble-tls\") pod \"cilium-vnx6t\" (UID: 
\"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.856457 kubelet[2531]: I0213 05:49:28.856318 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-lib-modules\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.856457 kubelet[2531]: I0213 05:49:28.856390 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-host-proc-sys-kernel\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.857253 kubelet[2531]: I0213 05:49:28.856512 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-cilium-cgroup\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.857253 kubelet[2531]: I0213 05:49:28.856631 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-bpf-maps\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.857253 kubelet[2531]: I0213 05:49:28.856705 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d116b2c-aa8c-4146-a724-89abe41c4366-cni-path\") pod \"cilium-vnx6t\" (UID: \"0d116b2c-aa8c-4146-a724-89abe41c4366\") " pod="kube-system/cilium-vnx6t" Feb 13 05:49:28.881921 kubelet[2531]: I0213 05:49:28.881828 2531 
setters.go:548] "Node became not ready" node="ci-3510.3.2-a-69b5ddf616" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-13 05:49:28.881690461 +0000 UTC m=+2155.767091557 LastTransitionTime:2024-02-13 05:49:28.881690461 +0000 UTC m=+2155.767091557 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 13 05:49:29.038078 env[1470]: time="2024-02-13T05:49:29.037834709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnx6t,Uid:0d116b2c-aa8c-4146-a724-89abe41c4366,Namespace:kube-system,Attempt:0,}" Feb 13 05:49:29.059478 env[1470]: time="2024-02-13T05:49:29.059275904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 05:49:29.059478 env[1470]: time="2024-02-13T05:49:29.059373790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 05:49:29.059478 env[1470]: time="2024-02-13T05:49:29.059413651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 05:49:29.059976 env[1470]: time="2024-02-13T05:49:29.059818958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd pid=7290 runtime=io.containerd.runc.v2 Feb 13 05:49:29.089142 systemd[1]: Started cri-containerd-ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd.scope. 
Feb 13 05:49:29.123204 env[1470]: time="2024-02-13T05:49:29.123146550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnx6t,Uid:0d116b2c-aa8c-4146-a724-89abe41c4366,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\"" Feb 13 05:49:29.126149 env[1470]: time="2024-02-13T05:49:29.126099940Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 05:49:29.128732 sshd[6918]: Failed password for root from 218.92.0.118 port 48046 ssh2 Feb 13 05:49:29.134321 env[1470]: time="2024-02-13T05:49:29.134249995Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8794441b91fee5c63810ade73c7320530df74d59fee17c8c237f5302cb38cb93\"" Feb 13 05:49:29.134757 env[1470]: time="2024-02-13T05:49:29.134678402Z" level=info msg="StartContainer for \"8794441b91fee5c63810ade73c7320530df74d59fee17c8c237f5302cb38cb93\"" Feb 13 05:49:29.153195 systemd[1]: Started cri-containerd-8794441b91fee5c63810ade73c7320530df74d59fee17c8c237f5302cb38cb93.scope. Feb 13 05:49:29.170824 kubelet[2531]: I0213 05:49:29.170793 2531 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=1c347c20-ffe7-4319-b09e-2f3f87d0f2c9 path="/var/lib/kubelet/pods/1c347c20-ffe7-4319-b09e-2f3f87d0f2c9/volumes" Feb 13 05:49:29.182384 env[1470]: time="2024-02-13T05:49:29.182335297Z" level=info msg="StartContainer for \"8794441b91fee5c63810ade73c7320530df74d59fee17c8c237f5302cb38cb93\" returns successfully" Feb 13 05:49:29.194831 systemd[1]: cri-containerd-8794441b91fee5c63810ade73c7320530df74d59fee17c8c237f5302cb38cb93.scope: Deactivated successfully. 
Feb 13 05:49:29.234884 env[1470]: time="2024-02-13T05:49:29.234752882Z" level=info msg="shim disconnected" id=8794441b91fee5c63810ade73c7320530df74d59fee17c8c237f5302cb38cb93 Feb 13 05:49:29.234884 env[1470]: time="2024-02-13T05:49:29.234844626Z" level=warning msg="cleaning up after shim disconnected" id=8794441b91fee5c63810ade73c7320530df74d59fee17c8c237f5302cb38cb93 namespace=k8s.io Feb 13 05:49:29.234884 env[1470]: time="2024-02-13T05:49:29.234870032Z" level=info msg="cleaning up dead shim" Feb 13 05:49:29.248459 env[1470]: time="2024-02-13T05:49:29.248354954Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7373 runtime=io.containerd.runc.v2\n" Feb 13 05:49:29.679413 env[1470]: time="2024-02-13T05:49:29.679301167Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 05:49:29.693312 env[1470]: time="2024-02-13T05:49:29.693186780Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a\"" Feb 13 05:49:29.694226 env[1470]: time="2024-02-13T05:49:29.694083842Z" level=info msg="StartContainer for \"41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a\"" Feb 13 05:49:29.701878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786456345.mount: Deactivated successfully. Feb 13 05:49:29.724162 systemd[1]: Started cri-containerd-41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a.scope. 
Feb 13 05:49:29.745978 env[1470]: time="2024-02-13T05:49:29.745909356Z" level=info msg="StartContainer for \"41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a\" returns successfully" Feb 13 05:49:29.754060 systemd[1]: cri-containerd-41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a.scope: Deactivated successfully. Feb 13 05:49:29.792272 env[1470]: time="2024-02-13T05:49:29.792205412Z" level=info msg="shim disconnected" id=41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a Feb 13 05:49:29.792537 env[1470]: time="2024-02-13T05:49:29.792276688Z" level=warning msg="cleaning up after shim disconnected" id=41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a namespace=k8s.io Feb 13 05:49:29.792537 env[1470]: time="2024-02-13T05:49:29.792296396Z" level=info msg="cleaning up dead shim" Feb 13 05:49:29.805482 env[1470]: time="2024-02-13T05:49:29.805380175Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7434 runtime=io.containerd.runc.v2\n" Feb 13 05:49:30.348784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a-rootfs.mount: Deactivated successfully. 
Feb 13 05:49:30.588356 kubelet[2531]: W0213 05:49:30.588326 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c347c20_ffe7_4319_b09e_2f3f87d0f2c9.slice/cri-containerd-1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58.scope WatchSource:0}: container "1257d06cf86d86330ec93de7b0d34128654d177959c7845abd6e94a12823ac58" in namespace "k8s.io": not found Feb 13 05:49:30.687176 env[1470]: time="2024-02-13T05:49:30.686938254Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 05:49:30.708971 env[1470]: time="2024-02-13T05:49:30.708830943Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe\"" Feb 13 05:49:30.709937 env[1470]: time="2024-02-13T05:49:30.709840240Z" level=info msg="StartContainer for \"0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe\"" Feb 13 05:49:30.746787 systemd[1]: Started cri-containerd-0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe.scope. Feb 13 05:49:30.773466 env[1470]: time="2024-02-13T05:49:30.773386507Z" level=info msg="StartContainer for \"0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe\" returns successfully" Feb 13 05:49:30.777660 systemd[1]: cri-containerd-0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe.scope: Deactivated successfully. 
Feb 13 05:49:30.798342 env[1470]: time="2024-02-13T05:49:30.798264476Z" level=info msg="shim disconnected" id=0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe Feb 13 05:49:30.798342 env[1470]: time="2024-02-13T05:49:30.798311092Z" level=warning msg="cleaning up after shim disconnected" id=0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe namespace=k8s.io Feb 13 05:49:30.798342 env[1470]: time="2024-02-13T05:49:30.798325076Z" level=info msg="cleaning up dead shim" Feb 13 05:49:30.805472 env[1470]: time="2024-02-13T05:49:30.805411335Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7489 runtime=io.containerd.runc.v2\n" Feb 13 05:49:31.348844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe-rootfs.mount: Deactivated successfully. Feb 13 05:49:31.696431 env[1470]: time="2024-02-13T05:49:31.696202420Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 05:49:31.710880 env[1470]: time="2024-02-13T05:49:31.710756622Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68\"" Feb 13 05:49:31.711951 env[1470]: time="2024-02-13T05:49:31.711816180Z" level=info msg="StartContainer for \"bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68\"" Feb 13 05:49:31.718849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271796380.mount: Deactivated successfully. Feb 13 05:49:31.742281 systemd[1]: Started cri-containerd-bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68.scope. 
Feb 13 05:49:31.762558 env[1470]: time="2024-02-13T05:49:31.762520178Z" level=info msg="StartContainer for \"bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68\" returns successfully"
Feb 13 05:49:31.763285 systemd[1]: cri-containerd-bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68.scope: Deactivated successfully.
Feb 13 05:49:31.785156 env[1470]: time="2024-02-13T05:49:31.785075576Z" level=info msg="shim disconnected" id=bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68
Feb 13 05:49:31.785156 env[1470]: time="2024-02-13T05:49:31.785126558Z" level=warning msg="cleaning up after shim disconnected" id=bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68 namespace=k8s.io
Feb 13 05:49:31.785156 env[1470]: time="2024-02-13T05:49:31.785138309Z" level=info msg="cleaning up dead shim"
Feb 13 05:49:31.791071 env[1470]: time="2024-02-13T05:49:31.791042335Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:49:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7544 runtime=io.containerd.runc.v2\n"
Feb 13 05:49:31.816890 sshd[6918]: Failed password for root from 218.92.0.118 port 48046 ssh2
Feb 13 05:49:32.348968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68-rootfs.mount: Deactivated successfully.
Feb 13 05:49:32.579367 sshd[6918]: Received disconnect from 218.92.0.118 port 48046:11: [preauth]
Feb 13 05:49:32.579367 sshd[6918]: Disconnected from authenticating user root 218.92.0.118 port 48046 [preauth]
Feb 13 05:49:32.579914 sshd[6918]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root
Feb 13 05:49:32.581954 systemd[1]: sshd@214-147.75.49.59:22-218.92.0.118:48046.service: Deactivated successfully.
Feb 13 05:49:32.705528 env[1470]: time="2024-02-13T05:49:32.705328014Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 05:49:32.725530 env[1470]: time="2024-02-13T05:49:32.725424836Z" level=info msg="CreateContainer within sandbox \"ac52a2cf01cae328a40c6100c7529c74e4cfdc30a150c69c4ef75fe9065c53dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ddd2be62a2400e19a4daf2b91f0ac364e23a11cdbe65841dd57a94a16360a97f\""
Feb 13 05:49:32.725857 env[1470]: time="2024-02-13T05:49:32.725797754Z" level=info msg="StartContainer for \"ddd2be62a2400e19a4daf2b91f0ac364e23a11cdbe65841dd57a94a16360a97f\""
Feb 13 05:49:32.734919 systemd[1]: Started cri-containerd-ddd2be62a2400e19a4daf2b91f0ac364e23a11cdbe65841dd57a94a16360a97f.scope.
Feb 13 05:49:32.748238 systemd[1]: Started sshd@218-147.75.49.59:22-218.92.0.118:47899.service.
Feb 13 05:49:32.748425 env[1470]: time="2024-02-13T05:49:32.748329637Z" level=info msg="StartContainer for \"ddd2be62a2400e19a4daf2b91f0ac364e23a11cdbe65841dd57a94a16360a97f\" returns successfully"
Feb 13 05:49:32.901593 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 05:49:33.246738 env[1470]: time="2024-02-13T05:49:33.246647659Z" level=info msg="StopPodSandbox for \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\""
Feb 13 05:49:33.247034 env[1470]: time="2024-02-13T05:49:33.246858416Z" level=info msg="TearDown network for sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" successfully"
Feb 13 05:49:33.247034 env[1470]: time="2024-02-13T05:49:33.246952076Z" level=info msg="StopPodSandbox for \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" returns successfully"
Feb 13 05:49:33.247880 env[1470]: time="2024-02-13T05:49:33.247816376Z" level=info msg="RemovePodSandbox for \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\""
Feb 13 05:49:33.248052 env[1470]: time="2024-02-13T05:49:33.247896146Z" level=info msg="Forcibly stopping sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\""
Feb 13 05:49:33.248202 env[1470]: time="2024-02-13T05:49:33.248077068Z" level=info msg="TearDown network for sandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" successfully"
Feb 13 05:49:33.254050 env[1470]: time="2024-02-13T05:49:33.253946203Z" level=info msg="RemovePodSandbox \"7827fa2ae02daa23bcd64797c78e6e66f89847b4a752941db654f07badfcde39\" returns successfully"
Feb 13 05:49:33.255870 env[1470]: time="2024-02-13T05:49:33.255725457Z" level=info msg="StopPodSandbox for \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\""
Feb 13 05:49:33.256288 env[1470]: time="2024-02-13T05:49:33.256138882Z" level=info msg="TearDown network for sandbox \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\" successfully"
Feb 13 05:49:33.256505 env[1470]: time="2024-02-13T05:49:33.256283249Z" level=info msg="StopPodSandbox for \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\" returns successfully"
Feb 13 05:49:33.258718 env[1470]: time="2024-02-13T05:49:33.258647545Z" level=info msg="RemovePodSandbox for \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\""
Feb 13 05:49:33.258944 env[1470]: time="2024-02-13T05:49:33.258734599Z" level=info msg="Forcibly stopping sandbox \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\""
Feb 13 05:49:33.258944 env[1470]: time="2024-02-13T05:49:33.258920747Z" level=info msg="TearDown network for sandbox \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\" successfully"
Feb 13 05:49:33.262772 env[1470]: time="2024-02-13T05:49:33.262672759Z" level=info msg="RemovePodSandbox \"f13ab2d2b10fadfe3c1ada70fa151f101f0d6ddec63c8a6f565b5d926c745cfd\" returns successfully"
Feb 13 05:49:33.263538 env[1470]: time="2024-02-13T05:49:33.263462047Z" level=info msg="StopPodSandbox for \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\""
Feb 13 05:49:33.263793 env[1470]: time="2024-02-13T05:49:33.263702056Z" level=info msg="TearDown network for sandbox \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\" successfully"
Feb 13 05:49:33.263941 env[1470]: time="2024-02-13T05:49:33.263794507Z" level=info msg="StopPodSandbox for \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\" returns successfully"
Feb 13 05:49:33.264558 env[1470]: time="2024-02-13T05:49:33.264496579Z" level=info msg="RemovePodSandbox for \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\""
Feb 13 05:49:33.264774 env[1470]: time="2024-02-13T05:49:33.264570925Z" level=info msg="Forcibly stopping sandbox \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\""
Feb 13 05:49:33.264899 env[1470]: time="2024-02-13T05:49:33.264777331Z" level=info msg="TearDown network for sandbox \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\" successfully"
Feb 13 05:49:33.268842 env[1470]: time="2024-02-13T05:49:33.268744841Z" level=info msg="RemovePodSandbox \"1eecf893a531642aa4e84e848377ef74cd3be6edddf1f7c0d6538ddd20745e84\" returns successfully"
Feb 13 05:49:33.474654 systemd[1]: Started sshd@219-147.75.49.59:22-119.91.214.145:53218.service.
Feb 13 05:49:33.698312 kubelet[2531]: W0213 05:49:33.698233 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d116b2c_aa8c_4146_a724_89abe41c4366.slice/cri-containerd-8794441b91fee5c63810ade73c7320530df74d59fee17c8c237f5302cb38cb93.scope WatchSource:0}: task 8794441b91fee5c63810ade73c7320530df74d59fee17c8c237f5302cb38cb93 not found: not found
Feb 13 05:49:33.713473 kubelet[2531]: I0213 05:49:33.713453 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vnx6t" podStartSLOduration=5.713428465 podCreationTimestamp="2024-02-13 05:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 05:49:33.713092655 +0000 UTC m=+2160.598493693" watchObservedRunningTime="2024-02-13 05:49:33.713428465 +0000 UTC m=+2160.598829504"
Feb 13 05:49:33.800848 sshd[7588]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root
Feb 13 05:49:34.432573 sshd[7750]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.91.214.145 user=root
Feb 13 05:49:35.662825 sshd[7588]: Failed password for root from 218.92.0.118 port 47899 ssh2
Feb 13 05:49:35.703080 systemd-networkd[1301]: lxc_health: Link UP
Feb 13 05:49:35.722543 systemd-networkd[1301]: lxc_health: Gained carrier
Feb 13 05:49:35.722738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 13 05:49:36.098736 sshd[7750]: Failed password for root from 119.91.214.145 port 53218 ssh2
Feb 13 05:49:36.807700 kubelet[2531]: W0213 05:49:36.807623 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d116b2c_aa8c_4146_a724_89abe41c4366.slice/cri-containerd-41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a.scope WatchSource:0}: task 41a85f912ac72a3e6be599901e30914c9ab953644f8d9d705e8dfc75a77b323a not found: not found
Feb 13 05:49:36.931705 systemd-networkd[1301]: lxc_health: Gained IPv6LL
Feb 13 05:49:37.171660 sshd[7750]: Received disconnect from 119.91.214.145 port 53218:11: Bye Bye [preauth]
Feb 13 05:49:37.171660 sshd[7750]: Disconnected from authenticating user root 119.91.214.145 port 53218 [preauth]
Feb 13 05:49:37.172288 systemd[1]: sshd@219-147.75.49.59:22-119.91.214.145:53218.service: Deactivated successfully.
Feb 13 05:49:38.813468 sshd[7588]: Failed password for root from 218.92.0.118 port 47899 ssh2
Feb 13 05:49:39.913570 kubelet[2531]: W0213 05:49:39.913484 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d116b2c_aa8c_4146_a724_89abe41c4366.slice/cri-containerd-0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe.scope WatchSource:0}: task 0cbafaf2df280c03a457ca82f7e8fedd54eb956770b74a59709586e67f2099fe not found: not found
Feb 13 05:49:41.283717 sshd[7588]: Failed password for root from 218.92.0.118 port 47899 ssh2
Feb 13 05:49:42.387665 sshd[7588]: Received disconnect from 218.92.0.118 port 47899:11: [preauth]
Feb 13 05:49:42.387665 sshd[7588]: Disconnected from authenticating user root 218.92.0.118 port 47899 [preauth]
Feb 13 05:49:42.388263 sshd[7588]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root
Feb 13 05:49:42.390377 systemd[1]: sshd@218-147.75.49.59:22-218.92.0.118:47899.service: Deactivated successfully.
Feb 13 05:49:43.025725 kubelet[2531]: W0213 05:49:43.025567 2531 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d116b2c_aa8c_4146_a724_89abe41c4366.slice/cri-containerd-bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68.scope WatchSource:0}: task bd3edbda80b1fd96d067e685a88ca41f25b5d9ec663ea94397b5aaa006d8ec68 not found: not found
Feb 13 05:50:27.454137 sshd[7139]: pam_unix(sshd:session): session closed for user core
Feb 13 05:50:27.455688 systemd[1]: sshd@217-147.75.49.59:22-139.178.68.195:45976.service: Deactivated successfully.
Feb 13 05:50:27.456267 systemd[1]: session-91.scope: Deactivated successfully.
Feb 13 05:50:27.456634 systemd-logind[1458]: Session 91 logged out. Waiting for processes to exit.
Feb 13 05:50:27.457147 systemd-logind[1458]: Removed session 91.