Feb 9 08:41:29.573554 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Feb 9 08:41:29.573566 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 08:41:29.573574 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 08:41:29.573578 kernel: BIOS-provided physical RAM map:
Feb 9 08:41:29.573582 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 9 08:41:29.573585 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 9 08:41:29.573590 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 9 08:41:29.573594 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 9 08:41:29.573597 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 9 08:41:29.573602 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000061f6efff] usable
Feb 9 08:41:29.573606 kernel: BIOS-e820: [mem 0x0000000061f6f000-0x0000000061f6ffff] ACPI NVS
Feb 9 08:41:29.573610 kernel: BIOS-e820: [mem 0x0000000061f70000-0x0000000061f70fff] reserved
Feb 9 08:41:29.573613 kernel: BIOS-e820: [mem 0x0000000061f71000-0x000000006c0c4fff] usable
Feb 9 08:41:29.573617 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved
Feb 9 08:41:29.573622 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable
Feb 9 08:41:29.573628 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS
Feb 9 08:41:29.573633 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved
Feb 9 08:41:29.573637 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable
Feb 9 08:41:29.573668 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved
Feb 9 08:41:29.573672 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 9 08:41:29.573676 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 9 08:41:29.573681 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 9 08:41:29.573698 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 08:41:29.573702 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 9 08:41:29.573706 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable
Feb 9 08:41:29.573728 kernel: NX (Execute Disable) protection: active
Feb 9 08:41:29.573732 kernel: SMBIOS 3.2.1 present.
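The e820 map above is the firmware's authoritative memory layout; the kernel's later "Memory: 32555728K/33281940K available" figure is derived from the usable ranges in it. A minimal sketch, not part of this boot log, of totaling those ranges from dmesg-style input (Python 3, reading stdin):

    import re
    import sys

    # Matches entries like:
    #   BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

    usable = 0
    for line in sys.stdin:                       # e.g. dmesg | python3 e820_sum.py
        m = E820.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            usable += end - start + 1            # [mem start-end] is inclusive
    print(f"usable RAM: {usable / 2**30:.2f} GiB")   # ~31.7 GiB for the map above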
Feb 9 08:41:29.573736 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Feb 9 08:41:29.573740 kernel: tsc: Detected 3400.000 MHz processor
Feb 9 08:41:29.573744 kernel: tsc: Detected 3399.906 MHz TSC
Feb 9 08:41:29.573749 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 08:41:29.573753 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 08:41:29.573757 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000
Feb 9 08:41:29.573762 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 08:41:29.573766 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000
Feb 9 08:41:29.573770 kernel: Using GB pages for direct mapping
Feb 9 08:41:29.573775 kernel: ACPI: Early table checksum verification disabled
Feb 9 08:41:29.573779 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 9 08:41:29.573784 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 9 08:41:29.573788 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013)
Feb 9 08:41:29.573794 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 9 08:41:29.573798 kernel: ACPI: FACS 0x000000006D762F80 000040
Feb 9 08:41:29.573804 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013)
Feb 9 08:41:29.573809 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013)
Feb 9 08:41:29.573813 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 9 08:41:29.573818 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 9 08:41:29.573822 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 9 08:41:29.573827 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 9 08:41:29.573831 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 9 08:41:29.573837 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 9 08:41:29.573841 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 08:41:29.573846 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 9 08:41:29.573851 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 9 08:41:29.573855 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 08:41:29.573860 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 08:41:29.573865 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 9 08:41:29.573869 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 9 08:41:29.573874 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 08:41:29.573879 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 9 08:41:29.573884 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 9 08:41:29.573888 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Feb 9 08:41:29.573893 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 9 08:41:29.573898 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 9 08:41:29.573902 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 9 08:41:29.573907 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 \xf0a 01072009 AMI 00010013)
Feb 9 08:41:29.573912 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 9 08:41:29.573917 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 9 08:41:29.573921 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 9 08:41:29.573926 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 9 08:41:29.573931 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 9 08:41:29.573935 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733]
Feb 9 08:41:29.573940 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e]
Feb 9 08:41:29.573945 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf]
Feb 9 08:41:29.573949 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863]
Feb 9 08:41:29.573954 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab]
Feb 9 08:41:29.573959 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b]
Feb 9 08:41:29.573964 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b]
Feb 9 08:41:29.573968 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0]
Feb 9 08:41:29.573973 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3]
Feb 9 08:41:29.573977 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd]
Feb 9 08:41:29.573982 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea]
Feb 9 08:41:29.573987 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27]
Feb 9 08:41:29.573991 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5]
Feb 9 08:41:29.573996 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce]
Feb 9 08:41:29.574001 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311]
Feb 9 08:41:29.574006 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab]
Feb 9 08:41:29.574010 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d]
Feb 9 08:41:29.574015 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071]
Feb 9 08:41:29.574019 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab]
Feb 9 08:41:29.574024 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103]
Feb 9 08:41:29.574028 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e]
Feb 9 08:41:29.574033 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17]
Feb 9 08:41:29.574037 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b]
Feb 9 08:41:29.574043 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93]
Feb 9 08:41:29.574047 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26]
Feb 9 08:41:29.574052 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f]
Feb 9 08:41:29.574057 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f]
Feb 9 08:41:29.574061 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf]
Feb 9 08:41:29.574066 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf]
Feb 9 08:41:29.574070 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b]
Feb 9 08:41:29.574075 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1]
Feb 9 08:41:29.574079 kernel: No NUMA configuration found
Feb 9 08:41:29.574085 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff]
Feb 9 08:41:29.574089 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff]
Feb 9 08:41:29.574094 kernel: Zone ranges:
Feb 9 08:41:29.574099 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 08:41:29.574103 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 08:41:29.574108 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff]
Feb 9 08:41:29.574112 kernel: Movable zone start for each node
Feb 9 08:41:29.574117 kernel: Early memory node ranges
Feb 9 08:41:29.574122 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 9 08:41:29.574126 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 9 08:41:29.574131 kernel: node 0: [mem 0x0000000040400000-0x0000000061f6efff]
Feb 9 08:41:29.574136 kernel: node 0: [mem 0x0000000061f71000-0x000000006c0c4fff]
Feb 9 08:41:29.574141 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff]
Feb 9 08:41:29.574145 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff]
Feb 9 08:41:29.574150 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff]
Feb 9 08:41:29.574155 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff]
Feb 9 08:41:29.574162 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 08:41:29.574168 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 9 08:41:29.574173 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 9 08:41:29.574178 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 9 08:41:29.574184 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Feb 9 08:41:29.574188 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Feb 9 08:41:29.574193 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges
Feb 9 08:41:29.574198 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 9 08:41:29.574203 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 08:41:29.574208 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 08:41:29.574214 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 08:41:29.574219 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 08:41:29.574224 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 08:41:29.574229 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 08:41:29.574233 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 08:41:29.574238 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 08:41:29.574243 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 08:41:29.574248 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 08:41:29.574253 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 08:41:29.574259 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 08:41:29.574263 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 08:41:29.574268 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 08:41:29.574273 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 08:41:29.574278 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 08:41:29.574283 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 9 08:41:29.574288 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 08:41:29.574293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 08:41:29.574298 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 08:41:29.574303 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 08:41:29.574308 kernel: TSC deadline timer available
Feb 9 08:41:29.574313 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 9 08:41:29.574318 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices
Feb 9 08:41:29.574323 kernel: Booting paravirtualized kernel on bare hardware
Feb 9 08:41:29.574328 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 08:41:29.574333 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 9 08:41:29.574338 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 08:41:29.574343 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 08:41:29.574348 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 9 08:41:29.574353 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323
Feb 9 08:41:29.574358 kernel: Policy zone: Normal
Feb 9 08:41:29.574364 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 08:41:29.574369 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 08:41:29.574374 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 9 08:41:29.574379 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 9 08:41:29.574384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 08:41:29.574390 kernel: Memory: 32555728K/33281940K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 725952K reserved, 0K cma-reserved)
Feb 9 08:41:29.574395 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 9 08:41:29.574400 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 08:41:29.574404 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 08:41:29.574409 kernel: rcu: Hierarchical RCU implementation.
Feb 9 08:41:29.574414 kernel: rcu: RCU event tracing is enabled.
Feb 9 08:41:29.574419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 9 08:41:29.574424 kernel: Rude variant of Tasks RCU enabled.
Feb 9 08:41:29.574429 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 08:41:29.574435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
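The "Kernel command line" entry above shows dracut prepending rootflags=rw mount.usrflags=ro to the GRUB-provided arguments, so several keys appear twice. A minimal sketch, illustrative rather than from this system, of splitting such a command line into parameters, where the last occurrence of a duplicated key wins in the flat representation (the kernel itself keeps both console= entries, with the last becoming the primary console):

    import shlex

    def parse_cmdline(text: str) -> dict:
        """Split a kernel command line into {key: value} pairs.
        Bare words (e.g. flatcar.autologin) become True; for keys that
        appear more than once (rootflags, console above) the last wins."""
        params = {}
        for token in shlex.split(text):
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    with open("/proc/cmdline") as f:
        args = parse_cmdline(f.read())
    print(args.get("root"))      # LABEL=ROOT
    print(args.get("console"))   # ttyS1,115200n8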
Feb 9 08:41:29.574440 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 9 08:41:29.574445 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 9 08:41:29.574450 kernel: random: crng init done
Feb 9 08:41:29.574454 kernel: Console: colour dummy device 80x25
Feb 9 08:41:29.574459 kernel: printk: console [tty0] enabled
Feb 9 08:41:29.574464 kernel: printk: console [ttyS1] enabled
Feb 9 08:41:29.574469 kernel: ACPI: Core revision 20210730
Feb 9 08:41:29.574474 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Feb 9 08:41:29.574480 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 08:41:29.574485 kernel: DMAR: Host address width 39
Feb 9 08:41:29.574490 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Feb 9 08:41:29.574494 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Feb 9 08:41:29.574499 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 9 08:41:29.574504 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 9 08:41:29.574509 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff
Feb 9 08:41:29.574514 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff
Feb 9 08:41:29.574519 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Feb 9 08:41:29.574525 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 9 08:41:29.574530 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 9 08:41:29.574535 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 9 08:41:29.574539 kernel: x2apic enabled
Feb 9 08:41:29.574544 kernel: Switched APIC routing to cluster x2apic.
Feb 9 08:41:29.574549 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 08:41:29.574554 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 9 08:41:29.574559 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 9 08:41:29.574564 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 9 08:41:29.574570 kernel: process: using mwait in idle threads
Feb 9 08:41:29.574575 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 08:41:29.574580 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 08:41:29.574585 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 08:41:29.574590 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 08:41:29.574594 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 08:41:29.574599 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 08:41:29.574604 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 08:41:29.574609 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 08:41:29.574615 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 08:41:29.574620 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 08:41:29.574625 kernel: TAA: Mitigation: TSX disabled
Feb 9 08:41:29.574630 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 9 08:41:29.574635 kernel: SRBDS: Mitigation: Microcode
Feb 9 08:41:29.574641 kernel: GDS: Vulnerable: No microcode
Feb 9 08:41:29.574646 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 08:41:29.574651 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 08:41:29.574675 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 08:41:29.574680 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 08:41:29.574685 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 08:41:29.574690 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 08:41:29.574695 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 08:41:29.574700 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 08:41:29.574705 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 9 08:41:29.574710 kernel: Freeing SMP alternatives memory: 32K
Feb 9 08:41:29.574715 kernel: pid_max: default: 32768 minimum: 301
Feb 9 08:41:29.574720 kernel: LSM: Security Framework initializing
Feb 9 08:41:29.574726 kernel: SELinux: Initializing.
Feb 9 08:41:29.574731 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 08:41:29.574736 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 08:41:29.574741 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 9 08:41:29.574746 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 08:41:29.574751 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 9 08:41:29.574757 kernel: ... version: 4
Feb 9 08:41:29.574762 kernel: ... bit width: 48
Feb 9 08:41:29.574767 kernel: ... generic registers: 4
Feb 9 08:41:29.574772 kernel: ... value mask: 0000ffffffffffff
Feb 9 08:41:29.574777 kernel: ... max period: 00007fffffffffff
Feb 9 08:41:29.574782 kernel: ... fixed-purpose events: 3
Feb 9 08:41:29.574787 kernel: ... event mask: 000000070000000f
Feb 9 08:41:29.574792 kernel: signal: max sigframe size: 2032
Feb 9 08:41:29.574797 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 08:41:29.574802 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 9 08:41:29.574808 kernel: smp: Bringing up secondary CPUs ...
Feb 9 08:41:29.574813 kernel: x86: Booting SMP configuration:
Feb 9 08:41:29.574818 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 9 08:41:29.574823 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 08:41:29.574829 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 9 08:41:29.574834 kernel: smp: Brought up 1 node, 16 CPUs
Feb 9 08:41:29.574838 kernel: smpboot: Max logical packages: 1
Feb 9 08:41:29.574844 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 9 08:41:29.574848 kernel: devtmpfs: initialized
Feb 9 08:41:29.574853 kernel: x86/mm: Memory block size: 128MB
Feb 9 08:41:29.574858 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x61f6f000-0x61f6ffff] (4096 bytes)
Feb 9 08:41:29.574864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes)
Feb 9 08:41:29.574869 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 08:41:29.574875 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 9 08:41:29.574880 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 08:41:29.574885 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 08:41:29.574890 kernel: audit: initializing netlink subsys (disabled)
Feb 9 08:41:29.574895 kernel: audit: type=2000 audit(1707468084.110:1): state=initialized audit_enabled=0 res=1
Feb 9 08:41:29.574900 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 08:41:29.574905 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 08:41:29.574910 kernel: cpuidle: using governor menu
Feb 9 08:41:29.574915 kernel: ACPI: bus type PCI registered
Feb 9 08:41:29.574920 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 08:41:29.574925 kernel: dca service started, version 1.12.1
Feb 9 08:41:29.574930 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 9 08:41:29.574936 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 9 08:41:29.574940 kernel: PCI: Using configuration type 1 for base access
Feb 9 08:41:29.574946 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 9 08:41:29.574951 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
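The mitigation lines above (Enhanced IBRS, TSX disabled, "GDS: Vulnerable: No microcode", the MMIO Stale Data SMT warning) correspond to files the kernel exposes after boot under /sys/devices/system/cpu/vulnerabilities. A minimal sketch, assuming the standard sysfs layout, that dumps them:

    from pathlib import Path

    # One file per CPU vulnerability, matching the boot-time lines above
    # (spectre_v2, retbleed, gather_data_sampling, mmio_stale_data, ...).
    VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULNS.iterdir()):
        print(f"{entry.name:28} {entry.read_text().strip()}")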
Feb 9 08:41:29.574956 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 08:41:29.574961 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 08:41:29.574966 kernel: ACPI: Added _OSI(Module Device)
Feb 9 08:41:29.574971 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 08:41:29.574976 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 08:41:29.574981 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 08:41:29.574986 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 08:41:29.574991 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 08:41:29.574997 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 08:41:29.575002 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 9 08:41:29.575007 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 08:41:29.575012 kernel: ACPI: SSDT 0xFFFF8D7E80215700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 9 08:41:29.575017 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 9 08:41:29.575022 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 08:41:29.575027 kernel: ACPI: SSDT 0xFFFF8D7E81CE9800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 9 08:41:29.575032 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 08:41:29.575037 kernel: ACPI: SSDT 0xFFFF8D7E81C5E800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 9 08:41:29.575042 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 08:41:29.575048 kernel: ACPI: SSDT 0xFFFF8D7E81C5C800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 9 08:41:29.575053 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 08:41:29.575058 kernel: ACPI: SSDT 0xFFFF8D7E8014E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 9 08:41:29.575063 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 08:41:29.575068 kernel: ACPI: SSDT 0xFFFF8D7E81CECC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 9 08:41:29.575073 kernel: ACPI: Interpreter enabled
Feb 9 08:41:29.575078 kernel: ACPI: PM: (supports S0 S5)
Feb 9 08:41:29.575082 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 08:41:29.575087 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 9 08:41:29.575094 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 9 08:41:29.575099 kernel: HEST: Table parsing has been initialized.
Feb 9 08:41:29.575104 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 9 08:41:29.575109 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 08:41:29.575114 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 9 08:41:29.575119 kernel: ACPI: PM: Power Resource [USBC]
Feb 9 08:41:29.575124 kernel: ACPI: PM: Power Resource [V0PR]
Feb 9 08:41:29.575129 kernel: ACPI: PM: Power Resource [V1PR]
Feb 9 08:41:29.575133 kernel: ACPI: PM: Power Resource [V2PR]
Feb 9 08:41:29.575139 kernel: ACPI: PM: Power Resource [WRST]
Feb 9 08:41:29.575144 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Feb 9 08:41:29.575149 kernel: ACPI: PM: Power Resource [FN00]
Feb 9 08:41:29.575154 kernel: ACPI: PM: Power Resource [FN01]
Feb 9 08:41:29.575159 kernel: ACPI: PM: Power Resource [FN02]
Feb 9 08:41:29.575164 kernel: ACPI: PM: Power Resource [FN03]
Feb 9 08:41:29.575169 kernel: ACPI: PM: Power Resource [FN04]
Feb 9 08:41:29.575174 kernel: ACPI: PM: Power Resource [PIN]
Feb 9 08:41:29.575179 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 9 08:41:29.575244 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 08:41:29.575291 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 9 08:41:29.575332 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 9 08:41:29.575339 kernel: PCI host bridge to bus 0000:00
Feb 9 08:41:29.575385 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 08:41:29.575422 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 08:41:29.575459 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 08:41:29.575497 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window]
Feb 9 08:41:29.575533 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 9 08:41:29.575568 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 9 08:41:29.575617 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 9 08:41:29.575669 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 9 08:41:29.575714 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 9 08:41:29.575763 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Feb 9 08:41:29.575805 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Feb 9 08:41:29.575852 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Feb 9 08:41:29.575895 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit]
Feb 9 08:41:29.575940 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Feb 9 08:41:29.575981 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Feb 9 08:41:29.576029 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 9 08:41:29.576074 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit]
Feb 9 08:41:29.576119 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 9 08:41:29.576161 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit]
Feb 9 08:41:29.576206 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 9 08:41:29.576248 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit]
Feb 9 08:41:29.576291 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 9 08:41:29.576336 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 9 08:41:29.576378 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit]
Feb 9 08:41:29.576419 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit]
Feb 9 08:41:29.576465 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 9 08:41:29.576506 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 08:41:29.576550 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 9 08:41:29.576594 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 08:41:29.576643 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 9 08:41:29.576685 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit]
Feb 9 08:41:29.576728 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 9 08:41:29.576782 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 9 08:41:29.576826 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit]
Feb 9 08:41:29.576871 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 9 08:41:29.576915 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 9 08:41:29.576957 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit]
Feb 9 08:41:29.576997 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 9 08:41:29.577041 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 9 08:41:29.577083 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff]
Feb 9 08:41:29.577123 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff]
Feb 9 08:41:29.577167 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Feb 9 08:41:29.577208 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Feb 9 08:41:29.577250 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Feb 9 08:41:29.577291 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff]
Feb 9 08:41:29.577333 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 9 08:41:29.577379 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 9 08:41:29.577423 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 9 08:41:29.577472 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 9 08:41:29.577516 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 9 08:41:29.577563 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 9 08:41:29.577608 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 9 08:41:29.577657 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 9 08:41:29.577699 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 9 08:41:29.577745 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Feb 9 08:41:29.577788 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Feb 9 08:41:29.577833 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 9 08:41:29.577875 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 08:41:29.577923 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 9 08:41:29.577971 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 9 08:41:29.578013 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit]
Feb 9 08:41:29.578055 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 9 08:41:29.578101 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 9 08:41:29.578143 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 9 08:41:29.578188 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 08:41:29.578235 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Feb 9 08:41:29.578280 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 9 08:41:29.578323 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref]
Feb 9 08:41:29.578367 kernel: pci 0000:02:00.0: PME# supported from D3cold
Feb 9 08:41:29.578409 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 08:41:29.578453 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 08:41:29.578502 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Feb 9 08:41:29.578546 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 9 08:41:29.578589 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref]
Feb 9 08:41:29.578632 kernel: pci 0000:02:00.1: PME# supported from D3cold
Feb 9 08:41:29.578723 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 08:41:29.578766 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 08:41:29.578810 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Feb 9 08:41:29.578854 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Feb 9 08:41:29.578916 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 08:41:29.578958 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Feb 9 08:41:29.579006 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 9 08:41:29.579049 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff]
Feb 9 08:41:29.579092 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 9 08:41:29.579133 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff]
Feb 9 08:41:29.579177 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 9 08:41:29.579220 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Feb 9 08:41:29.579261 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 9 08:41:29.579302 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff]
Feb 9 08:41:29.579349 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Feb 9 08:41:29.579392 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff]
Feb 9 08:41:29.579434 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 9 08:41:29.579550 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff]
Feb 9 08:41:29.579595 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Feb 9 08:41:29.579640 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Feb 9 08:41:29.579702 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 9 08:41:29.579744 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff]
Feb 9 08:41:29.579785 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Feb 9 08:41:29.579831 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400
Feb 9 08:41:29.579874 kernel: pci 0000:07:00.0: enabling Extended Tags
Feb 9 08:41:29.579916 kernel: pci 0000:07:00.0: supports D1 D2
Feb 9 08:41:29.579962 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 08:41:29.580004 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Feb 9 08:41:29.580046 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Feb 9 08:41:29.580088 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff]
Feb 9 08:41:29.580136 kernel: pci_bus 0000:08: extended config space not accessible
Feb 9 08:41:29.580186 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000
Feb 9 08:41:29.580231 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff]
Feb 9 08:41:29.580277 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff]
Feb 9 08:41:29.580322 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f]
Feb 9 08:41:29.580366 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 08:41:29.580411 kernel: pci 0000:08:00.0: supports D1 D2
Feb 9 08:41:29.580456 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 08:41:29.580499 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Feb 9 08:41:29.580541 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Feb 9 08:41:29.580587 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff]
Feb 9 08:41:29.580595 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Feb 9 08:41:29.580600 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Feb 9 08:41:29.580606 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Feb 9 08:41:29.580611 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Feb 9 08:41:29.580616 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Feb 9 08:41:29.580621 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Feb 9 08:41:29.580627 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Feb 9 08:41:29.580632 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Feb 9 08:41:29.580641 kernel: iommu: Default domain type: Translated
Feb 9 08:41:29.580646 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 08:41:29.580736 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device
Feb 9 08:41:29.580782 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 08:41:29.580826 kernel: pci 0000:08:00.0: vgaarb: bridge control possible
Feb 9 08:41:29.580833 kernel: vgaarb: loaded
Feb 9 08:41:29.580839 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 08:41:29.580844 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 08:41:29.580850 kernel: PTP clock support registered
Feb 9 08:41:29.580856 kernel: PCI: Using ACPI for IRQ routing
Feb 9 08:41:29.580862 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 08:41:29.580867 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Feb 9 08:41:29.580872 kernel: e820: reserve RAM buffer [mem 0x61f6f000-0x63ffffff]
Feb 9 08:41:29.580877 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff]
Feb 9 08:41:29.580883 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff]
Feb 9 08:41:29.580888 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff]
Feb 9 08:41:29.580893 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 9 08:41:29.580898 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Feb 9 08:41:29.580904 kernel: clocksource: Switched to clocksource tsc-early
Feb 9 08:41:29.580910 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 08:41:29.580915 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 08:41:29.580920 kernel: pnp: PnP ACPI init
Feb 9 08:41:29.580962 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Feb 9 08:41:29.581004 kernel: pnp 00:02: [dma 0 disabled]
Feb 9 08:41:29.581044 kernel: pnp 00:03: [dma 0 disabled]
Feb 9 08:41:29.581087 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Feb 9 08:41:29.581124 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Feb 9 08:41:29.581167 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Feb 9 08:41:29.581206 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Feb 9 08:41:29.581245 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Feb 9 08:41:29.581282 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Feb 9 08:41:29.581318 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Feb 9 08:41:29.581357 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Feb 9 08:41:29.581393 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Feb 9 08:41:29.581430 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Feb 9 08:41:29.581467 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Feb 9 08:41:29.581507 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Feb 9 08:41:29.581543 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Feb 9 08:41:29.581582 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Feb 9 08:41:29.581618 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Feb 9 08:41:29.581677 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Feb 9 08:41:29.581734 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Feb 9 08:41:29.581770 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Feb 9 08:41:29.581810 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Feb 9 08:41:29.581817 kernel: pnp: PnP ACPI: found 10 devices
Feb 9 08:41:29.581823 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 08:41:29.581830 kernel: NET: Registered PF_INET protocol family
Feb 9 08:41:29.581835 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 08:41:29.581840 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 08:41:29.581846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 08:41:29.581851 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 08:41:29.581856 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 08:41:29.581861 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Feb 9 08:41:29.581867 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 9 08:41:29.581873 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 9 08:41:29.581878 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 08:41:29.581884 kernel: NET: Registered PF_XDP protocol family
Feb 9 08:41:29.581924 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit]
Feb 9 08:41:29.581966 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit]
Feb 9 08:41:29.582007 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit]
Feb 9 08:41:29.582049 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 08:41:29.582093 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 9 08:41:29.582135 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 9 08:41:29.582181 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 9 08:41:29.582223 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 9 08:41:29.582265 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Feb 9 08:41:29.582306 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Feb 9 08:41:29.582350 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 08:41:29.582391 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Feb 9 08:41:29.582432 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Feb 9 08:41:29.582475 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 9 08:41:29.582515 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff]
Feb 9 08:41:29.582557 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Feb 9 08:41:29.582598 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 9 08:41:29.582642 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff]
Feb 9 08:41:29.582708 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Feb 9 08:41:29.582754 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Feb 9 08:41:29.582797 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Feb 9 08:41:29.582840 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff]
Feb 9 08:41:29.582882 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Feb 9 08:41:29.582923 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Feb 9 08:41:29.582966 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff]
Feb 9 08:41:29.583004 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Feb 9 08:41:29.583042 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 08:41:29.583081 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 08:41:29.583117 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 08:41:29.583155 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window]
Feb 9 08:41:29.583191 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Feb 9 08:41:29.583233 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff]
Feb 9 08:41:29.583272 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 08:41:29.583316 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff]
Feb 9 08:41:29.583357 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff]
Feb 9 08:41:29.583401 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 9 08:41:29.583441 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff]
Feb 9 08:41:29.583485 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Feb 9 08:41:29.583524 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff]
Feb 9 08:41:29.583564 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff]
Feb 9 08:41:29.583605 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff]
Feb 9 08:41:29.583613 kernel: PCI: CLS 64 bytes, default 64
Feb 9 08:41:29.583619 kernel: DMAR: No ATSR found
Feb 9 08:41:29.583624 kernel: DMAR: No SATC found
Feb 9 08:41:29.583630 kernel: DMAR: IOMMU feature fl1gp_support inconsistent
Feb 9 08:41:29.583635 kernel: DMAR: IOMMU feature pgsel_inv inconsistent
Feb 9 08:41:29.583643 kernel: DMAR: IOMMU feature nwfs inconsistent
Feb 9 08:41:29.583648 kernel: DMAR: IOMMU feature pasid inconsistent
Feb 9 08:41:29.583679 kernel: DMAR: IOMMU feature eafs inconsistent
Feb 9 08:41:29.583684 kernel: DMAR: IOMMU feature prs inconsistent
Feb 9 08:41:29.583691 kernel: DMAR: IOMMU feature nest inconsistent
Feb 9 08:41:29.583715 kernel: DMAR: IOMMU feature mts inconsistent
Feb 9 08:41:29.583721 kernel: DMAR: IOMMU feature sc_support inconsistent
Feb 9 08:41:29.583726 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent
Feb 9 08:41:29.583731 kernel: DMAR: dmar0: Using Queued invalidation
Feb 9 08:41:29.583737 kernel: DMAR: dmar1: Using Queued invalidation
Feb 9 08:41:29.583780 kernel: pci 0000:00:00.0: Adding to iommu group 0
Feb 9 08:41:29.583824 kernel: pci 0000:00:01.0: Adding to iommu group 1
Feb 9 08:41:29.583868 kernel: pci 0000:00:01.1: Adding to iommu group 1
Feb 9 08:41:29.583929 kernel: pci 0000:00:02.0: Adding to iommu group 2
Feb 9 08:41:29.583971 kernel: pci 0000:00:08.0: Adding to iommu group 3
Feb 9 08:41:29.584012 kernel: pci 0000:00:12.0: Adding to iommu group 4
Feb 9 08:41:29.584053 kernel: pci 0000:00:14.0: Adding to iommu group 5
Feb 9 08:41:29.584093 kernel: pci 0000:00:14.2: Adding to iommu group 5
Feb 9 08:41:29.584134 kernel: pci 0000:00:15.0: Adding to iommu group 6
Feb 9 08:41:29.584174 kernel: pci 0000:00:15.1: Adding to iommu group 6
Feb 9 08:41:29.584215 kernel: pci 0000:00:16.0: Adding to iommu group 7
Feb 9 08:41:29.584258 kernel: pci 0000:00:16.1: Adding to iommu group 7
Feb 9 08:41:29.584298 kernel: pci 0000:00:16.4: Adding to iommu group 7
Feb 9 08:41:29.584339 kernel: pci 0000:00:17.0: Adding to iommu group 8
Feb 9 08:41:29.584380 kernel: pci 0000:00:1b.0: Adding to iommu group 9
Feb 9 08:41:29.584421 kernel: pci 0000:00:1b.4: Adding to iommu group 10
Feb 9 08:41:29.584461 kernel: pci 0000:00:1b.5: Adding to iommu group 11
Feb 9 08:41:29.584502 kernel: pci 0000:00:1c.0: Adding to iommu group 12
Feb 9 08:41:29.584543 kernel: pci 0000:00:1c.1: Adding to iommu group 13
Feb 9 08:41:29.584585 kernel: pci 0000:00:1e.0: Adding to iommu group 14
Feb 9 08:41:29.584627 kernel: pci 0000:00:1f.0: Adding to iommu group 15
Feb 9 08:41:29.584715 kernel: pci 0000:00:1f.4: Adding to iommu group 15
Feb 9 08:41:29.584757 kernel: pci 0000:00:1f.5: Adding to iommu group 15
Feb 9 08:41:29.584800 kernel: pci 0000:02:00.0: Adding to iommu group 1
Feb 9 08:41:29.584844 kernel: pci 0000:02:00.1: Adding to iommu group 1
Feb 9 08:41:29.584886 kernel: pci 0000:04:00.0: Adding to iommu group 16
Feb 9 08:41:29.584930 kernel: pci 0000:05:00.0: Adding to iommu group 17
Feb 9 08:41:29.584975 kernel: pci 0000:07:00.0: Adding to iommu group 18
Feb 9 08:41:29.585020 kernel: pci 0000:08:00.0: Adding to iommu group 18
Feb 9 08:41:29.585027 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Feb 9 08:41:29.585033 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 08:41:29.585038 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB)
Feb 9 08:41:29.585044 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
Feb 9 08:41:29.585049 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Feb 9 08:41:29.585054 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Feb 9 08:41:29.585061 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Feb 9 08:41:29.585066 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
Feb 9 08:41:29.585112 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Feb 9 08:41:29.585120 kernel: Initialise system trusted keyrings
Feb 9 08:41:29.585125 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Feb 9 08:41:29.585130 kernel: Key type asymmetric registered
Feb 9 08:41:29.585135 kernel: Asymmetric key parser 'x509' registered
Feb 9 08:41:29.585141 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 08:41:29.585147 kernel: io scheduler mq-deadline registered
Feb 9 08:41:29.585153 kernel: io scheduler kyber registered
Feb 9 08:41:29.585158 kernel: io scheduler bfq registered
Feb 9 08:41:29.585198 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122
Feb 9 08:41:29.585241 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123
Feb 9 08:41:29.585282 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124
Feb 9 08:41:29.585324 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125
Feb 9 08:41:29.585364 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126
Feb 9 08:41:29.585408 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127
Feb 9 08:41:29.585449 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128
Feb 9 08:41:29.585495 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Feb 9 08:41:29.585503 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Feb 9 08:41:29.585508 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Feb 9 08:41:29.585514 kernel: pstore: Registered erst as persistent store backend
Feb 9 08:41:29.585519 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 08:41:29.585524 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 08:41:29.585531 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 08:41:29.585536 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 08:41:29.585579 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Feb 9 08:41:29.585588 kernel: i8042: PNP: No PS/2 controller found.
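The "Adding to iommu group N" assignments logged above persist after boot as a sysfs tree, one directory per group with a symlink per member device; functions that cannot be isolated from each other share a group (above, the bridge 0000:07:00.0 and the VGA device 0000:08:00.0 both land in group 18). A minimal sketch, assuming the standard sysfs layout, that reprints the grouping:

    from pathlib import Path

    # /sys/kernel/iommu_groups/<N>/devices/ holds one symlink per PCI
    # function in group N; group 18 here would list 0000:07:00.0 and 0000:08:00.0.
    groups = Path("/sys/kernel/iommu_groups")
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name}: {', '.join(devices)}")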
Feb 9 08:41:29.585624 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Feb 9 08:41:29.585711 kernel: rtc_cmos rtc_cmos: registered as rtc0
Feb 9 08:41:29.585748 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T08:41:28 UTC (1707468088)
Feb 9 08:41:29.585786 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Feb 9 08:41:29.585795 kernel: fail to initialize ptp_kvm
Feb 9 08:41:29.585800 kernel: intel_pstate: Intel P-state driver initializing
Feb 9 08:41:29.585805 kernel: intel_pstate: Disabling energy efficiency optimization
Feb 9 08:41:29.585811 kernel: intel_pstate: HWP enabled
Feb 9 08:41:29.585816 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Feb 9 08:41:29.585821 kernel: vesafb: scrolling: redraw
Feb 9 08:41:29.585826 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Feb 9 08:41:29.585831 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x000000003c376e70, using 768k, total 768k
Feb 9 08:41:29.585838 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 08:41:29.585843 kernel: fb0: VESA VGA frame buffer device
Feb 9 08:41:29.585848 kernel: NET: Registered PF_INET6 protocol family
Feb 9 08:41:29.585853 kernel: Segment Routing with IPv6
Feb 9 08:41:29.585859 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 08:41:29.585864 kernel: NET: Registered PF_PACKET protocol family
Feb 9 08:41:29.585869 kernel: Key type dns_resolver registered
Feb 9 08:41:29.585874 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Feb 9 08:41:29.585879 kernel: microcode: Microcode Update Driver: v2.2.
Feb 9 08:41:29.585885 kernel: IPI shorthand broadcast: enabled
Feb 9 08:41:29.585891 kernel: sched_clock: Marking stable (2313861165, 1348876696)->(4609383897, -946646036)
Feb 9 08:41:29.585896 kernel: registered taskstats version 1
Feb 9 08:41:29.585902 kernel: Loading compiled-in X.509 certificates
Feb 9 08:41:29.585907 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 9 08:41:29.585912 kernel: Key type .fscrypt registered
Feb 9 08:41:29.585917 kernel: Key type fscrypt-provisioning registered
Feb 9 08:41:29.585922 kernel: pstore: Using crash dump compression: deflate
Feb 9 08:41:29.585928 kernel: ima: Allocated hash algorithm: sha1
Feb 9 08:41:29.585934 kernel: ima: No architecture policies found
Feb 9 08:41:29.585939 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 08:41:29.585944 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 08:41:29.585950 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 08:41:29.585955 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 08:41:29.585960 kernel: Run /init as init process
Feb 9 08:41:29.585965 kernel: with arguments:
Feb 9 08:41:29.585970 kernel: /init
Feb 9 08:41:29.585976 kernel: with environment:
Feb 9 08:41:29.585982 kernel: HOME=/
Feb 9 08:41:29.585987 kernel: TERM=linux
Feb 9 08:41:29.585992 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 08:41:29.585998 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 08:41:29.586005 systemd[1]: Detected architecture x86-64.
Feb 9 08:41:29.586010 systemd[1]: Running in initrd.
Feb 9 08:41:29.586016 systemd[1]: No hostname configured, using default hostname. Feb 9 08:41:29.586021 systemd[1]: Hostname set to . Feb 9 08:41:29.586027 systemd[1]: Initializing machine ID from random generator. Feb 9 08:41:29.586033 systemd[1]: Queued start job for default target initrd.target. Feb 9 08:41:29.586038 systemd[1]: Started systemd-ask-password-console.path. Feb 9 08:41:29.586044 systemd[1]: Reached target cryptsetup.target. Feb 9 08:41:29.586049 systemd[1]: Reached target ignition-diskful-subsequent.target. Feb 9 08:41:29.586054 systemd[1]: Reached target paths.target. Feb 9 08:41:29.586060 systemd[1]: Reached target slices.target. Feb 9 08:41:29.586065 systemd[1]: Reached target swap.target. Feb 9 08:41:29.586071 systemd[1]: Reached target timers.target. Feb 9 08:41:29.586077 systemd[1]: Listening on iscsid.socket. Feb 9 08:41:29.586082 systemd[1]: Listening on iscsiuio.socket. Feb 9 08:41:29.586088 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 08:41:29.586093 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 08:41:29.586098 kernel: tsc: Refined TSC clocksource calibration: 3408.091 MHz Feb 9 08:41:29.586104 systemd[1]: Listening on systemd-journald.socket. Feb 9 08:41:29.586109 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x312029d2519, max_idle_ns: 440795330833 ns Feb 9 08:41:29.586115 kernel: clocksource: Switched to clocksource tsc Feb 9 08:41:29.586121 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 08:41:29.586126 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 08:41:29.586131 systemd[1]: Reached target sockets.target. Feb 9 08:41:29.586137 systemd[1]: Starting iscsiuio.service... Feb 9 08:41:29.586142 systemd[1]: Starting kmod-static-nodes.service... Feb 9 08:41:29.586148 kernel: SCSI subsystem initialized Feb 9 08:41:29.586153 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 08:41:29.586158 kernel: Loading iSCSI transport class v2.0-870. Feb 9 08:41:29.586165 systemd[1]: Starting systemd-journald.service... Feb 9 08:41:29.586170 systemd[1]: Starting systemd-modules-load.service... Feb 9 08:41:29.586177 systemd-journald[269]: Journal started Feb 9 08:41:29.586203 systemd-journald[269]: Runtime Journal (/run/log/journal/5f6f364a3c9e4f2d9faaf6d98daa566c) is 8.0M, max 636.8M, 628.8M free. Feb 9 08:41:29.588673 systemd-modules-load[270]: Inserted module 'overlay' Feb 9 08:41:29.612642 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 08:41:29.645691 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 08:41:29.645707 systemd[1]: Started iscsiuio.service. Feb 9 08:41:29.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.671700 kernel: Bridge firewalling registered Feb 9 08:41:29.671715 kernel: audit: type=1130 audit(1707468089.670:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.671723 systemd[1]: Started systemd-journald.service. Feb 9 08:41:29.731303 systemd-modules-load[270]: Inserted module 'br_netfilter' Feb 9 08:41:29.849507 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 9 08:41:29.849520 kernel: audit: type=1130 audit(1707468089.749:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.849545 kernel: device-mapper: uevent: version 1.0.3 Feb 9 08:41:29.849551 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 08:41:29.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.750152 systemd[1]: Finished kmod-static-nodes.service. Feb 9 08:41:29.901720 kernel: audit: type=1130 audit(1707468089.858:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.851600 systemd-modules-load[270]: Inserted module 'dm_multipath' Feb 9 08:41:29.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.858928 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 08:41:30.007679 kernel: audit: type=1130 audit(1707468089.910:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.007690 kernel: audit: type=1130 audit(1707468089.962:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.910915 systemd[1]: Finished systemd-modules-load.service. Feb 9 08:41:30.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:29.962905 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 08:41:30.062790 kernel: audit: type=1130 audit(1707468090.015:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.016229 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 08:41:30.062974 systemd[1]: Starting systemd-sysctl.service... Feb 9 08:41:30.063250 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 08:41:30.066016 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 08:41:30.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.066831 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 08:41:30.115744 kernel: audit: type=1130 audit(1707468090.065:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.127989 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 08:41:30.227949 kernel: audit: type=1130 audit(1707468090.127:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.227987 kernel: audit: type=1130 audit(1707468090.176:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.177272 systemd[1]: Starting dracut-cmdline.service... Feb 9 08:41:30.259752 kernel: iscsi: registered transport (tcp) Feb 9 08:41:30.259766 dracut-cmdline[295]: dracut-dracut-053 Feb 9 08:41:30.259766 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 08:41:30.259766 dracut-cmdline[295]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 08:41:30.360783 kernel: iscsi: registered transport (qla4xxx) Feb 9 08:41:30.360817 kernel: QLogic iSCSI HBA Driver Feb 9 08:41:30.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.319556 systemd[1]: Finished dracut-cmdline.service. Feb 9 08:41:30.342512 systemd[1]: Starting dracut-pre-udev.service... Feb 9 08:41:30.369238 systemd[1]: Starting iscsid.service... Feb 9 08:41:30.425769 kernel: raid6: avx2x4 gen() 36128 MB/s Feb 9 08:41:30.425781 kernel: raid6: avx2x4 xor() 18670 MB/s Feb 9 08:41:30.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.387841 systemd[1]: Started iscsid.service. Feb 9 08:41:30.443759 iscsid[457]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 08:41:30.443759 iscsid[457]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 08:41:30.443759 iscsid[457]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 08:41:30.443759 iscsid[457]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 08:41:30.443759 iscsid[457]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 08:41:30.443759 iscsid[457]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 08:41:30.443759 iscsid[457]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 08:41:30.608758 kernel: raid6: avx2x2 gen() 53796 MB/s Feb 9 08:41:30.608769 kernel: raid6: avx2x2 xor() 32104 MB/s Feb 9 08:41:30.608776 kernel: raid6: avx2x1 gen() 45221 MB/s Feb 9 08:41:30.608783 kernel: raid6: avx2x1 xor() 27913 MB/s Feb 9 08:41:30.608789 kernel: raid6: sse2x4 gen() 21363 MB/s Feb 9 08:41:30.651678 kernel: raid6: sse2x4 xor() 11987 MB/s Feb 9 08:41:30.686727 kernel: raid6: sse2x2 gen() 21659 MB/s Feb 9 08:41:30.721722 kernel: raid6: sse2x2 xor() 13431 MB/s Feb 9 08:41:30.754681 kernel: raid6: sse2x1 gen() 18310 MB/s Feb 9 08:41:30.807432 kernel: raid6: sse2x1 xor() 8926 MB/s Feb 9 08:41:30.807447 kernel: raid6: using algorithm avx2x2 gen() 53796 MB/s Feb 9 08:41:30.807455 kernel: raid6: .... xor() 32104 MB/s, rmw enabled Feb 9 08:41:30.825895 kernel: raid6: using avx2x2 recovery algorithm Feb 9 08:41:30.872693 kernel: xor: automatically using best checksumming function avx Feb 9 08:41:30.951648 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 08:41:30.956572 systemd[1]: Finished dracut-pre-udev.service. Feb 9 08:41:30.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.964000 audit: BPF prog-id=6 op=LOAD Feb 9 08:41:30.964000 audit: BPF prog-id=7 op=LOAD Feb 9 08:41:30.965661 systemd[1]: Starting systemd-udevd.service... Feb 9 08:41:30.973775 systemd-udevd[473]: Using default interface naming scheme 'v252'. Feb 9 08:41:30.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:30.978874 systemd[1]: Started systemd-udevd.service. Feb 9 08:41:31.017760 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Feb 9 08:41:30.994364 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 08:41:31.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:31.022775 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 08:41:31.034922 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 08:41:31.115866 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 08:41:31.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:31.116547 systemd[1]: Starting dracut-initqueue.service... Feb 9 08:41:31.145649 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 08:41:31.184314 kernel: ACPI: bus type USB registered Feb 9 08:41:31.184352 kernel: usbcore: registered new interface driver usbfs Feb 9 08:41:31.184384 kernel: usbcore: registered new interface driver hub Feb 9 08:41:31.220686 kernel: usbcore: registered new device driver usb Feb 9 08:41:31.224846 kernel: libata version 3.00 loaded. Feb 9 08:41:31.246647 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 08:41:31.246683 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Feb 9 08:41:31.285509 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 08:41:31.285603 kernel: AES CTR mode by8 optimization enabled Feb 9 08:41:31.302644 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 08:41:31.302718 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 08:41:31.339665 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 9 08:41:31.339748 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 9 08:41:31.339758 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 08:41:31.411657 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 08:41:31.411922 kernel: scsi host0: ahci Feb 9 08:41:31.411968 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 08:41:31.412170 kernel: pps pps0: new PPS source ptp0 Feb 9 08:41:31.412426 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 9 08:41:31.412488 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 08:41:31.412549 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:66 Feb 9 08:41:31.412598 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 9 08:41:31.412651 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 08:41:31.412703 kernel: scsi host1: ahci Feb 9 08:41:31.445677 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 08:41:31.451686 kernel: scsi host2: ahci Feb 9 08:41:31.451709 kernel: pps pps1: new PPS source ptp1 Feb 9 08:41:31.451775 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 9 08:41:31.451834 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 08:41:31.451886 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:67 Feb 9 08:41:31.451936 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 9 08:41:31.451987 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 9 08:41:31.478697 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 08:41:31.478770 kernel: scsi host3: ahci Feb 9 08:41:31.478786 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 9 08:41:31.494371 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 08:41:31.494465 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 9 08:41:31.494519 kernel: hub 1-0:1.0: USB hub found Feb 9 08:41:31.525703 kernel: scsi host4: ahci Feb 9 08:41:31.537422 kernel: hub 1-0:1.0: 16 ports detected Feb 9 08:41:31.537497 kernel: scsi host5: ahci Feb 9 08:41:31.543683 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 9 08:41:31.546644 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 08:41:31.579098 kernel: hub 2-0:1.0: USB hub found Feb 9 08:41:31.579179 kernel: scsi host6: ahci Feb 9 08:41:31.591517 kernel: hub 2-0:1.0: 10 ports detected Feb 9 08:41:31.591615 kernel: scsi host7: ahci Feb 9 08:41:31.621208 kernel: usb: port power management may be unreliable Feb 9 08:41:31.621228 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129 Feb 9 08:41:31.800712 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 08:41:31.800738 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129 Feb 9 08:41:31.933948 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129 Feb 9 08:41:31.933964 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Feb 9 08:41:31.951022 kernel: hub 1-14:1.0: USB hub found Feb 9 08:41:31.951103 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Feb 9 08:41:31.995403 kernel: hub 1-14:1.0: 4 ports detected Feb 9 08:41:31.995478 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129 Feb 9 08:41:32.029700 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Feb 9 08:41:32.029715 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Feb 9 08:41:32.067644 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 08:41:32.297732 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 08:41:32.298180 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 9 08:41:32.321710 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Feb 9 08:41:32.354964 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 08:41:32.372644 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 08:41:32.372662 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 9 08:41:32.386643 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 08:41:32.402646 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 08:41:32.417694 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 08:41:32.431668 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 08:41:32.446671 kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 9 08:41:32.446682 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 08:41:32.460642 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 08:41:32.493641 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 08:41:32.509694 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 08:41:32.556443 kernel: 
ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 08:41:32.556459 kernel: ata1.00: Features: NCQ-prio Feb 9 08:41:32.556467 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 08:41:32.584579 kernel: ata2.00: Features: NCQ-prio Feb 9 08:41:32.602688 kernel: ata1.00: configured for UDMA/133 Feb 9 08:41:32.602720 kernel: ata2.00: configured for UDMA/133 Feb 9 08:41:32.602727 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 08:41:32.631685 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 08:41:32.648644 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 08:41:32.682786 kernel: port_module: 9 callbacks suppressed Feb 9 08:41:32.682846 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 9 08:41:32.682966 kernel: usbcore: registered new interface driver usbhid Feb 9 08:41:32.713642 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 08:41:32.713760 kernel: usbhid: USB HID core driver Feb 9 08:41:32.777645 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 08:41:32.777661 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 08:41:32.791794 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 08:41:32.805818 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 08:41:32.805904 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 08:41:32.829710 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Feb 9 08:41:32.829784 kernel: sd 1:0:0:0: [sda] Write Protect is off Feb 9 08:41:32.829840 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 08:41:32.829904 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 08:41:32.829912 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 08:41:32.839778 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Feb 9 08:41:32.854243 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 08:41:32.854325 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 08:41:32.868092 kernel: sd 0:0:0:0: [sdb] Write Protect is off Feb 9 08:41:32.963187 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 08:41:32.963204 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 08:41:32.963274 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 08:41:33.045219 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 08:41:33.045298 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Feb 9 08:41:33.080700 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 08:41:33.128510 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 08:41:33.128526 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 08:41:33.128534 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Feb 9 08:41:33.181529 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Feb 9 08:41:33.242738 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by (udev-worker) (523) Feb 9 08:41:33.242752 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 08:41:33.242821 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Feb 9 08:41:33.211759 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 08:41:33.277742 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Feb 9 08:41:33.221230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 08:41:33.260546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 08:41:33.288785 systemd[1]: Reached target initrd-root-device.target. Feb 9 08:41:33.305241 systemd[1]: Starting disk-uuid.service... Feb 9 08:41:33.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.313994 systemd[1]: Finished dracut-initqueue.service. Feb 9 08:41:33.481885 kernel: audit: type=1130 audit(1707468093.333:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.481912 kernel: audit: type=1130 audit(1707468093.385:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.481921 kernel: audit: type=1131 audit(1707468093.385:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.333995 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 08:41:33.334056 systemd[1]: Finished disk-uuid.service. Feb 9 08:41:33.387065 systemd[1]: Reached target local-fs-pre.target. Feb 9 08:41:33.490867 systemd[1]: Reached target local-fs.target. Feb 9 08:41:33.490977 systemd[1]: Reached target remote-fs-pre.target. Feb 9 08:41:33.517815 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 08:41:33.517927 systemd[1]: Reached target remote-fs.target. Feb 9 08:41:33.539847 systemd[1]: Reached target sysinit.target. Feb 9 08:41:33.551858 systemd[1]: Reached target basic.target. Feb 9 08:41:33.661698 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 08:41:33.661713 kernel: audit: type=1130 audit(1707468093.611:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.569033 systemd[1]: Starting dracut-pre-mount.service... 
Feb 9 08:41:33.582271 systemd[1]: Starting verity-setup.service... Feb 9 08:41:33.588891 systemd[1]: Finished dracut-pre-mount.service. Feb 9 08:41:33.613612 systemd[1]: Starting systemd-fsck-root.service... Feb 9 08:41:33.663120 systemd[1]: Found device dev-mapper-usr.device. Feb 9 08:41:33.663466 systemd[1]: Mounting sysusr-usr.mount... Feb 9 08:41:33.663558 systemd[1]: Finished verity-setup.service. Feb 9 08:41:33.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.671753 systemd-fsck[721]: ROOT: clean, 638/553520 files, 133045/553472 blocks Feb 9 08:41:33.760719 kernel: audit: type=1130 audit(1707468093.662:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.760751 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 08:41:33.742009 systemd[1]: Finished systemd-fsck-root.service. Feb 9 08:41:33.822409 kernel: audit: type=1130 audit(1707468093.770:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:33.771352 systemd[1]: Mounted sysusr-usr.mount. Feb 9 08:41:33.831232 systemd[1]: Mounting sysroot.mount... Feb 9 08:41:33.866692 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 08:41:33.866718 systemd[1]: Mounted sysroot.mount. Feb 9 08:41:33.873875 systemd[1]: Reached target initrd-root-fs.target. Feb 9 08:41:33.887555 systemd[1]: Mounting sysroot-usr.mount... Feb 9 08:41:33.910488 systemd[1]: Mounted sysroot-usr.mount. Feb 9 08:41:33.920675 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 08:41:34.016613 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 9 08:41:34.016651 kernel: BTRFS info (device sdb6): using free space tree Feb 9 08:41:34.016668 kernel: BTRFS info (device sdb6): has skinny extents Feb 9 08:41:34.016680 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 9 08:41:33.960109 systemd[1]: Starting initrd-setup-root.service... Feb 9 08:41:34.024944 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 08:41:34.040346 systemd[1]: Finished initrd-setup-root.service. Feb 9 08:41:34.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.057742 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 08:41:34.133836 kernel: audit: type=1130 audit(1707468094.055:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.122923 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 08:41:34.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 08:41:34.196081 initrd-setup-root-after-ignition[806]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 08:41:34.219890 kernel: audit: type=1130 audit(1707468094.143:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.143883 systemd[1]: Reached target ignition-subsequent.target. Feb 9 08:41:34.205157 systemd[1]: Starting initrd-parse-etc.service... Feb 9 08:41:34.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.232693 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 08:41:34.318859 kernel: audit: type=1130 audit(1707468094.242:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.232744 systemd[1]: Finished initrd-parse-etc.service. Feb 9 08:41:34.242896 systemd[1]: Reached target initrd-fs.target. Feb 9 08:41:34.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.303852 systemd[1]: Reached target initrd.target. Feb 9 08:41:34.303907 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 08:41:34.304256 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 08:41:34.325975 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 08:41:34.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.342270 systemd[1]: Starting initrd-cleanup.service... Feb 9 08:41:34.359838 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 08:41:34.370928 systemd[1]: Stopped target timers.target. Feb 9 08:41:34.389298 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 08:41:34.389629 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 08:41:34.406460 systemd[1]: Stopped target initrd.target. Feb 9 08:41:34.420093 systemd[1]: Stopped target basic.target. Feb 9 08:41:34.434329 systemd[1]: Stopped target ignition-subsequent.target. Feb 9 08:41:34.451210 systemd[1]: Stopped target ignition-diskful-subsequent.target. Feb 9 08:41:34.468191 systemd[1]: Stopped target initrd-root-device.target. Feb 9 08:41:34.485208 systemd[1]: Stopped target paths.target. Feb 9 08:41:34.499329 systemd[1]: Stopped target remote-fs.target. Feb 9 08:41:34.514186 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 08:41:34.529325 systemd[1]: Stopped target slices.target. Feb 9 08:41:34.545327 systemd[1]: Stopped target sockets.target. Feb 9 08:41:34.562202 systemd[1]: Stopped target sysinit.target. Feb 9 08:41:34.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:41:34.578188 systemd[1]: Stopped target local-fs.target. Feb 9 08:41:34.593194 systemd[1]: Stopped target local-fs-pre.target. Feb 9 08:41:34.608187 systemd[1]: Stopped target swap.target. Feb 9 08:41:34.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.624112 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 08:41:34.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.624340 systemd[1]: Closed iscsid.socket. Feb 9 08:41:34.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.638343 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 08:41:34.638674 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 08:41:34.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.653398 systemd[1]: Stopped target cryptsetup.target. Feb 9 08:41:34.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.668095 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 08:41:34.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.669970 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 08:41:34.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.683038 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 08:41:34.683355 systemd[1]: Stopped dracut-initqueue.service. Feb 9 08:41:34.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.698318 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 08:41:34.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.698662 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 08:41:34.715277 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 08:41:34.715587 systemd[1]: Stopped initrd-setup-root.service. Feb 9 08:41:34.732624 systemd[1]: Stopping iscsiuio.service... Feb 9 08:41:34.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.747841 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 9 08:41:34.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.748175 systemd[1]: Stopped systemd-sysctl.service. Feb 9 08:41:34.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.766354 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 08:41:34.766689 systemd[1]: Stopped systemd-modules-load.service. Feb 9 08:41:34.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.781203 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 08:41:35.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.781511 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 08:41:35.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.796284 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 08:41:35.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:35.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.796593 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 08:41:35.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:35.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:34.814817 systemd[1]: Stopping systemd-udevd.service... Feb 9 08:41:34.835281 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 08:41:34.837146 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 08:41:34.837345 systemd[1]: Stopped iscsiuio.service. Feb 9 08:41:34.845836 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 08:41:34.846115 systemd[1]: Stopped systemd-udevd.service. Feb 9 08:41:34.861869 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 08:41:34.862027 systemd[1]: Closed iscsiuio.socket. Feb 9 08:41:35.167773 iscsid[457]: iscsid shutting down. Feb 9 08:41:34.876951 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 08:41:34.877080 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 08:41:34.893034 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 08:41:34.893121 systemd[1]: Closed systemd-udevd-kernel.socket. 
Feb 9 08:41:34.911032 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 08:41:34.911169 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 08:41:34.927102 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 08:41:34.927235 systemd[1]: Stopped dracut-cmdline.service. Feb 9 08:41:34.942115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 08:41:34.942252 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 08:41:34.960805 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 08:41:34.973814 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 08:41:34.974046 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 08:41:34.992309 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 08:41:34.992438 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 08:41:35.009113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 08:41:35.009247 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 08:41:35.030421 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 08:41:35.031781 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 08:41:35.032056 systemd[1]: Finished initrd-cleanup.service. Feb 9 08:41:35.168657 systemd-journald[269]: Received SIGTERM from PID 1 (n/a). Feb 9 08:41:35.044431 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 08:41:35.044621 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 08:41:35.063899 systemd[1]: Reached target initrd-switch-root.target. Feb 9 08:41:35.078357 systemd[1]: Starting initrd-switch-root.service... Feb 9 08:41:35.111225 systemd[1]: Switching root. Feb 9 08:41:35.168898 systemd-journald[269]: Journal stopped Feb 9 08:41:39.187132 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 08:41:39.187146 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 08:41:39.187155 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 08:41:39.187176 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 08:41:39.187180 kernel: SELinux: policy capability open_perms=1 Feb 9 08:41:39.187185 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 08:41:39.187191 kernel: SELinux: policy capability always_check_network=0 Feb 9 08:41:39.187196 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 08:41:39.187201 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 08:41:39.187207 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 08:41:39.187213 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 08:41:39.187219 systemd[1]: Successfully loaded SELinux policy in 288.767ms. Feb 9 08:41:39.187225 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.045ms. Feb 9 08:41:39.187232 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 08:41:39.187240 systemd[1]: Detected architecture x86-64. Feb 9 08:41:39.187245 systemd[1]: Detected first boot. Feb 9 08:41:39.187251 systemd[1]: Hostname set to . Feb 9 08:41:39.187258 systemd[1]: Initializing machine ID from random generator. 
Feb 9 08:41:39.187263 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 08:41:39.187269 systemd[1]: Populated /etc with preset unit settings. Feb 9 08:41:39.187275 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 08:41:39.187282 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 08:41:39.187289 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 08:41:39.187295 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 08:41:39.187301 systemd[1]: Stopped iscsid.service. Feb 9 08:41:39.187306 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 08:41:39.187313 systemd[1]: Stopped initrd-switch-root.service. Feb 9 08:41:39.187320 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 08:41:39.187326 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 08:41:39.187332 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 08:41:39.187338 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 08:41:39.187344 systemd[1]: Created slice system-getty.slice. Feb 9 08:41:39.187350 systemd[1]: Created slice system-modprobe.slice. Feb 9 08:41:39.187356 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 08:41:39.187362 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 08:41:39.187368 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 08:41:39.187375 systemd[1]: Created slice user.slice. Feb 9 08:41:39.187381 systemd[1]: Started systemd-ask-password-console.path. Feb 9 08:41:39.187387 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 08:41:39.187393 systemd[1]: Set up automount boot.automount. Feb 9 08:41:39.187401 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 08:41:39.187408 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 08:41:39.187414 systemd[1]: Stopped target initrd-fs.target. Feb 9 08:41:39.187420 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 08:41:39.187427 systemd[1]: Reached target integritysetup.target. Feb 9 08:41:39.187434 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 08:41:39.187440 systemd[1]: Reached target remote-fs.target. Feb 9 08:41:39.187446 systemd[1]: Reached target slices.target. Feb 9 08:41:39.187452 systemd[1]: Reached target swap.target. Feb 9 08:41:39.187458 systemd[1]: Reached target torcx.target. Feb 9 08:41:39.187465 systemd[1]: Reached target veritysetup.target. Feb 9 08:41:39.187471 systemd[1]: Listening on systemd-coredump.socket. Feb 9 08:41:39.187479 systemd[1]: Listening on systemd-initctl.socket. Feb 9 08:41:39.187485 systemd[1]: Listening on systemd-networkd.socket. Feb 9 08:41:39.187491 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 08:41:39.187498 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 08:41:39.187504 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 08:41:39.187510 systemd[1]: Mounting dev-hugepages.mount... Feb 9 08:41:39.187518 systemd[1]: Mounting dev-mqueue.mount... Feb 9 08:41:39.187524 systemd[1]: Mounting media.mount... 
Feb 9 08:41:39.187531 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 08:41:39.187537 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 08:41:39.187543 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 08:41:39.187550 systemd[1]: Mounting tmp.mount... Feb 9 08:41:39.187556 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 08:41:39.187562 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 08:41:39.187570 systemd[1]: Starting kmod-static-nodes.service... Feb 9 08:41:39.187576 systemd[1]: Starting modprobe@configfs.service... Feb 9 08:41:39.187583 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 08:41:39.187589 systemd[1]: Starting modprobe@drm.service... Feb 9 08:41:39.187596 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 08:41:39.187602 systemd[1]: Starting modprobe@fuse.service... Feb 9 08:41:39.187609 kernel: fuse: init (API version 7.34) Feb 9 08:41:39.187615 systemd[1]: Starting modprobe@loop.service... Feb 9 08:41:39.187621 kernel: loop: module loaded Feb 9 08:41:39.187628 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 08:41:39.187635 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 08:41:39.187659 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 08:41:39.187666 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 08:41:39.187672 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 9 08:41:39.187679 kernel: audit: type=1131 audit(1707468098.828:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.187685 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 08:41:39.187705 kernel: audit: type=1131 audit(1707468098.916:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.187712 systemd[1]: Stopped systemd-journald.service. Feb 9 08:41:39.187719 kernel: audit: type=1130 audit(1707468098.980:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.187725 kernel: audit: type=1131 audit(1707468098.980:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.187730 kernel: audit: type=1334 audit(1707468099.065:82): prog-id=16 op=LOAD Feb 9 08:41:39.187736 kernel: audit: type=1334 audit(1707468099.084:83): prog-id=17 op=LOAD Feb 9 08:41:39.187742 kernel: audit: type=1334 audit(1707468099.102:84): prog-id=18 op=LOAD Feb 9 08:41:39.187747 kernel: audit: type=1334 audit(1707468099.120:85): prog-id=14 op=UNLOAD Feb 9 08:41:39.187754 systemd[1]: Starting systemd-journald.service... Feb 9 08:41:39.187761 kernel: audit: type=1334 audit(1707468099.120:86): prog-id=15 op=UNLOAD Feb 9 08:41:39.187767 systemd[1]: Starting systemd-modules-load.service... 
Feb 9 08:41:39.187773 kernel: audit: type=1305 audit(1707468099.184:87): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 08:41:39.187781 systemd-journald[947]: Journal started Feb 9 08:41:39.187806 systemd-journald[947]: Runtime Journal (/run/log/journal/b7ee702377d04ce2a9dd2216583f9f7d) is 8.0M, max 636.8M, 628.8M free. Feb 9 08:41:35.668000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 08:41:35.931000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 08:41:35.934000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 08:41:35.934000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 08:41:35.934000 audit: BPF prog-id=8 op=LOAD Feb 9 08:41:35.934000 audit: BPF prog-id=8 op=UNLOAD Feb 9 08:41:35.934000 audit: BPF prog-id=9 op=LOAD Feb 9 08:41:35.934000 audit: BPF prog-id=9 op=UNLOAD Feb 9 08:41:36.002000 audit[839]: AVC avc: denied { associate } for pid=839 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 08:41:36.002000 audit[839]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=822 pid=839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:41:36.002000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 08:41:36.027000 audit[839]: AVC avc: denied { associate } for pid=839 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 08:41:36.027000 audit[839]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79b9 a2=1ed a3=0 items=2 ppid=822 pid=839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:41:36.027000 audit: CWD cwd="/" Feb 9 08:41:36.027000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:36.027000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:36.027000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 08:41:37.541000 audit: BPF prog-id=10 op=LOAD Feb 9 08:41:37.541000 audit: BPF prog-id=3 op=UNLOAD Feb 9 08:41:37.541000 audit: BPF prog-id=11 op=LOAD Feb 9 08:41:37.541000 audit: BPF prog-id=12 op=LOAD Feb 9 08:41:37.541000 audit: BPF prog-id=4 op=UNLOAD Feb 9 08:41:37.541000 audit: BPF prog-id=5 op=UNLOAD Feb 9 08:41:37.541000 audit: BPF prog-id=13 op=LOAD Feb 9 08:41:37.541000 audit: BPF prog-id=10 op=UNLOAD Feb 9 08:41:37.541000 audit: BPF prog-id=14 op=LOAD Feb 9 08:41:37.542000 audit: BPF prog-id=15 op=LOAD Feb 9 08:41:37.542000 audit: BPF prog-id=11 op=UNLOAD Feb 9 08:41:37.542000 audit: BPF prog-id=12 op=UNLOAD Feb 9 08:41:37.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:37.584000 audit: BPF prog-id=13 op=UNLOAD Feb 9 08:41:37.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:37.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:37.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:38.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:38.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:38.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:38.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.065000 audit: BPF prog-id=16 op=LOAD Feb 9 08:41:39.084000 audit: BPF prog-id=17 op=LOAD Feb 9 08:41:39.102000 audit: BPF prog-id=18 op=LOAD Feb 9 08:41:39.120000 audit: BPF prog-id=14 op=UNLOAD Feb 9 08:41:39.120000 audit: BPF prog-id=15 op=UNLOAD Feb 9 08:41:39.184000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 08:41:37.539771 systemd[1]: Queued start job for default target multi-user.target. 
Feb 9 08:41:36.000648 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 08:41:37.539779 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Feb 9 08:41:36.001081 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 08:41:37.543044 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 08:41:36.001094 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 08:41:36.001115 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 08:41:36.001122 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 08:41:36.001143 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 08:41:36.001151 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 08:41:36.001278 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 08:41:36.001303 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 08:41:36.001312 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 08:41:36.001782 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 08:41:36.001806 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 08:41:36.001819 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 08:41:36.001828 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 08:41:36.001839 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" 
path=/var/lib/torcx/store/3510.3.2 Feb 9 08:41:36.001847 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 08:41:37.199903 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:37Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 08:41:37.200047 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:37Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 08:41:37.200103 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:37Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 08:41:37.200406 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:37Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 08:41:37.200441 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:37Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 08:41:37.200477 /usr/lib/systemd/system-generators/torcx-generator[839]: time="2024-02-09T08:41:37Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 08:41:39.184000 audit[947]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc3fab50e0 a2=4000 a3=7ffc3fab517c items=0 ppid=1 pid=947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:41:39.184000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 08:41:39.265819 systemd[1]: Starting systemd-network-generator.service... Feb 9 08:41:39.292672 systemd[1]: Starting systemd-remount-fs.service... Feb 9 08:41:39.319700 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 08:41:39.362880 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 08:41:39.362920 systemd[1]: Stopped verity-setup.service. Feb 9 08:41:39.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.408689 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 08:41:39.427815 systemd[1]: Started systemd-journald.service. 
Feb 9 08:41:39.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.436154 systemd[1]: Mounted dev-hugepages.mount. Feb 9 08:41:39.442881 systemd[1]: Mounted dev-mqueue.mount. Feb 9 08:41:39.449873 systemd[1]: Mounted media.mount. Feb 9 08:41:39.456894 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 08:41:39.465867 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 08:41:39.473884 systemd[1]: Mounted tmp.mount. Feb 9 08:41:39.480874 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 08:41:39.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.488991 systemd[1]: Finished kmod-static-nodes.service. Feb 9 08:41:39.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.496979 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 08:41:39.497084 systemd[1]: Finished modprobe@configfs.service. Feb 9 08:41:39.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.506111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 08:41:39.506293 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 08:41:39.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.515259 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 08:41:39.515508 systemd[1]: Finished modprobe@drm.service. Feb 9 08:41:39.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.525578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 08:41:39.526009 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 08:41:39.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:41:39.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.535675 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 08:41:39.536102 systemd[1]: Finished modprobe@fuse.service. Feb 9 08:41:39.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.545573 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 08:41:39.545996 systemd[1]: Finished modprobe@loop.service. Feb 9 08:41:39.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.555502 systemd[1]: Finished systemd-modules-load.service. Feb 9 08:41:39.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.564430 systemd[1]: Finished systemd-network-generator.service. Feb 9 08:41:39.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.573435 systemd[1]: Finished systemd-remount-fs.service. Feb 9 08:41:39.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.582425 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 08:41:39.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.592108 systemd[1]: Reached target network-pre.target. Feb 9 08:41:39.603433 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 08:41:39.612325 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 08:41:39.619851 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 08:41:39.620883 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 08:41:39.628315 systemd[1]: Starting systemd-journal-flush.service... Feb 9 08:41:39.632059 systemd-journald[947]: Time spent on flushing to /var/log/journal/b7ee702377d04ce2a9dd2216583f9f7d is 11.658ms for 1301 entries. 
Feb 9 08:41:39.632059 systemd-journald[947]: System Journal (/var/log/journal/b7ee702377d04ce2a9dd2216583f9f7d) is 8.0M, max 195.6M, 187.6M free. Feb 9 08:41:39.672595 systemd-journald[947]: Received client request to flush runtime journal. Feb 9 08:41:39.644740 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 08:41:39.645328 systemd[1]: Starting systemd-random-seed.service... Feb 9 08:41:39.659767 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 08:41:39.660273 systemd[1]: Starting systemd-sysctl.service... Feb 9 08:41:39.667552 systemd[1]: Starting systemd-sysusers.service... Feb 9 08:41:39.675330 systemd[1]: Starting systemd-udev-settle.service... Feb 9 08:41:39.682943 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 08:41:39.690826 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 08:41:39.698853 systemd[1]: Finished systemd-journal-flush.service. Feb 9 08:41:39.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.706863 systemd[1]: Finished systemd-random-seed.service. Feb 9 08:41:39.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.714855 systemd[1]: Finished systemd-sysctl.service. Feb 9 08:41:39.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.722833 systemd[1]: Finished systemd-sysusers.service. Feb 9 08:41:39.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.731818 systemd[1]: Reached target first-boot-complete.target. Feb 9 08:41:39.740385 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 08:41:39.749650 udevadm[964]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 08:41:39.757939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 08:41:39.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.924888 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 08:41:39.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.934000 audit: BPF prog-id=19 op=LOAD Feb 9 08:41:39.935000 audit: BPF prog-id=20 op=LOAD Feb 9 08:41:39.935000 audit: BPF prog-id=6 op=UNLOAD Feb 9 08:41:39.935000 audit: BPF prog-id=7 op=UNLOAD Feb 9 08:41:39.935956 systemd[1]: Starting systemd-udevd.service... Feb 9 08:41:39.947646 systemd-udevd[967]: Using default interface naming scheme 'v252'. 
Feb 9 08:41:39.967775 systemd[1]: Started systemd-udevd.service. Feb 9 08:41:39.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:39.977791 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 9 08:41:39.977000 audit: BPF prog-id=21 op=LOAD Feb 9 08:41:39.978860 systemd[1]: Starting systemd-networkd.service... Feb 9 08:41:40.002000 audit: BPF prog-id=22 op=LOAD Feb 9 08:41:40.002000 audit: BPF prog-id=23 op=LOAD Feb 9 08:41:40.002000 audit: BPF prog-id=24 op=LOAD Feb 9 08:41:40.003703 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 08:41:40.003747 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 08:41:40.003882 systemd[1]: Starting systemd-userdbd.service... Feb 9 08:41:40.044344 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 08:41:40.044415 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 08:41:40.051122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 08:41:40.000000 audit[970]: AVC avc: denied { confidentiality } for pid=970 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 08:41:40.000000 audit[970]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56321762ead0 a1=4d8bc a2=7fa28d5b4bc5 a3=5 items=42 ppid=967 pid=970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:41:40.000000 audit: CWD cwd="/" Feb 9 08:41:40.000000 audit: PATH item=0 name=(null) inode=1039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=1 name=(null) inode=23599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=2 name=(null) inode=23599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=3 name=(null) inode=23600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=4 name=(null) inode=23599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=5 name=(null) inode=23601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=6 name=(null) inode=23599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=7 name=(null) inode=23602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 
audit: PATH item=8 name=(null) inode=23602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=9 name=(null) inode=23603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=10 name=(null) inode=23602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=11 name=(null) inode=23604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=12 name=(null) inode=23602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=13 name=(null) inode=23605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=14 name=(null) inode=23602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=15 name=(null) inode=23606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=16 name=(null) inode=23602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=17 name=(null) inode=23607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=18 name=(null) inode=23599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=19 name=(null) inode=23608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=20 name=(null) inode=23608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=21 name=(null) inode=23609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=22 name=(null) inode=23608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=23 name=(null) inode=23610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=24 name=(null) inode=23608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=25 name=(null) inode=23611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=26 name=(null) inode=23608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=27 name=(null) inode=23612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=28 name=(null) inode=23608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=29 name=(null) inode=23613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=30 name=(null) inode=23599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=31 name=(null) inode=23614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=32 name=(null) inode=23614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=33 name=(null) inode=23615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=34 name=(null) inode=23614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=35 name=(null) inode=23616 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=36 name=(null) inode=23614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=37 name=(null) inode=23617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=38 name=(null) inode=23614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=39 name=(null) inode=23618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=40 name=(null) inode=23614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 08:41:40.000000 audit: PATH item=41 name=(null) inode=23619 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:41:40.000000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 08:41:40.088715 kernel: ACPI: button: Power Button [PWRF] Feb 9 08:41:40.112989 systemd[1]: Started systemd-userdbd.service. Feb 9 08:41:40.113646 kernel: IPMI message handler: version 39.2 Feb 9 08:41:40.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:40.126647 kernel: ipmi device interface Feb 9 08:41:40.126688 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 08:41:40.126789 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 08:41:40.220314 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 08:41:40.220561 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 08:41:40.240650 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 08:41:40.240780 kernel: ipmi_si: IPMI System Interface driver Feb 9 08:41:40.278857 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 08:41:40.278964 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 08:41:40.318561 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 08:41:40.337693 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 08:41:40.337811 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 08:41:40.405369 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 08:41:40.405650 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 08:41:40.405679 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 08:41:40.405710 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 08:41:40.410301 systemd-networkd[1012]: bond0: netdev ready Feb 9 08:41:40.412297 systemd-networkd[1012]: lo: Link UP Feb 9 08:41:40.412300 systemd-networkd[1012]: lo: Gained carrier Feb 9 08:41:40.412602 systemd-networkd[1012]: Enumeration completed Feb 9 08:41:40.412677 systemd[1]: Started systemd-networkd.service. Feb 9 08:41:40.412911 systemd-networkd[1012]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 9 08:41:40.413605 systemd-networkd[1012]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a1:e9.network. Feb 9 08:41:40.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:40.487646 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Feb 9 08:41:40.517651 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Feb 9 08:41:40.517875 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Feb 9 08:41:40.583515 kernel: intel_rapl_common: Found RAPL domain package Feb 9 08:41:40.583563 kernel: intel_rapl_common: Found RAPL domain core Feb 9 08:41:40.583591 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 08:41:40.583714 kernel: intel_rapl_common: Found RAPL domain uncore Feb 9 08:41:40.615685 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Feb 9 08:41:40.615718 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 08:41:40.624710 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 08:41:40.624735 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 08:41:40.653646 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 08:41:40.709877 systemd-networkd[1012]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a1:e8.network. Feb 9 08:41:40.726645 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 08:41:40.749699 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 08:41:40.924759 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 9 08:41:40.947672 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Feb 9 08:41:40.950021 systemd-networkd[1012]: bond0: Link UP Feb 9 08:41:40.950286 systemd-networkd[1012]: enp2s0f1np1: Link UP Feb 9 08:41:40.950463 systemd-networkd[1012]: enp2s0f1np1: Gained carrier Feb 9 08:41:40.951780 systemd-networkd[1012]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:7e:a1:e8.network. Feb 9 08:41:40.988897 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 08:41:40.988924 kernel: bond0: active interface up! Feb 9 08:41:41.017686 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 9 08:41:41.099650 systemd-networkd[1012]: bond0: Gained carrier Feb 9 08:41:41.100467 systemd-networkd[1012]: enp2s0f0np0: Link UP Feb 9 08:41:41.100727 systemd-networkd[1012]: enp2s0f1np1: Link DOWN Feb 9 08:41:41.100730 systemd-networkd[1012]: enp2s0f1np1: Lost carrier Feb 9 08:41:41.117900 systemd[1]: Finished systemd-udev-settle.service. Feb 9 08:41:41.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:41.135535 systemd[1]: Starting lvm2-activation-early.service... Feb 9 08:41:41.142683 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.157219 lvm[1074]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 08:41:41.165704 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.188677 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.210698 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.218092 systemd[1]: Finished lvm2-activation-early.service. 
Feb 9 08:41:41.232682 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:41.248778 systemd[1]: Reached target cryptsetup.target. Feb 9 08:41:41.254673 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.272370 systemd[1]: Starting lvm2-activation.service... Feb 9 08:41:41.274571 lvm[1075]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 08:41:41.276643 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.297679 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.318643 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.339644 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.348167 systemd[1]: Finished lvm2-activation.service. Feb 9 08:41:41.360646 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:41.377802 systemd[1]: Reached target local-fs-pre.target. Feb 9 08:41:41.382649 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.399686 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 08:41:41.399701 systemd[1]: Reached target local-fs.target. Feb 9 08:41:41.402685 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.419751 systemd[1]: Reached target machines.target. Feb 9 08:41:41.423643 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.441413 systemd[1]: Starting ldconfig.service... Feb 9 08:41:41.443643 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.460215 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 08:41:41.460236 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 08:41:41.460842 systemd[1]: Starting systemd-boot-update.service... Feb 9 08:41:41.464645 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.481921 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 08:41:41.484683 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.502312 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 08:41:41.504196 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 08:41:41.504217 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. 
Feb 9 08:41:41.504696 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.504695 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 08:41:41.504883 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1077 (bootctl) Feb 9 08:41:41.505453 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 08:41:41.511794 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 08:41:41.512784 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 08:41:41.513723 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 08:41:41.524646 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.524976 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 08:41:41.525259 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 08:41:41.543644 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.543671 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 08:41:41.566656 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 08:41:41.573257 systemd-networkd[1012]: enp2s0f1np1: Link UP Feb 9 08:41:41.573261 systemd-networkd[1012]: enp2s0f1np1: Gained carrier Feb 9 08:41:41.574643 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Feb 9 08:41:41.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:41.589137 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 08:41:41.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:41.598831 systemd-networkd[1012]: enp2s0f0np0: Gained carrier Feb 9 08:41:41.659927 systemd-fsck[1085]: fsck.fat 4.2 (2021-01-31) Feb 9 08:41:41.659927 systemd-fsck[1085]: /dev/sdb1: 789 files, 115332/258078 clusters Feb 9 08:41:41.660615 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 08:41:41.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:41.671456 systemd[1]: Mounting boot.mount... Feb 9 08:41:41.695645 kernel: bond0: (slave enp2s0f1np1): link status up again after 100 ms Feb 9 08:41:41.712646 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 08:41:41.717340 systemd[1]: Mounted boot.mount. Feb 9 08:41:41.733927 systemd[1]: Finished systemd-boot-update.service. Feb 9 08:41:41.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:41:41.746758 ldconfig[1076]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 08:41:41.748838 systemd[1]: Finished ldconfig.service. Feb 9 08:41:41.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:41.763507 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 08:41:41.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:41:41.772506 systemd[1]: Starting audit-rules.service... Feb 9 08:41:41.780271 systemd[1]: Starting clean-ca-certificates.service... Feb 9 08:41:41.789239 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 08:41:41.788000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 08:41:41.788000 audit[1108]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5e1f47e0 a2=420 a3=0 items=0 ppid=1092 pid=1108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:41:41.788000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 08:41:41.789661 augenrules[1108]: No rules Feb 9 08:41:41.798575 systemd[1]: Starting systemd-resolved.service... Feb 9 08:41:41.806499 systemd[1]: Starting systemd-timesyncd.service... Feb 9 08:41:41.814160 systemd[1]: Starting systemd-update-utmp.service... Feb 9 08:41:41.821997 systemd[1]: Finished audit-rules.service. Feb 9 08:41:41.829808 systemd[1]: Finished clean-ca-certificates.service. Feb 9 08:41:41.837793 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 08:41:41.849400 systemd[1]: Starting systemd-update-done.service... Feb 9 08:41:41.855722 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 08:41:41.855960 systemd[1]: Finished systemd-update-utmp.service. Feb 9 08:41:41.863845 systemd[1]: Finished systemd-update-done.service. Feb 9 08:41:41.874845 systemd[1]: Started systemd-timesyncd.service. Feb 9 08:41:41.876388 systemd-resolved[1114]: Positive Trust Anchors: Feb 9 08:41:41.876395 systemd-resolved[1114]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 08:41:41.876415 systemd-resolved[1114]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 08:41:41.882894 systemd[1]: Reached target time-set.target. Feb 9 08:41:41.895100 systemd-resolved[1114]: Using system hostname 'ci-3510.3.2-a-98a543a057'. Feb 9 08:41:41.896202 systemd[1]: Started systemd-resolved.service. Feb 9 08:41:41.904745 systemd[1]: Reached target network.target. 
Feb 9 08:41:41.912734 systemd[1]: Reached target nss-lookup.target. Feb 9 08:41:41.920736 systemd[1]: Reached target sysinit.target. Feb 9 08:41:41.928753 systemd[1]: Started motdgen.path. Feb 9 08:41:41.935724 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 08:41:41.945785 systemd[1]: Started logrotate.timer. Feb 9 08:41:41.952746 systemd[1]: Started mdadm.timer. Feb 9 08:41:41.959708 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 08:41:41.967709 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 08:41:41.967725 systemd[1]: Reached target paths.target. Feb 9 08:41:41.974708 systemd[1]: Reached target timers.target. Feb 9 08:41:41.981830 systemd[1]: Listening on dbus.socket. Feb 9 08:41:41.989229 systemd[1]: Starting docker.socket... Feb 9 08:41:41.997137 systemd[1]: Listening on sshd.socket. Feb 9 08:41:42.003769 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 08:41:42.003985 systemd[1]: Listening on docker.socket. Feb 9 08:41:42.010752 systemd[1]: Reached target sockets.target. Feb 9 08:41:42.018712 systemd[1]: Reached target basic.target. Feb 9 08:41:42.025734 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 08:41:42.025748 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 08:41:42.026166 systemd[1]: Starting containerd.service... Feb 9 08:41:42.033106 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 08:41:42.042216 systemd[1]: Starting coreos-metadata.service... Feb 9 08:41:42.049353 systemd[1]: Starting dbus.service... Feb 9 08:41:42.055251 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 08:41:42.061710 jq[1128]: false Feb 9 08:41:42.062280 systemd[1]: Starting extend-filesystems.service... Feb 9 08:41:42.068696 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 08:41:42.069334 systemd[1]: Starting motdgen.service... Feb 9 08:41:42.070093 dbus-daemon[1127]: [system] SELinux support is enabled Feb 9 08:41:42.070555 extend-filesystems[1131]: Found sda Feb 9 08:41:42.087618 extend-filesystems[1131]: Found sdb Feb 9 08:41:42.087618 extend-filesystems[1131]: Found sdb1 Feb 9 08:41:42.087618 extend-filesystems[1131]: Found sdb2 Feb 9 08:41:42.087618 extend-filesystems[1131]: Found sdb3 Feb 9 08:41:42.087618 extend-filesystems[1131]: Found usr Feb 9 08:41:42.087618 extend-filesystems[1131]: Found sdb4 Feb 9 08:41:42.087618 extend-filesystems[1131]: Found sdb6 Feb 9 08:41:42.087618 extend-filesystems[1131]: Found sdb7 Feb 9 08:41:42.087618 extend-filesystems[1131]: Found sdb9 Feb 9 08:41:42.087618 extend-filesystems[1131]: Checking size of /dev/sdb9 Feb 9 08:41:42.087618 extend-filesystems[1131]: Resized partition /dev/sdb9 Feb 9 08:41:42.242719 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Feb 9 08:41:42.076248 systemd[1]: Starting prepare-cni-plugins.service... 
Feb 9 08:41:42.242815 coreos-metadata[1124]: Feb 09 08:41:42.091 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 08:41:42.242920 extend-filesystems[1146]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 08:41:42.257683 coreos-metadata[1123]: Feb 09 08:41:42.090 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 08:41:42.104595 systemd[1]: Starting prepare-critools.service... Feb 9 08:41:42.124297 systemd[1]: Starting prepare-helm.service... Feb 9 08:41:42.139209 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 08:41:42.146276 systemd[1]: Starting sshd-keygen.service... Feb 9 08:41:42.166079 systemd[1]: Starting systemd-logind.service... Feb 9 08:41:42.178686 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 08:41:42.258225 update_engine[1162]: I0209 08:41:42.234961 1162 main.cc:92] Flatcar Update Engine starting Feb 9 08:41:42.258225 update_engine[1162]: I0209 08:41:42.238492 1162 update_check_scheduler.cc:74] Next update check in 11m20s Feb 9 08:41:42.179241 systemd[1]: Starting tcsd.service... Feb 9 08:41:42.258400 jq[1163]: true Feb 9 08:41:42.186108 systemd-logind[1160]: Watching system buttons on /dev/input/event3 (Power Button) Feb 9 08:41:42.186118 systemd-logind[1160]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 08:41:42.258678 tar[1165]: ./ Feb 9 08:41:42.258678 tar[1165]: ./loopback Feb 9 08:41:42.186127 systemd-logind[1160]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 9 08:41:42.186268 systemd-logind[1160]: New seat seat0. Feb 9 08:41:42.191077 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 08:41:42.191446 systemd[1]: Starting update-engine.service... Feb 9 08:41:42.205249 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 08:41:42.220039 systemd[1]: Started dbus.service. Feb 9 08:41:42.236314 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 08:41:42.236404 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 08:41:42.236553 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 08:41:42.236632 systemd[1]: Finished motdgen.service. Feb 9 08:41:42.250708 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 08:41:42.250791 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 08:41:42.268940 tar[1166]: crictl Feb 9 08:41:42.269202 jq[1171]: false Feb 9 08:41:42.269493 dbus-daemon[1127]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 08:41:42.269925 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Feb 9 08:41:42.270017 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service being skipped. Feb 9 08:41:42.270202 tar[1167]: linux-amd64/helm Feb 9 08:41:42.275614 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 08:41:42.275741 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 9 08:41:42.275828 systemd[1]: Started systemd-logind.service. 
Feb 9 08:41:42.277874 env[1172]: time="2024-02-09T08:41:42.277851792Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 08:41:42.279818 tar[1165]: ./bandwidth Feb 9 08:41:42.286361 env[1172]: time="2024-02-09T08:41:42.286342204Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 08:41:42.286433 env[1172]: time="2024-02-09T08:41:42.286423371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 08:41:42.287101 env[1172]: time="2024-02-09T08:41:42.287085072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 08:41:42.287138 env[1172]: time="2024-02-09T08:41:42.287100198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 08:41:42.287238 env[1172]: time="2024-02-09T08:41:42.287226161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 08:41:42.287283 env[1172]: time="2024-02-09T08:41:42.287237765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 08:41:42.287283 env[1172]: time="2024-02-09T08:41:42.287248719Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 08:41:42.287283 env[1172]: time="2024-02-09T08:41:42.287259033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 08:41:42.287368 env[1172]: time="2024-02-09T08:41:42.287322899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 08:41:42.287485 env[1172]: time="2024-02-09T08:41:42.287475241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 08:41:42.287564 env[1172]: time="2024-02-09T08:41:42.287552059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 08:41:42.287596 env[1172]: time="2024-02-09T08:41:42.287563934Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 08:41:42.287625 env[1172]: time="2024-02-09T08:41:42.287601148Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 08:41:42.287625 env[1172]: time="2024-02-09T08:41:42.287612886Z" level=info msg="metadata content store policy set" policy=shared Feb 9 08:41:42.288010 systemd[1]: Started update-engine.service. Feb 9 08:41:42.297369 systemd[1]: Started locksmithd.service. Feb 9 08:41:42.303774 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 08:41:42.303853 systemd[1]: Reached target system-config.target. 
Feb 9 08:41:42.306067 env[1172]: time="2024-02-09T08:41:42.306044008Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 08:41:42.306112 env[1172]: time="2024-02-09T08:41:42.306071406Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 08:41:42.306112 env[1172]: time="2024-02-09T08:41:42.306080801Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 08:41:42.306112 env[1172]: time="2024-02-09T08:41:42.306101697Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 08:41:42.306112 env[1172]: time="2024-02-09T08:41:42.306110171Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 08:41:42.306211 env[1172]: time="2024-02-09T08:41:42.306117596Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 08:41:42.306211 env[1172]: time="2024-02-09T08:41:42.306125453Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 08:41:42.306211 env[1172]: time="2024-02-09T08:41:42.306148370Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 08:41:42.306211 env[1172]: time="2024-02-09T08:41:42.306157553Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 08:41:42.306211 env[1172]: time="2024-02-09T08:41:42.306165504Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 08:41:42.306211 env[1172]: time="2024-02-09T08:41:42.306173058Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 08:41:42.306211 env[1172]: time="2024-02-09T08:41:42.306179729Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 08:41:42.306369 env[1172]: time="2024-02-09T08:41:42.306234082Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 08:41:42.306369 env[1172]: time="2024-02-09T08:41:42.306279500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 08:41:42.306433 env[1172]: time="2024-02-09T08:41:42.306418616Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 08:41:42.306460 env[1172]: time="2024-02-09T08:41:42.306442415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306460 env[1172]: time="2024-02-09T08:41:42.306450921Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 08:41:42.306507 env[1172]: time="2024-02-09T08:41:42.306479657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306507 env[1172]: time="2024-02-09T08:41:42.306487902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306507 env[1172]: time="2024-02-09T08:41:42.306495218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 9 08:41:42.306507 env[1172]: time="2024-02-09T08:41:42.306501732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306601 env[1172]: time="2024-02-09T08:41:42.306508430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306601 env[1172]: time="2024-02-09T08:41:42.306515038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306601 env[1172]: time="2024-02-09T08:41:42.306521128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306601 env[1172]: time="2024-02-09T08:41:42.306527239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306601 env[1172]: time="2024-02-09T08:41:42.306534931Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 08:41:42.306722 env[1172]: time="2024-02-09T08:41:42.306605234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306722 env[1172]: time="2024-02-09T08:41:42.306614309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306722 env[1172]: time="2024-02-09T08:41:42.306622360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 08:41:42.306722 env[1172]: time="2024-02-09T08:41:42.306629071Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 08:41:42.306722 env[1172]: time="2024-02-09T08:41:42.306637498Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 08:41:42.306722 env[1172]: time="2024-02-09T08:41:42.306649190Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 08:41:42.306722 env[1172]: time="2024-02-09T08:41:42.306658764Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 08:41:42.306722 env[1172]: time="2024-02-09T08:41:42.306679944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 08:41:42.306908 env[1172]: time="2024-02-09T08:41:42.306792744Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 08:41:42.306908 env[1172]: time="2024-02-09T08:41:42.306822907Z" level=info msg="Connect containerd service" Feb 9 08:41:42.306908 env[1172]: time="2024-02-09T08:41:42.306839700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307108481Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307193623Z" level=info msg="Start subscribing containerd event" Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307223601Z" level=info msg="Start recovering state" Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307237496Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307256470Z" level=info msg="Start event monitor" Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307259435Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307269605Z" level=info msg="Start snapshots syncer" Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307275100Z" level=info msg="Start cni network conf syncer for default" Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307279458Z" level=info msg="Start streaming server" Feb 9 08:41:42.310272 env[1172]: time="2024-02-09T08:41:42.307283785Z" level=info msg="containerd successfully booted in 0.029817s" Feb 9 08:41:42.311786 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 08:41:42.311902 systemd[1]: Reached target user-config.target. Feb 9 08:41:42.316018 tar[1165]: ./ptp Feb 9 08:41:42.321310 systemd[1]: Started containerd.service. Feb 9 08:41:42.344508 tar[1165]: ./vlan Feb 9 08:41:42.371752 tar[1165]: ./host-device Feb 9 08:41:42.380227 locksmithd[1190]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 08:41:42.398810 tar[1165]: ./tuning Feb 9 08:41:42.422522 tar[1165]: ./vrf Feb 9 08:41:42.447602 tar[1165]: ./sbr Feb 9 08:41:42.471376 tar[1165]: ./tap Feb 9 08:41:42.498449 tar[1165]: ./dhcp Feb 9 08:41:42.572303 tar[1167]: linux-amd64/LICENSE Feb 9 08:41:42.572399 tar[1167]: linux-amd64/README.md Feb 9 08:41:42.573017 tar[1165]: ./static Feb 9 08:41:42.574360 systemd[1]: Finished prepare-helm.service. Feb 9 08:41:42.592568 tar[1165]: ./firewall Feb 9 08:41:42.610710 systemd[1]: Finished prepare-critools.service. Feb 9 08:41:42.622172 tar[1165]: ./macvlan Feb 9 08:41:42.631711 systemd-networkd[1012]: bond0: Gained IPv6LL Feb 9 08:41:42.631881 systemd-timesyncd[1115]: Network configuration changed, trying to establish connection. Feb 9 08:41:42.649499 tar[1165]: ./dummy Feb 9 08:41:42.676067 tar[1165]: ./bridge Feb 9 08:41:42.704542 tar[1165]: ./ipvlan Feb 9 08:41:42.730910 tar[1165]: ./portmap Feb 9 08:41:42.758136 tar[1165]: ./host-local Feb 9 08:41:42.759919 systemd-timesyncd[1115]: Network configuration changed, trying to establish connection. Feb 9 08:41:42.759969 systemd-timesyncd[1115]: Network configuration changed, trying to establish connection. Feb 9 08:41:42.783715 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 08:41:42.789710 sshd_keygen[1159]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 08:41:42.801045 systemd[1]: Finished sshd-keygen.service. Feb 9 08:41:42.809492 systemd[1]: Starting issuegen.service... Feb 9 08:41:42.821923 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 08:41:42.821998 systemd[1]: Finished issuegen.service. Feb 9 08:41:42.824695 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Feb 9 08:41:42.831676 systemd[1]: Starting systemd-user-sessions.service... Feb 9 08:41:42.847290 systemd[1]: Finished systemd-user-sessions.service. Feb 9 08:41:42.861667 extend-filesystems[1146]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Feb 9 08:41:42.861667 extend-filesystems[1146]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 08:41:42.861667 extend-filesystems[1146]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Feb 9 08:41:42.859244 systemd[1]: Started getty@tty1.service. Feb 9 08:41:42.920127 extend-filesystems[1131]: Resized filesystem in /dev/sdb9 Feb 9 08:41:42.947950 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 08:41:42.868038 systemd[1]: Started serial-getty@ttyS1.service. 
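The "failed to load cni during init" error a few entries up is expected at this stage: containerd's CRI plugin looks for a network config in /etc/cni/net.d, and the CNI plugin binaries were only just unpacked (the tar ./bridge, ./portmap, etc. entries above). A minimal sketch, assuming a standard conflist layout, of the kind of check the plugin performs; the struct and output format here are illustrative, not containerd's actual code:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    // netConf mirrors the few CNI config fields this demo needs.
    type netConf struct {
        CNIVersion string `json:"cniVersion"`
        Name       string `json:"name"`
    }

    func main() {
        confDir := "/etc/cni/net.d" // directory named in the containerd error above

        entries, err := os.ReadDir(confDir)
        if err != nil || len(entries) == 0 {
            // This is the state the log shows: no network config found yet.
            log.Fatalf("no network config found in %s: %v", confDir, err)
        }
        for _, e := range entries {
            data, err := os.ReadFile(filepath.Join(confDir, e.Name()))
            if err != nil {
                log.Fatal(err)
            }
            var c netConf
            if err := json.Unmarshal(data, &c); err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s: cniVersion=%s name=%s\n", e.Name(), c.CNIVersion, c.Name)
        }
    }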
Feb 9 08:41:42.885326 systemd[1]: Reached target getty.target. Feb 9 08:41:42.904074 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 08:41:42.904457 systemd[1]: Finished extend-filesystems.service. Feb 9 08:41:47.888627 login[1215]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 08:41:47.894774 login[1213]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 08:41:47.896845 systemd-logind[1160]: New session 1 of user core. Feb 9 08:41:47.897326 systemd[1]: Created slice user-500.slice. Feb 9 08:41:47.897908 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 08:41:47.899108 systemd-logind[1160]: New session 2 of user core. Feb 9 08:41:47.902808 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 08:41:47.903503 systemd[1]: Starting user@500.service... Feb 9 08:41:47.905178 (systemd)[1219]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:41:47.976239 systemd[1219]: Queued start job for default target default.target. Feb 9 08:41:47.976882 systemd[1219]: Reached target paths.target. Feb 9 08:41:47.976915 systemd[1219]: Reached target sockets.target. Feb 9 08:41:47.976938 systemd[1219]: Reached target timers.target. Feb 9 08:41:47.976960 systemd[1219]: Reached target basic.target. Feb 9 08:41:47.977015 systemd[1219]: Reached target default.target. Feb 9 08:41:47.977053 systemd[1219]: Startup finished in 69ms. Feb 9 08:41:47.977125 systemd[1]: Started user@500.service. Feb 9 08:41:47.978542 systemd[1]: Started session-1.scope. Feb 9 08:41:47.979480 systemd[1]: Started session-2.scope. Feb 9 08:41:48.265141 coreos-metadata[1123]: Feb 09 08:41:48.264 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 08:41:48.265983 coreos-metadata[1124]: Feb 09 08:41:48.264 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 08:41:48.934712 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Feb 9 08:41:48.934875 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Feb 9 08:41:49.265482 coreos-metadata[1123]: Feb 09 08:41:49.265 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 08:41:49.266248 coreos-metadata[1124]: Feb 09 08:41:49.265 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 08:41:49.295437 coreos-metadata[1124]: Feb 09 08:41:49.295 INFO Fetch successful Feb 9 08:41:49.296293 coreos-metadata[1123]: Feb 09 08:41:49.296 INFO Fetch successful Feb 9 08:41:49.317009 systemd[1]: Finished coreos-metadata.service. Feb 9 08:41:49.317772 systemd[1]: Started packet-phone-home.service. Feb 9 08:41:49.320485 unknown[1123]: wrote ssh authorized keys file for user: core Feb 9 08:41:49.324238 curl[1241]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 08:41:49.324341 curl[1241]: Dload Upload Total Spent Left Speed Feb 9 08:41:49.337480 update-ssh-keys[1242]: Updated "/home/core/.ssh/authorized_keys" Feb 9 08:41:49.337629 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 08:41:49.337840 systemd[1]: Reached target multi-user.target. Feb 9 08:41:49.338400 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
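The coreos-metadata fetch above fails once with a DNS error and succeeds on attempt #2 once networking settles. A minimal sketch of that fixed-interval retry pattern against the same endpoint; the attempt count and delay are illustrative, not the agent's actual policy:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        url := "https://metadata.packet.net/metadata" // endpoint from the log
        var body []byte

        // Retry a few times, mirroring the attempt #1 / attempt #2
        // pattern in the log entries above.
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := http.Get(url)
            if err == nil {
                body, err = io.ReadAll(resp.Body)
                resp.Body.Close()
                if err == nil && resp.StatusCode == http.StatusOK {
                    break
                }
            }
            log.Printf("attempt #%d failed: %v", attempt, err)
            time.Sleep(time.Second)
        }
        fmt.Printf("fetched %d bytes of metadata\n", len(body))
    }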
Feb 9 08:41:49.342461 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 08:41:49.342531 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 08:41:49.342605 systemd[1]: Startup finished in 2.484s (kernel) + 6.523s (initrd) + 13.986s (userspace) = 22.994s. Feb 9 08:41:49.529168 curl[1241]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 9 08:41:49.531609 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 08:41:50.334997 systemd[1]: Created slice system-sshd.slice. Feb 9 08:41:50.335591 systemd[1]: Started sshd@0-139.178.90.113:22-147.75.109.163:56944.service. Feb 9 08:41:50.377562 sshd[1246]: Accepted publickey for core from 147.75.109.163 port 56944 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:41:50.378437 sshd[1246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:41:50.381373 systemd-logind[1160]: New session 3 of user core. Feb 9 08:41:50.381965 systemd[1]: Started session-3.scope. Feb 9 08:41:50.433608 systemd[1]: Started sshd@1-139.178.90.113:22-147.75.109.163:56956.service. Feb 9 08:41:50.464104 sshd[1251]: Accepted publickey for core from 147.75.109.163 port 56956 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:41:50.464785 sshd[1251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:41:50.466973 systemd-logind[1160]: New session 4 of user core. Feb 9 08:41:50.467423 systemd[1]: Started session-4.scope. Feb 9 08:41:50.519198 sshd[1251]: pam_unix(sshd:session): session closed for user core Feb 9 08:41:50.520624 systemd[1]: sshd@1-139.178.90.113:22-147.75.109.163:56956.service: Deactivated successfully. Feb 9 08:41:50.520969 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 08:41:50.521294 systemd-logind[1160]: Session 4 logged out. Waiting for processes to exit. Feb 9 08:41:50.521819 systemd[1]: Started sshd@2-139.178.90.113:22-147.75.109.163:56968.service. Feb 9 08:41:50.522311 systemd-logind[1160]: Removed session 4. Feb 9 08:41:50.553473 sshd[1257]: Accepted publickey for core from 147.75.109.163 port 56968 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:41:50.554361 sshd[1257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:41:50.557312 systemd-logind[1160]: New session 5 of user core. Feb 9 08:41:50.557894 systemd[1]: Started session-5.scope. Feb 9 08:41:50.609391 sshd[1257]: pam_unix(sshd:session): session closed for user core Feb 9 08:41:50.610842 systemd[1]: sshd@2-139.178.90.113:22-147.75.109.163:56968.service: Deactivated successfully. Feb 9 08:41:50.611132 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 08:41:50.611424 systemd-logind[1160]: Session 5 logged out. Waiting for processes to exit. Feb 9 08:41:50.611980 systemd[1]: Started sshd@3-139.178.90.113:22-147.75.109.163:56976.service. Feb 9 08:41:50.612451 systemd-logind[1160]: Removed session 5. Feb 9 08:41:50.642923 sshd[1263]: Accepted publickey for core from 147.75.109.163 port 56976 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:41:50.643779 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:41:50.646532 systemd-logind[1160]: New session 6 of user core. Feb 9 08:41:50.647147 systemd[1]: Started session-6.scope.
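Each "Accepted publickey" entry above logs the client key as an SHA256 fingerprint. A small sketch, using golang.org/x/crypto/ssh, that produces the same fingerprint format plus the authorized_keys form that update-ssh-keys wrote earlier; it generates a throwaway ed25519 key, since the RSA key from this log is of course not available:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate a throwaway key pair just for demonstration.
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            log.Fatal(err)
        }
        // Same "SHA256:..." format sshd logs in the Accepted publickey lines.
        fmt.Println(ssh.FingerprintSHA256(sshPub))
        // And the authorized_keys form written to /home/core/.ssh/authorized_keys.
        fmt.Print(string(ssh.MarshalAuthorizedKey(sshPub)))
    }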
Feb 9 08:41:50.711449 sshd[1263]: pam_unix(sshd:session): session closed for user core Feb 9 08:41:50.717727 systemd[1]: sshd@3-139.178.90.113:22-147.75.109.163:56976.service: Deactivated successfully. Feb 9 08:41:50.719249 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 08:41:50.720878 systemd-logind[1160]: Session 6 logged out. Waiting for processes to exit. Feb 9 08:41:50.723375 systemd[1]: Started sshd@4-139.178.90.113:22-147.75.109.163:56990.service. Feb 9 08:41:50.725789 systemd-logind[1160]: Removed session 6. Feb 9 08:41:50.758008 sshd[1269]: Accepted publickey for core from 147.75.109.163 port 56990 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:41:50.758690 sshd[1269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:41:50.761009 systemd-logind[1160]: New session 7 of user core. Feb 9 08:41:50.761422 systemd[1]: Started session-7.scope. Feb 9 08:41:50.829892 sudo[1272]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 08:41:50.830438 sudo[1272]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 08:41:54.936474 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 08:41:54.940888 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 08:41:54.941085 systemd[1]: Reached target network-online.target. Feb 9 08:41:54.941843 systemd[1]: Starting docker.service... Feb 9 08:41:54.963776 env[1293]: time="2024-02-09T08:41:54.963718861Z" level=info msg="Starting up" Feb 9 08:41:54.964622 env[1293]: time="2024-02-09T08:41:54.964589311Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 08:41:54.964622 env[1293]: time="2024-02-09T08:41:54.964598342Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 08:41:54.964698 env[1293]: time="2024-02-09T08:41:54.964625086Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 08:41:54.964698 env[1293]: time="2024-02-09T08:41:54.964631592Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 08:41:54.966043 env[1293]: time="2024-02-09T08:41:54.966004015Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 08:41:54.966043 env[1293]: time="2024-02-09T08:41:54.966013154Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 08:41:54.966043 env[1293]: time="2024-02-09T08:41:54.966020381Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 08:41:54.966043 env[1293]: time="2024-02-09T08:41:54.966026697Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 08:41:54.977187 env[1293]: time="2024-02-09T08:41:54.977148295Z" level=info msg="Loading containers: start." Feb 9 08:41:55.090724 kernel: Initializing XFRM netlink socket Feb 9 08:41:55.163757 env[1293]: time="2024-02-09T08:41:55.163735226Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 08:41:55.164570 systemd-timesyncd[1115]: Network configuration changed, trying to establish connection. Feb 9 08:41:55.227701 systemd-networkd[1012]: docker0: Link UP Feb 9 08:41:55.235676 env[1293]: time="2024-02-09T08:41:55.235611833Z" level=info msg="Loading containers: done." 
Feb 9 08:41:55.247078 env[1293]: time="2024-02-09T08:41:55.246999458Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 08:41:55.247304 env[1293]: time="2024-02-09T08:41:55.247253193Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 08:41:55.247492 env[1293]: time="2024-02-09T08:41:55.247410972Z" level=info msg="Daemon has completed initialization" Feb 9 08:41:55.250562 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1921355896-merged.mount: Deactivated successfully. Feb 9 08:41:55.270263 systemd[1]: Started docker.service. Feb 9 08:41:55.284702 env[1293]: time="2024-02-09T08:41:55.284550386Z" level=info msg="API listen on /run/docker.sock" Feb 9 08:41:55.325827 systemd[1]: Reloading. Feb 9 08:41:55.387406 /usr/lib/systemd/system-generators/torcx-generator[1451]: time="2024-02-09T08:41:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 08:41:55.387431 /usr/lib/systemd/system-generators/torcx-generator[1451]: time="2024-02-09T08:41:55Z" level=info msg="torcx already run" Feb 9 08:41:55.465753 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 08:41:55.465764 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 08:41:55.481612 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 08:41:55.533529 systemd[1]: Started kubelet.service. Feb 9 08:41:56.142050 systemd-resolved[1114]: Clock change detected. Flushing caches. Feb 9 08:41:56.142237 systemd-timesyncd[1115]: Contacted time server [2604:8800:52:81:38:229:52:9]:123 (2.flatcar.pool.ntp.org). Feb 9 08:41:56.142369 systemd-timesyncd[1115]: Initial clock synchronization to Fri 2024-02-09 08:41:56.141901 UTC. Feb 9 08:41:56.569505 kubelet[1506]: E0209 08:41:56.569246 1506 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 08:41:56.574414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 08:41:56.574818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 08:41:57.248864 env[1172]: time="2024-02-09T08:41:57.248727047Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 9 08:41:57.925777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021916189.mount: Deactivated successfully. 
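With "API listen on /run/docker.sock", the Docker daemon is now usable. A minimal sketch that pings it with the official Go client (which defaults to unix:///var/run/docker.sock, the same socket); error handling is kept to the bare minimum:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        // Ping confirms the daemon answered on the socket it just announced.
        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("docker API version:", ping.APIVersion)
    }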
Feb 9 08:42:00.279894 env[1172]: time="2024-02-09T08:42:00.279824662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:00.280451 env[1172]: time="2024-02-09T08:42:00.280417358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:00.282315 env[1172]: time="2024-02-09T08:42:00.282276827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:00.283395 env[1172]: time="2024-02-09T08:42:00.283349115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:00.283846 env[1172]: time="2024-02-09T08:42:00.283795179Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 9 08:42:00.289353 env[1172]: time="2024-02-09T08:42:00.289325640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 9 08:42:02.917765 env[1172]: time="2024-02-09T08:42:02.917717944Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:02.918217 env[1172]: time="2024-02-09T08:42:02.918205228Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:02.919145 env[1172]: time="2024-02-09T08:42:02.919098678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:02.920364 env[1172]: time="2024-02-09T08:42:02.920327974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:02.920774 env[1172]: time="2024-02-09T08:42:02.920739714Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 9 08:42:02.928382 env[1172]: time="2024-02-09T08:42:02.928366288Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 9 08:42:04.550995 env[1172]: time="2024-02-09T08:42:04.550945260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:04.551565 env[1172]: time="2024-02-09T08:42:04.551509333Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:04.552657 env[1172]: 
time="2024-02-09T08:42:04.552609420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:04.553532 env[1172]: time="2024-02-09T08:42:04.553487339Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:04.553988 env[1172]: time="2024-02-09T08:42:04.553936512Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 9 08:42:04.562947 env[1172]: time="2024-02-09T08:42:04.562864994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 08:42:05.853322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882396341.mount: Deactivated successfully. Feb 9 08:42:06.183162 env[1172]: time="2024-02-09T08:42:06.183080285Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:06.183841 env[1172]: time="2024-02-09T08:42:06.183816340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:06.184859 env[1172]: time="2024-02-09T08:42:06.184846808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:06.185397 env[1172]: time="2024-02-09T08:42:06.185387087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:06.185725 env[1172]: time="2024-02-09T08:42:06.185710246Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 9 08:42:06.191748 env[1172]: time="2024-02-09T08:42:06.191730140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 08:42:06.766769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 08:42:06.766902 systemd[1]: Stopped kubelet.service. Feb 9 08:42:06.767747 systemd[1]: Started kubelet.service. Feb 9 08:42:06.772785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523280142.mount: Deactivated successfully. 
Feb 9 08:42:06.775522 env[1172]: time="2024-02-09T08:42:06.775497116Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:06.776913 env[1172]: time="2024-02-09T08:42:06.776900856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:06.777708 env[1172]: time="2024-02-09T08:42:06.777696955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:06.778590 env[1172]: time="2024-02-09T08:42:06.778579249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:06.779212 env[1172]: time="2024-02-09T08:42:06.779200809Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 08:42:06.786144 env[1172]: time="2024-02-09T08:42:06.786073083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 9 08:42:06.799468 kubelet[1597]: E0209 08:42:06.799418 1597 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 08:42:06.801541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 08:42:06.801683 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 08:42:07.483282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860868499.mount: Deactivated successfully. 
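Both kubelet crashes so far (here and at 08:41:56) are the same condition: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps restarting the unit until kubeadm writes that file. The snippet below parses a hypothetical minimal stand-in for it, just to show the file's shape; the real file is generated by kubeadm and is considerably longer:

    package main

    import (
        "fmt"
        "log"

        "gopkg.in/yaml.v3"
    )

    // Hypothetical minimal stand-in for /var/lib/kubelet/config.yaml;
    // kubeadm generates the real one during init/join.
    const kubeletConfig = `
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
        var cfg map[string]interface{}
        if err := yaml.Unmarshal([]byte(kubeletConfig), &cfg); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("parsed %v (%v), staticPodPath=%v\n",
            cfg["kind"], cfg["apiVersion"], cfg["staticPodPath"])
    }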
Feb 9 08:42:10.459778 env[1172]: time="2024-02-09T08:42:10.459713982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:10.460409 env[1172]: time="2024-02-09T08:42:10.460369109Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:10.461292 env[1172]: time="2024-02-09T08:42:10.461247619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:10.462494 env[1172]: time="2024-02-09T08:42:10.462454643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:10.462950 env[1172]: time="2024-02-09T08:42:10.462893370Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 9 08:42:10.468157 env[1172]: time="2024-02-09T08:42:10.468095490Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 08:42:11.033353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819898166.mount: Deactivated successfully. Feb 9 08:42:11.505659 env[1172]: time="2024-02-09T08:42:11.505530246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:11.506091 env[1172]: time="2024-02-09T08:42:11.506050378Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:11.506833 env[1172]: time="2024-02-09T08:42:11.506791927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:11.507499 env[1172]: time="2024-02-09T08:42:11.507458956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:42:11.507828 env[1172]: time="2024-02-09T08:42:11.507786896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 08:42:12.870900 systemd[1]: Stopped kubelet.service. Feb 9 08:42:12.880664 systemd[1]: Reloading. 
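Each image above is recorded twice: once by tag (registry.k8s.io/etcd:3.5.7-0) and once by an immutable sha256 digest. A short sketch with the OCI go-digest library showing what such a reference is, reusing the etcd digest from the log:

    package main

    import (
        "fmt"
        "log"

        "github.com/opencontainers/go-digest"
    )

    func main() {
        // A digest reference is just "algorithm:hex"; Parse validates the format.
        d, err := digest.Parse("sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("algorithm:", d.Algorithm(), "hex:", d.Hex())

        // Content addressing: the digest of the same bytes is always reproducible.
        fmt.Println(digest.FromBytes([]byte("hello")) == digest.FromBytes([]byte("hello")))
    }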
Feb 9 08:42:12.915162 /usr/lib/systemd/system-generators/torcx-generator[1762]: time="2024-02-09T08:42:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 08:42:12.915180 /usr/lib/systemd/system-generators/torcx-generator[1762]: time="2024-02-09T08:42:12Z" level=info msg="torcx already run" Feb 9 08:42:12.966896 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 08:42:12.966905 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 08:42:12.980147 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 08:42:13.035079 systemd[1]: Started kubelet.service. Feb 9 08:42:13.058205 kubelet[1819]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 08:42:13.058205 kubelet[1819]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 08:42:13.058205 kubelet[1819]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 08:42:13.058435 kubelet[1819]: I0209 08:42:13.058228 1819 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 08:42:13.196479 kubelet[1819]: I0209 08:42:13.196389 1819 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 08:42:13.196479 kubelet[1819]: I0209 08:42:13.196404 1819 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 08:42:13.196586 kubelet[1819]: I0209 08:42:13.196549 1819 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 08:42:13.207321 kubelet[1819]: I0209 08:42:13.207283 1819 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 08:42:13.209131 kubelet[1819]: E0209 08:42:13.209095 1819 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.90.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 08:42:13.229148 kubelet[1819]: I0209 08:42:13.229140 1819 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 08:42:13.230167 kubelet[1819]: I0209 08:42:13.230088 1819 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 08:42:13.230197 kubelet[1819]: I0209 08:42:13.230178 1819 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 08:42:13.230197 kubelet[1819]: I0209 08:42:13.230188 1819 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 08:42:13.230197 kubelet[1819]: I0209 08:42:13.230194 1819 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 08:42:13.230543 kubelet[1819]: I0209 08:42:13.230516 1819 state_mem.go:36] "Initialized new in-memory state store" Feb 9 08:42:13.237805 kubelet[1819]: W0209 08:42:13.237763 1819 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.90.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-98a543a057&limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 08:42:13.237805 kubelet[1819]: E0209 08:42:13.237792 1819 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.90.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-98a543a057&limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 08:42:13.238336 kubelet[1819]: I0209 08:42:13.238299 1819 kubelet.go:405] "Attempting to sync node with API server" Feb 9 08:42:13.238336 kubelet[1819]: I0209 08:42:13.238313 1819 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 08:42:13.238830 kubelet[1819]: I0209 08:42:13.238811 1819 kubelet.go:309] "Adding apiserver pod source" Feb 9 08:42:13.238881 kubelet[1819]: I0209 08:42:13.238836 1819 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 08:42:13.239102 kubelet[1819]: W0209 08:42:13.239080 1819 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.90.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 08:42:13.239134 kubelet[1819]: E0209 
08:42:13.239107 1819 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.90.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 08:42:13.240930 kubelet[1819]: I0209 08:42:13.240918 1819 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 08:42:13.242685 kubelet[1819]: W0209 08:42:13.242591 1819 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 08:42:13.244194 kubelet[1819]: I0209 08:42:13.244183 1819 server.go:1168] "Started kubelet" Feb 9 08:42:13.244252 kubelet[1819]: I0209 08:42:13.244243 1819 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 08:42:13.244336 kubelet[1819]: I0209 08:42:13.244327 1819 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 08:42:13.244766 kubelet[1819]: I0209 08:42:13.244735 1819 server.go:461] "Adding debug handlers to kubelet server" Feb 9 08:42:13.254101 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 08:42:13.254190 kubelet[1819]: I0209 08:42:13.254180 1819 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 08:42:13.254234 kubelet[1819]: I0209 08:42:13.254226 1819 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 08:42:13.254266 kubelet[1819]: I0209 08:42:13.254249 1819 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 08:42:13.256608 kubelet[1819]: E0209 08:42:13.256591 1819 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 08:42:13.256705 kubelet[1819]: E0209 08:42:13.256694 1819 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 08:42:13.256799 kubelet[1819]: E0209 08:42:13.256789 1819 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-98a543a057?timeout=10s\": dial tcp 139.178.90.113:6443: connect: connection refused" interval="200ms" Feb 9 08:42:13.256999 kubelet[1819]: W0209 08:42:13.256971 1819 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.90.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 08:42:13.257049 kubelet[1819]: E0209 08:42:13.257007 1819 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.90.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 08:42:13.257262 kubelet[1819]: E0209 08:42:13.257186 1819 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-98a543a057.17b225387b51718b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-98a543a057", UID:"ci-3510.3.2-a-98a543a057", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-98a543a057"}, FirstTimestamp:time.Date(2024, time.February, 9, 8, 42, 13, 244170635, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 8, 42, 13, 244170635, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://139.178.90.113:6443/api/v1/namespaces/default/events": dial tcp 139.178.90.113:6443: connect: connection refused'(may retry after sleeping) Feb 9 08:42:13.263989 kubelet[1819]: I0209 08:42:13.263977 1819 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 08:42:13.264464 kubelet[1819]: I0209 08:42:13.264457 1819 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 08:42:13.264507 kubelet[1819]: I0209 08:42:13.264473 1819 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 08:42:13.264507 kubelet[1819]: I0209 08:42:13.264489 1819 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 08:42:13.264559 kubelet[1819]: E0209 08:42:13.264525 1819 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 08:42:13.265347 kubelet[1819]: W0209 08:42:13.265322 1819 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.90.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 08:42:13.265386 kubelet[1819]: E0209 08:42:13.265353 1819 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.90.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 08:42:13.272991 kubelet[1819]: I0209 08:42:13.272955 1819 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 08:42:13.272991 kubelet[1819]: I0209 08:42:13.272963 1819 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 08:42:13.272991 kubelet[1819]: I0209 08:42:13.272973 1819 state_mem.go:36] "Initialized new in-memory state store" Feb 9 08:42:13.274169 kubelet[1819]: I0209 08:42:13.274095 1819 policy_none.go:49] "None policy: Start" Feb 9 08:42:13.275262 kubelet[1819]: I0209 08:42:13.275186 1819 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 08:42:13.275262 kubelet[1819]: I0209 08:42:13.275236 1819 state_mem.go:35] "Initializing new in-memory state store" Feb 9 08:42:13.284393 systemd[1]: Created slice kubepods.slice. Feb 9 08:42:13.292982 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 08:42:13.299505 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 08:42:13.317994 kubelet[1819]: I0209 08:42:13.317918 1819 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 08:42:13.318982 kubelet[1819]: E0209 08:42:13.318940 1819 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-98a543a057\" not found" Feb 9 08:42:13.319283 kubelet[1819]: I0209 08:42:13.319205 1819 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 08:42:13.360217 kubelet[1819]: I0209 08:42:13.360127 1819 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.360941 kubelet[1819]: E0209 08:42:13.360868 1819 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.113:6443/api/v1/nodes\": dial tcp 139.178.90.113:6443: connect: connection refused" node="ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.365159 kubelet[1819]: I0209 08:42:13.365079 1819 topology_manager.go:212] "Topology Admit Handler" Feb 9 08:42:13.368400 kubelet[1819]: I0209 08:42:13.368320 1819 topology_manager.go:212] "Topology Admit Handler" Feb 9 08:42:13.371772 kubelet[1819]: I0209 08:42:13.371726 1819 topology_manager.go:212] "Topology Admit Handler" Feb 9 08:42:13.384504 systemd[1]: Created slice kubepods-burstable-pod42db86427266b87187e662038d061316.slice. 
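Every "connect: connection refused" above is the same bootstrapping condition: nothing serves 139.178.90.113:6443 yet, because the kube-apiserver that will serve it is one of the static pods this kubelet is about to start. A minimal sketch of a readiness poll against that port; the loop shape is illustrative, not the kubelet's actual retry/backoff logic:

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        addr := "139.178.90.113:6443" // the API server endpoint from the log

        // Poll until something accepts TCP connections, as the kubelet's
        // clients effectively do while the static apiserver pod comes up.
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                log.Println("apiserver is accepting connections")
                return
            }
            log.Printf("not ready yet: %v", err)
            time.Sleep(time.Second)
        }
    }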
Feb 9 08:42:13.404930 systemd[1]: Created slice kubepods-burstable-pod1a7c06ce94c501cda85e2145911ad4df.slice. Feb 9 08:42:13.412640 systemd[1]: Created slice kubepods-burstable-pod5c461cb5ed6979ff1ec37d6b4751d22e.slice. Feb 9 08:42:13.457641 kubelet[1819]: I0209 08:42:13.457420 1819 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.457641 kubelet[1819]: I0209 08:42:13.457626 1819 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42db86427266b87187e662038d061316-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-98a543a057\" (UID: \"42db86427266b87187e662038d061316\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.458088 kubelet[1819]: E0209 08:42:13.457654 1819 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-98a543a057?timeout=10s\": dial tcp 139.178.90.113:6443: connect: connection refused" interval="400ms" Feb 9 08:42:13.458088 kubelet[1819]: I0209 08:42:13.457742 1819 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42db86427266b87187e662038d061316-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-98a543a057\" (UID: \"42db86427266b87187e662038d061316\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.458088 kubelet[1819]: I0209 08:42:13.457808 1819 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42db86427266b87187e662038d061316-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-98a543a057\" (UID: \"42db86427266b87187e662038d061316\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.458088 kubelet[1819]: I0209 08:42:13.457866 1819 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.458088 kubelet[1819]: I0209 08:42:13.457922 1819 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.458866 kubelet[1819]: I0209 08:42:13.457977 1819 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057" Feb 9 
08:42:13.458866 kubelet[1819]: I0209 08:42:13.458079 1819 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.458866 kubelet[1819]: I0209 08:42:13.458162 1819 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c461cb5ed6979ff1ec37d6b4751d22e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-98a543a057\" (UID: \"5c461cb5ed6979ff1ec37d6b4751d22e\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.565146 kubelet[1819]: I0209 08:42:13.565055 1819 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.565810 kubelet[1819]: E0209 08:42:13.565731 1819 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.113:6443/api/v1/nodes\": dial tcp 139.178.90.113:6443: connect: connection refused" node="ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.701863 env[1172]: time="2024-02-09T08:42:13.701730770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-98a543a057,Uid:42db86427266b87187e662038d061316,Namespace:kube-system,Attempt:0,}" Feb 9 08:42:13.711089 env[1172]: time="2024-02-09T08:42:13.710892713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-98a543a057,Uid:1a7c06ce94c501cda85e2145911ad4df,Namespace:kube-system,Attempt:0,}" Feb 9 08:42:13.718121 env[1172]: time="2024-02-09T08:42:13.718016600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-98a543a057,Uid:5c461cb5ed6979ff1ec37d6b4751d22e,Namespace:kube-system,Attempt:0,}" Feb 9 08:42:13.859621 kubelet[1819]: E0209 08:42:13.859508 1819 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-98a543a057?timeout=10s\": dial tcp 139.178.90.113:6443: connect: connection refused" interval="800ms" Feb 9 08:42:13.969799 kubelet[1819]: I0209 08:42:13.969618 1819 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98a543a057" Feb 9 08:42:13.970418 kubelet[1819]: E0209 08:42:13.970349 1819 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.113:6443/api/v1/nodes\": dial tcp 139.178.90.113:6443: connect: connection refused" node="ci-3510.3.2-a-98a543a057" Feb 9 08:42:14.185252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526389188.mount: Deactivated successfully. 
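The three "Topology Admit Handler" entries and the RunPodSandbox calls above correspond to static pod manifests the kubelet read from /etc/kubernetes/manifests (the "Adding static pod path" entry earlier). A trivial sketch listing that directory; the file names in the comment are the usual kubeadm ones, assumed rather than taken from this host:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        manifestDir := "/etc/kubernetes/manifests" // static pod path from the kubelet log

        entries, err := os.ReadDir(manifestDir)
        if err != nil {
            log.Fatal(err)
        }
        // On a kubeadm control-plane node this typically lists
        // kube-apiserver.yaml, kube-controller-manager.yaml,
        // kube-scheduler.yaml, and etcd.yaml.
        for _, e := range entries {
            fmt.Println(filepath.Join(manifestDir, e.Name()))
        }
    }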
Feb 9 08:42:14.186334 env[1172]: time="2024-02-09T08:42:14.186285176Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.187877 env[1172]: time="2024-02-09T08:42:14.187833624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.188756 env[1172]: time="2024-02-09T08:42:14.188568186Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.190220 env[1172]: time="2024-02-09T08:42:14.190173798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.190675 env[1172]: time="2024-02-09T08:42:14.190635800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.191041 env[1172]: time="2024-02-09T08:42:14.191001790Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.192572 env[1172]: time="2024-02-09T08:42:14.192516279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.194476 env[1172]: time="2024-02-09T08:42:14.194435823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.195276 env[1172]: time="2024-02-09T08:42:14.195264595Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.196065 env[1172]: time="2024-02-09T08:42:14.196016325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.196476 env[1172]: time="2024-02-09T08:42:14.196427569Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.196888 env[1172]: time="2024-02-09T08:42:14.196854114Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:14.200700 env[1172]: time="2024-02-09T08:42:14.200613686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 08:42:14.200700 env[1172]: time="2024-02-09T08:42:14.200634022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 08:42:14.200700 env[1172]: time="2024-02-09T08:42:14.200655298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 08:42:14.200796 env[1172]: time="2024-02-09T08:42:14.200753547Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c46e3f5475c023bc91c71ba576cb84067950099797472f6d8dbca79b7b807ef2 pid=1866 runtime=io.containerd.runc.v2
Feb 9 08:42:14.203867 env[1172]: time="2024-02-09T08:42:14.203830160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 08:42:14.203867 env[1172]: time="2024-02-09T08:42:14.203851906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 08:42:14.203867 env[1172]: time="2024-02-09T08:42:14.203859133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 08:42:14.204014 env[1172]: time="2024-02-09T08:42:14.203932950Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a0391928bf148e1b06b4ad2667a1035763ff6989f0cdf044df62143689b3622 pid=1888 runtime=io.containerd.runc.v2
Feb 9 08:42:14.204091 env[1172]: time="2024-02-09T08:42:14.204065731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 08:42:14.204091 env[1172]: time="2024-02-09T08:42:14.204082217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 08:42:14.204133 env[1172]: time="2024-02-09T08:42:14.204088810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 08:42:14.204157 env[1172]: time="2024-02-09T08:42:14.204142705Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e074fb59310775273708056e46adbcbd35aacbe33e19a9a8ed4194e97b266612 pid=1899 runtime=io.containerd.runc.v2
Feb 9 08:42:14.211073 systemd[1]: Started cri-containerd-0a0391928bf148e1b06b4ad2667a1035763ff6989f0cdf044df62143689b3622.scope.
Feb 9 08:42:14.212035 systemd[1]: Started cri-containerd-e074fb59310775273708056e46adbcbd35aacbe33e19a9a8ed4194e97b266612.scope.
Feb 9 08:42:14.218788 systemd[1]: Started cri-containerd-c46e3f5475c023bc91c71ba576cb84067950099797472f6d8dbca79b7b807ef2.scope.
Feb 9 08:42:14.234241 env[1172]: time="2024-02-09T08:42:14.234179051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-98a543a057,Uid:5c461cb5ed6979ff1ec37d6b4751d22e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a0391928bf148e1b06b4ad2667a1035763ff6989f0cdf044df62143689b3622\""
Feb 9 08:42:14.235784 env[1172]: time="2024-02-09T08:42:14.235770041Z" level=info msg="CreateContainer within sandbox \"0a0391928bf148e1b06b4ad2667a1035763ff6989f0cdf044df62143689b3622\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 9 08:42:14.240616 env[1172]: time="2024-02-09T08:42:14.240594290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-98a543a057,Uid:1a7c06ce94c501cda85e2145911ad4df,Namespace:kube-system,Attempt:0,} returns sandbox id \"c46e3f5475c023bc91c71ba576cb84067950099797472f6d8dbca79b7b807ef2\""
Feb 9 08:42:14.240687 env[1172]: time="2024-02-09T08:42:14.240653791Z" level=info msg="CreateContainer within sandbox \"0a0391928bf148e1b06b4ad2667a1035763ff6989f0cdf044df62143689b3622\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7bbf39dda2799628f6d065d90e9fb043feff75f39c8d893677144249fed1548\""
Feb 9 08:42:14.240940 env[1172]: time="2024-02-09T08:42:14.240929035Z" level=info msg="StartContainer for \"a7bbf39dda2799628f6d065d90e9fb043feff75f39c8d893677144249fed1548\""
Feb 9 08:42:14.241737 env[1172]: time="2024-02-09T08:42:14.241723334Z" level=info msg="CreateContainer within sandbox \"c46e3f5475c023bc91c71ba576cb84067950099797472f6d8dbca79b7b807ef2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 9 08:42:14.244969 env[1172]: time="2024-02-09T08:42:14.244931002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-98a543a057,Uid:42db86427266b87187e662038d061316,Namespace:kube-system,Attempt:0,} returns sandbox id \"e074fb59310775273708056e46adbcbd35aacbe33e19a9a8ed4194e97b266612\""
Feb 9 08:42:14.246131 env[1172]: time="2024-02-09T08:42:14.246115679Z" level=info msg="CreateContainer within sandbox \"e074fb59310775273708056e46adbcbd35aacbe33e19a9a8ed4194e97b266612\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 9 08:42:14.247117 env[1172]: time="2024-02-09T08:42:14.247102574Z" level=info msg="CreateContainer within sandbox \"c46e3f5475c023bc91c71ba576cb84067950099797472f6d8dbca79b7b807ef2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"37c7281d644e2c948c8fb170f3fb4cafc6cc4bd66b9c660c6a2f0598908b75de\""
Feb 9 08:42:14.247243 env[1172]: time="2024-02-09T08:42:14.247233516Z" level=info msg="StartContainer for \"37c7281d644e2c948c8fb170f3fb4cafc6cc4bd66b9c660c6a2f0598908b75de\""
Feb 9 08:42:14.250111 env[1172]: time="2024-02-09T08:42:14.250067183Z" level=info msg="CreateContainer within sandbox \"e074fb59310775273708056e46adbcbd35aacbe33e19a9a8ed4194e97b266612\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7ef300e4ead56c93d2c12e2550c214a51eb61fcc3948fbcd479c54297a50e2ba\""
Feb 9 08:42:14.250307 env[1172]: time="2024-02-09T08:42:14.250272182Z" level=info msg="StartContainer for \"7ef300e4ead56c93d2c12e2550c214a51eb61fcc3948fbcd479c54297a50e2ba\""
Feb 9 08:42:14.255064 systemd[1]: Started cri-containerd-37c7281d644e2c948c8fb170f3fb4cafc6cc4bd66b9c660c6a2f0598908b75de.scope.
Feb 9 08:42:14.258144 systemd[1]: Started cri-containerd-7ef300e4ead56c93d2c12e2550c214a51eb61fcc3948fbcd479c54297a50e2ba.scope.
Feb 9 08:42:14.261565 systemd[1]: Started cri-containerd-a7bbf39dda2799628f6d065d90e9fb043feff75f39c8d893677144249fed1548.scope.
Feb 9 08:42:14.290449 env[1172]: time="2024-02-09T08:42:14.290413410Z" level=info msg="StartContainer for \"a7bbf39dda2799628f6d065d90e9fb043feff75f39c8d893677144249fed1548\" returns successfully"
Feb 9 08:42:14.290557 env[1172]: time="2024-02-09T08:42:14.290495639Z" level=info msg="StartContainer for \"7ef300e4ead56c93d2c12e2550c214a51eb61fcc3948fbcd479c54297a50e2ba\" returns successfully"
Feb 9 08:42:14.293608 env[1172]: time="2024-02-09T08:42:14.293579158Z" level=info msg="StartContainer for \"37c7281d644e2c948c8fb170f3fb4cafc6cc4bd66b9c660c6a2f0598908b75de\" returns successfully"
Feb 9 08:42:14.293975 kubelet[1819]: W0209 08:42:14.293948 1819 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.90.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 08:42:14.294136 kubelet[1819]: E0209 08:42:14.293984 1819 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.90.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 08:42:14.772359 kubelet[1819]: I0209 08:42:14.772343 1819 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98a543a057"
Feb 9 08:42:14.952114 kubelet[1819]: E0209 08:42:14.952093 1819 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-98a543a057\" not found" node="ci-3510.3.2-a-98a543a057"
Feb 9 08:42:15.058171 kubelet[1819]: I0209 08:42:15.058081 1819 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-98a543a057"
Feb 9 08:42:15.240546 kubelet[1819]: I0209 08:42:15.240444 1819 apiserver.go:52] "Watching apiserver"
Feb 9 08:42:15.255057 kubelet[1819]: I0209 08:42:15.254948 1819 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 9 08:42:15.269212 kubelet[1819]: I0209 08:42:15.269166 1819 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 08:42:15.282963 kubelet[1819]: E0209 08:42:15.282877 1819 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-98a543a057\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:15.282963 kubelet[1819]: E0209 08:42:15.282914 1819 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:15.282963 kubelet[1819]: E0209 08:42:15.282919 1819 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-98a543a057\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:16.287698 kubelet[1819]: W0209 08:42:16.287634 1819 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 08:42:16.288510 kubelet[1819]: W0209 08:42:16.287730 1819 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 08:42:16.288510 kubelet[1819]: W0209 08:42:16.288385 1819 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 08:42:18.215919 systemd[1]: Reloading.
Feb 9 08:42:18.265037 /usr/lib/systemd/system-generators/torcx-generator[2149]: time="2024-02-09T08:42:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 08:42:18.265059 /usr/lib/systemd/system-generators/torcx-generator[2149]: time="2024-02-09T08:42:18Z" level=info msg="torcx already run"
Feb 9 08:42:18.317841 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 08:42:18.317850 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 08:42:18.330638 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 08:42:18.394947 systemd[1]: Stopping kubelet.service...
Feb 9 08:42:18.413932 systemd[1]: kubelet.service: Deactivated successfully.
Feb 9 08:42:18.414040 systemd[1]: Stopped kubelet.service.
Feb 9 08:42:18.414957 systemd[1]: Started kubelet.service.
Feb 9 08:42:18.443960 kubelet[2206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 08:42:18.443960 kubelet[2206]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 08:42:18.443960 kubelet[2206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 08:42:18.443960 kubelet[2206]: I0209 08:42:18.443948 2206 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 08:42:18.446447 kubelet[2206]: I0209 08:42:18.446437 2206 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 9 08:42:18.446447 kubelet[2206]: I0209 08:42:18.446448 2206 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 08:42:18.446586 kubelet[2206]: I0209 08:42:18.446556 2206 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 9 08:42:18.447417 kubelet[2206]: I0209 08:42:18.447411 2206 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 9 08:42:18.447967 kubelet[2206]: I0209 08:42:18.447955 2206 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 08:42:18.465728 kubelet[2206]: I0209 08:42:18.465673 2206 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 08:42:18.465792 kubelet[2206]: I0209 08:42:18.465774 2206 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 08:42:18.465868 kubelet[2206]: I0209 08:42:18.465832 2206 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 08:42:18.465868 kubelet[2206]: I0209 08:42:18.465845 2206 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 08:42:18.465868 kubelet[2206]: I0209 08:42:18.465851 2206 container_manager_linux.go:302] "Creating device plugin manager"
Feb 9 08:42:18.465868 kubelet[2206]: I0209 08:42:18.465867 2206 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 08:42:18.467137 kubelet[2206]: I0209 08:42:18.467106 2206 kubelet.go:405] "Attempting to sync node with API server"
Feb 9 08:42:18.467137 kubelet[2206]: I0209 08:42:18.467116 2206 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 08:42:18.467137 kubelet[2206]: I0209 08:42:18.467126 2206 kubelet.go:309] "Adding apiserver pod source"
Feb 9 08:42:18.467137 kubelet[2206]: I0209 08:42:18.467134 2206 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 08:42:18.467368 kubelet[2206]: I0209 08:42:18.467356 2206 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 08:42:18.467700 kubelet[2206]: I0209 08:42:18.467660 2206 server.go:1168] "Started kubelet"
Feb 9 08:42:18.467742 kubelet[2206]: I0209 08:42:18.467703 2206 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 08:42:18.467768 kubelet[2206]: I0209 08:42:18.467739 2206 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 08:42:18.468003 kubelet[2206]: E0209 08:42:18.467992 2206 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 08:42:18.468035 kubelet[2206]: E0209 08:42:18.468009 2206 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 08:42:18.468274 kubelet[2206]: I0209 08:42:18.468265 2206 server.go:461] "Adding debug handlers to kubelet server"
Feb 9 08:42:18.468362 kubelet[2206]: I0209 08:42:18.468354 2206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 08:42:18.468411 kubelet[2206]: I0209 08:42:18.468382 2206 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 9 08:42:18.468583 kubelet[2206]: I0209 08:42:18.468571 2206 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 9 08:42:18.473490 kubelet[2206]: I0209 08:42:18.473476 2206 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 08:42:18.474077 kubelet[2206]: I0209 08:42:18.474066 2206 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 08:42:18.474137 kubelet[2206]: I0209 08:42:18.474084 2206 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 9 08:42:18.474137 kubelet[2206]: I0209 08:42:18.474097 2206 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 9 08:42:18.474137 kubelet[2206]: E0209 08:42:18.474136 2206 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 08:42:18.488254 kubelet[2206]: I0209 08:42:18.488204 2206 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 08:42:18.488254 kubelet[2206]: I0209 08:42:18.488220 2206 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 08:42:18.488254 kubelet[2206]: I0209 08:42:18.488232 2206 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 08:42:18.488374 kubelet[2206]: I0209 08:42:18.488315 2206 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 9 08:42:18.488374 kubelet[2206]: I0209 08:42:18.488324 2206 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 9 08:42:18.488374 kubelet[2206]: I0209 08:42:18.488327 2206 policy_none.go:49] "None policy: Start"
Feb 9 08:42:18.488611 kubelet[2206]: I0209 08:42:18.488603 2206 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 08:42:18.488646 kubelet[2206]: I0209 08:42:18.488615 2206 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 08:42:18.488688 kubelet[2206]: I0209 08:42:18.488682 2206 state_mem.go:75] "Updated machine memory state"
Feb 9 08:42:18.490403 kubelet[2206]: I0209 08:42:18.490395 2206 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 08:42:18.490515 kubelet[2206]: I0209 08:42:18.490509 2206 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 08:42:18.551692 sudo[2248]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 9 08:42:18.552057 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 9 08:42:18.573891 kubelet[2206]: I0209 08:42:18.573811 2206 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.574414 kubelet[2206]: I0209 08:42:18.574334 2206 topology_manager.go:212] "Topology Admit Handler"
Feb 9 08:42:18.574603 kubelet[2206]: I0209 08:42:18.574490 2206 topology_manager.go:212] "Topology Admit Handler"
Feb 9 08:42:18.574747 kubelet[2206]: I0209 08:42:18.574615 2206 topology_manager.go:212] "Topology Admit Handler"
Feb 9 08:42:18.582805 kubelet[2206]: W0209 08:42:18.582757 2206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 08:42:18.583017 kubelet[2206]: E0209 08:42:18.582914 2206 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.583351 kubelet[2206]: W0209 08:42:18.583292 2206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 08:42:18.583476 kubelet[2206]: E0209 08:42:18.583398 2206 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-98a543a057\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.583476 kubelet[2206]: W0209 08:42:18.583417 2206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 08:42:18.583698 kubelet[2206]: E0209 08:42:18.583590 2206 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-98a543a057\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.584216 kubelet[2206]: I0209 08:42:18.584174 2206 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.584389 kubelet[2206]: I0209 08:42:18.584305 2206 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.769883 kubelet[2206]: I0209 08:42:18.769771 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.769883 kubelet[2206]: I0209 08:42:18.769794 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.769883 kubelet[2206]: I0209 08:42:18.769864 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.770062 kubelet[2206]: I0209 08:42:18.769889 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.770062 kubelet[2206]: I0209 08:42:18.769910 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42db86427266b87187e662038d061316-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-98a543a057\" (UID: \"42db86427266b87187e662038d061316\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.770062 kubelet[2206]: I0209 08:42:18.769924 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a7c06ce94c501cda85e2145911ad4df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" (UID: \"1a7c06ce94c501cda85e2145911ad4df\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.770062 kubelet[2206]: I0209 08:42:18.769939 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c461cb5ed6979ff1ec37d6b4751d22e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-98a543a057\" (UID: \"5c461cb5ed6979ff1ec37d6b4751d22e\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.770062 kubelet[2206]: I0209 08:42:18.769969 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42db86427266b87187e662038d061316-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-98a543a057\" (UID: \"42db86427266b87187e662038d061316\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.770203 kubelet[2206]: I0209 08:42:18.769992 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42db86427266b87187e662038d061316-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-98a543a057\" (UID: \"42db86427266b87187e662038d061316\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:18.981043 sudo[2248]: pam_unix(sudo:session): session closed for user root
Feb 9 08:42:19.468456 kubelet[2206]: I0209 08:42:19.468337 2206 apiserver.go:52] "Watching apiserver"
Feb 9 08:42:19.488087 kubelet[2206]: W0209 08:42:19.488001 2206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 08:42:19.488729 kubelet[2206]: E0209 08:42:19.488611 2206 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-98a543a057\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:19.491033 kubelet[2206]: W0209 08:42:19.491023 2206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 08:42:19.491261 kubelet[2206]: E0209 08:42:19.491251 2206 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-98a543a057\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:19.491337 kubelet[2206]: W0209 08:42:19.491306 2206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 08:42:19.491424 kubelet[2206]: E0209 08:42:19.491350 2206 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-98a543a057\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057"
Feb 9 08:42:19.499589 kubelet[2206]: I0209 08:42:19.499563 2206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98a543a057" podStartSLOduration=3.499513744 podCreationTimestamp="2024-02-09 08:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:42:19.499481573 +0000 UTC m=+1.082907441" watchObservedRunningTime="2024-02-09 08:42:19.499513744 +0000 UTC m=+1.082939610"
Feb 9 08:42:19.503739 kubelet[2206]: I0209 08:42:19.503727 2206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-98a543a057" podStartSLOduration=3.503689138 podCreationTimestamp="2024-02-09 08:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:42:19.503609541 +0000 UTC m=+1.087035411" watchObservedRunningTime="2024-02-09 08:42:19.503689138 +0000 UTC m=+1.087115003"
Feb 9 08:42:19.507671 kubelet[2206]: I0209 08:42:19.507629 2206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-98a543a057" podStartSLOduration=3.507601114 podCreationTimestamp="2024-02-09 08:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:42:19.507591879 +0000 UTC m=+1.091017748" watchObservedRunningTime="2024-02-09 08:42:19.507601114 +0000 UTC m=+1.091026979"
Feb 9 08:42:19.570188 kubelet[2206]: I0209 08:42:19.570097 2206 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 9 08:42:19.575059 kubelet[2206]: I0209 08:42:19.574960 2206 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 08:42:20.331747 sudo[1272]: pam_unix(sudo:session): session closed for user root
Feb 9 08:42:20.332672 sshd[1269]: pam_unix(sshd:session): session closed for user core
Feb 9 08:42:20.334312 systemd[1]: sshd@4-139.178.90.113:22-147.75.109.163:56990.service: Deactivated successfully.
Feb 9 08:42:20.334796 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 08:42:20.334901 systemd[1]: session-7.scope: Consumed 3.033s CPU time.
Feb 9 08:42:20.335243 systemd-logind[1160]: Session 7 logged out. Waiting for processes to exit.
Feb 9 08:42:20.335850 systemd-logind[1160]: Removed session 7.
Feb 9 08:42:27.827264 update_engine[1162]: I0209 08:42:27.827150 1162 update_attempter.cc:509] Updating boot flags...
Feb 9 08:42:30.459663 kubelet[2206]: I0209 08:42:30.459604 2206 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 08:42:30.460791 env[1172]: time="2024-02-09T08:42:30.460666795Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 08:42:30.461672 kubelet[2206]: I0209 08:42:30.461134 2206 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 08:42:31.227269 kubelet[2206]: I0209 08:42:31.227174 2206 topology_manager.go:212] "Topology Admit Handler"
Feb 9 08:42:31.235031 kubelet[2206]: I0209 08:42:31.234924 2206 topology_manager.go:212] "Topology Admit Handler"
Feb 9 08:42:31.245005 systemd[1]: Created slice kubepods-besteffort-pod4fe42f1b_558d_43cc_804c_2d55dac41a56.slice.
Feb 9 08:42:31.253655 kubelet[2206]: I0209 08:42:31.253608 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-lib-modules\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.253855 kubelet[2206]: I0209 08:42:31.253698 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-xtables-lock\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.253855 kubelet[2206]: I0209 08:42:31.253768 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-hostproc\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.253855 kubelet[2206]: I0209 08:42:31.253817 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-config-path\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.254157 kubelet[2206]: I0209 08:42:31.253867 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fe42f1b-558d-43cc-804c-2d55dac41a56-lib-modules\") pod \"kube-proxy-t8nv5\" (UID: \"4fe42f1b-558d-43cc-804c-2d55dac41a56\") " pod="kube-system/kube-proxy-t8nv5"
Feb 9 08:42:31.254157 kubelet[2206]: I0209 08:42:31.253917 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-bpf-maps\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.254157 kubelet[2206]: I0209 08:42:31.253995 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cni-path\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.254157 kubelet[2206]: I0209 08:42:31.254063 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75b8h\" (UniqueName: \"kubernetes.io/projected/d8c083d8-4038-4a17-96ef-f77304ed2f26-kube-api-access-75b8h\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.254157 kubelet[2206]: I0209 08:42:31.254115 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xzjs\" (UniqueName: \"kubernetes.io/projected/4fe42f1b-558d-43cc-804c-2d55dac41a56-kube-api-access-6xzjs\") pod \"kube-proxy-t8nv5\" (UID: \"4fe42f1b-558d-43cc-804c-2d55dac41a56\") " pod="kube-system/kube-proxy-t8nv5"
Feb 9 08:42:31.254157 kubelet[2206]: I0209 08:42:31.254161 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-run\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.254782 kubelet[2206]: I0209 08:42:31.254203 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-host-proc-sys-net\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.254782 kubelet[2206]: I0209 08:42:31.254247 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-host-proc-sys-kernel\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.254782 kubelet[2206]: I0209 08:42:31.254378 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fe42f1b-558d-43cc-804c-2d55dac41a56-xtables-lock\") pod \"kube-proxy-t8nv5\" (UID: \"4fe42f1b-558d-43cc-804c-2d55dac41a56\") " pod="kube-system/kube-proxy-t8nv5"
Feb 9 08:42:31.254782 kubelet[2206]: I0209 08:42:31.254463 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-etc-cni-netd\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.254782 kubelet[2206]: I0209 08:42:31.254549 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8c083d8-4038-4a17-96ef-f77304ed2f26-clustermesh-secrets\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.255194 kubelet[2206]: I0209 08:42:31.254622 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4fe42f1b-558d-43cc-804c-2d55dac41a56-kube-proxy\") pod \"kube-proxy-t8nv5\" (UID: \"4fe42f1b-558d-43cc-804c-2d55dac41a56\") " pod="kube-system/kube-proxy-t8nv5"
Feb 9 08:42:31.255194 kubelet[2206]: I0209 08:42:31.254715 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8c083d8-4038-4a17-96ef-f77304ed2f26-hubble-tls\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.255194 kubelet[2206]: I0209 08:42:31.254768 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-cgroup\") pod \"cilium-clq4m\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " pod="kube-system/cilium-clq4m"
Feb 9 08:42:31.272009 systemd[1]: Created slice kubepods-burstable-podd8c083d8_4038_4a17_96ef_f77304ed2f26.slice.
Feb 9 08:42:31.438326 kubelet[2206]: I0209 08:42:31.438256 2206 topology_manager.go:212] "Topology Admit Handler"
Feb 9 08:42:31.444295 systemd[1]: Created slice kubepods-besteffort-pod78b54691_079f_4b9a_987c_a77a9fec16d7.slice.
Feb 9 08:42:31.457020 kubelet[2206]: I0209 08:42:31.456919 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78b54691-079f-4b9a-987c-a77a9fec16d7-cilium-config-path\") pod \"cilium-operator-574c4bb98d-rncpt\" (UID: \"78b54691-079f-4b9a-987c-a77a9fec16d7\") " pod="kube-system/cilium-operator-574c4bb98d-rncpt"
Feb 9 08:42:31.457218 kubelet[2206]: I0209 08:42:31.457091 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf288\" (UniqueName: \"kubernetes.io/projected/78b54691-079f-4b9a-987c-a77a9fec16d7-kube-api-access-bf288\") pod \"cilium-operator-574c4bb98d-rncpt\" (UID: \"78b54691-079f-4b9a-987c-a77a9fec16d7\") " pod="kube-system/cilium-operator-574c4bb98d-rncpt"
Feb 9 08:42:31.572882 env[1172]: time="2024-02-09T08:42:31.572728900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8nv5,Uid:4fe42f1b-558d-43cc-804c-2d55dac41a56,Namespace:kube-system,Attempt:0,}"
Feb 9 08:42:31.574971 env[1172]: time="2024-02-09T08:42:31.574877490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clq4m,Uid:d8c083d8-4038-4a17-96ef-f77304ed2f26,Namespace:kube-system,Attempt:0,}"
Feb 9 08:42:31.599585 env[1172]: time="2024-02-09T08:42:31.599422454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 08:42:31.599585 env[1172]: time="2024-02-09T08:42:31.599547981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 08:42:31.599943 env[1172]: time="2024-02-09T08:42:31.599595983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 08:42:31.600220 env[1172]: time="2024-02-09T08:42:31.600052161Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a77a532913b7238fe5244ad99456f0029d575bd9544727901e71fd3c2ec126e pid=2381 runtime=io.containerd.runc.v2
Feb 9 08:42:31.604246 env[1172]: time="2024-02-09T08:42:31.604082670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 08:42:31.604246 env[1172]: time="2024-02-09T08:42:31.604175098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 08:42:31.604620 env[1172]: time="2024-02-09T08:42:31.604232552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 08:42:31.604777 env[1172]: time="2024-02-09T08:42:31.604585756Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283 pid=2394 runtime=io.containerd.runc.v2
Feb 9 08:42:31.629633 systemd[1]: Started cri-containerd-80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283.scope.
Feb 9 08:42:31.640962 systemd[1]: Started cri-containerd-2a77a532913b7238fe5244ad99456f0029d575bd9544727901e71fd3c2ec126e.scope.
Feb 9 08:42:31.675156 env[1172]: time="2024-02-09T08:42:31.675059947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8nv5,Uid:4fe42f1b-558d-43cc-804c-2d55dac41a56,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a77a532913b7238fe5244ad99456f0029d575bd9544727901e71fd3c2ec126e\""
Feb 9 08:42:31.679171 env[1172]: time="2024-02-09T08:42:31.679079479Z" level=info msg="CreateContainer within sandbox \"2a77a532913b7238fe5244ad99456f0029d575bd9544727901e71fd3c2ec126e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 08:42:31.679171 env[1172]: time="2024-02-09T08:42:31.679142879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clq4m,Uid:d8c083d8-4038-4a17-96ef-f77304ed2f26,Namespace:kube-system,Attempt:0,} returns sandbox id \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\""
Feb 9 08:42:31.681438 env[1172]: time="2024-02-09T08:42:31.681372105Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 08:42:31.692445 env[1172]: time="2024-02-09T08:42:31.692387772Z" level=info msg="CreateContainer within sandbox \"2a77a532913b7238fe5244ad99456f0029d575bd9544727901e71fd3c2ec126e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65ba6c46e395d5e094607fa87bde181e65cc1c660c47fabb266a11806715da2a\""
Feb 9 08:42:31.693204 env[1172]: time="2024-02-09T08:42:31.693123157Z" level=info msg="StartContainer for \"65ba6c46e395d5e094607fa87bde181e65cc1c660c47fabb266a11806715da2a\""
Feb 9 08:42:31.719237 systemd[1]: Started cri-containerd-65ba6c46e395d5e094607fa87bde181e65cc1c660c47fabb266a11806715da2a.scope.
Feb 9 08:42:31.747883 env[1172]: time="2024-02-09T08:42:31.747798009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-rncpt,Uid:78b54691-079f-4b9a-987c-a77a9fec16d7,Namespace:kube-system,Attempt:0,}"
Feb 9 08:42:31.765336 env[1172]: time="2024-02-09T08:42:31.765198124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 08:42:31.765336 env[1172]: time="2024-02-09T08:42:31.765260424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 08:42:31.765336 env[1172]: time="2024-02-09T08:42:31.765284228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 08:42:31.765756 env[1172]: time="2024-02-09T08:42:31.765477321Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf pid=2489 runtime=io.containerd.runc.v2
Feb 9 08:42:31.775222 env[1172]: time="2024-02-09T08:42:31.775129974Z" level=info msg="StartContainer for \"65ba6c46e395d5e094607fa87bde181e65cc1c660c47fabb266a11806715da2a\" returns successfully"
Feb 9 08:42:31.794843 systemd[1]: Started cri-containerd-99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf.scope.
Feb 9 08:42:31.856065 env[1172]: time="2024-02-09T08:42:31.855958983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-rncpt,Uid:78b54691-079f-4b9a-987c-a77a9fec16d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\""
Feb 9 08:42:32.535020 kubelet[2206]: I0209 08:42:32.534921 2206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-t8nv5" podStartSLOduration=1.534840019 podCreationTimestamp="2024-02-09 08:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:42:32.534692543 +0000 UTC m=+14.118118488" watchObservedRunningTime="2024-02-09 08:42:32.534840019 +0000 UTC m=+14.118265950"
Feb 9 08:42:35.865237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1044473057.mount: Deactivated successfully.
Feb 9 08:42:37.554814 env[1172]: time="2024-02-09T08:42:37.554734459Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:37.555376 env[1172]: time="2024-02-09T08:42:37.555341793Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:37.556216 env[1172]: time="2024-02-09T08:42:37.556171190Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:37.556952 env[1172]: time="2024-02-09T08:42:37.556916952Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 9 08:42:37.557265 env[1172]: time="2024-02-09T08:42:37.557231879Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 08:42:37.557961 env[1172]: time="2024-02-09T08:42:37.557920287Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 08:42:37.563583 env[1172]: time="2024-02-09T08:42:37.563530420Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\""
Feb 9 08:42:37.563786 env[1172]: time="2024-02-09T08:42:37.563740972Z" level=info msg="StartContainer for \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\""
Feb 9 08:42:37.586039 systemd[1]: Started cri-containerd-45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823.scope.
Feb 9 08:42:37.614389 env[1172]: time="2024-02-09T08:42:37.614308528Z" level=info msg="StartContainer for \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\" returns successfully"
Feb 9 08:42:37.622995 systemd[1]: cri-containerd-45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823.scope: Deactivated successfully.
Feb 9 08:42:38.563121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823-rootfs.mount: Deactivated successfully.
Feb 9 08:42:39.974734 env[1172]: time="2024-02-09T08:42:39.974594382Z" level=info msg="shim disconnected" id=45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823
Feb 9 08:42:39.974734 env[1172]: time="2024-02-09T08:42:39.974701515Z" level=warning msg="cleaning up after shim disconnected" id=45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823 namespace=k8s.io
Feb 9 08:42:39.974734 env[1172]: time="2024-02-09T08:42:39.974730842Z" level=info msg="cleaning up dead shim"
Feb 9 08:42:39.990318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693168382.mount: Deactivated successfully.
Feb 9 08:42:40.000429 env[1172]: time="2024-02-09T08:42:40.000389573Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:42:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2710 runtime=io.containerd.runc.v2\n"
Feb 9 08:42:40.389555 env[1172]: time="2024-02-09T08:42:40.389508250Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:40.390072 env[1172]: time="2024-02-09T08:42:40.390030873Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:40.390742 env[1172]: time="2024-02-09T08:42:40.390698608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 08:42:40.391246 env[1172]: time="2024-02-09T08:42:40.391203313Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 08:42:40.392188 env[1172]: time="2024-02-09T08:42:40.392175571Z" level=info msg="CreateContainer within sandbox \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 08:42:40.396749 env[1172]: time="2024-02-09T08:42:40.396706386Z" level=info msg="CreateContainer within sandbox \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\""
Feb 9 08:42:40.397055 env[1172]: time="2024-02-09T08:42:40.397012355Z" level=info msg="StartContainer for \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\""
Feb 9 08:42:40.417231 systemd[1]: Started cri-containerd-0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210.scope.
Feb 9 08:42:40.441387 env[1172]: time="2024-02-09T08:42:40.441360978Z" level=info msg="StartContainer for \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\" returns successfully"
Feb 9 08:42:40.526934 env[1172]: time="2024-02-09T08:42:40.526906950Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 08:42:40.531668 env[1172]: time="2024-02-09T08:42:40.531617902Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\""
Feb 9 08:42:40.531905 env[1172]: time="2024-02-09T08:42:40.531890000Z" level=info msg="StartContainer for \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\""
Feb 9 08:42:40.545025 kubelet[2206]: I0209 08:42:40.545004 2206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-rncpt" podStartSLOduration=1.010359138 podCreationTimestamp="2024-02-09 08:42:31 +0000 UTC" firstStartedPulling="2024-02-09 08:42:31.856756901 +0000 UTC m=+13.440182773" lastFinishedPulling="2024-02-09 08:42:40.391373062 +0000 UTC m=+21.974798931" observedRunningTime="2024-02-09 08:42:40.544719442 +0000 UTC m=+22.128145317" watchObservedRunningTime="2024-02-09 08:42:40.544975296 +0000 UTC m=+22.128401171"
Feb 9 08:42:40.553982 systemd[1]: Started cri-containerd-adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407.scope.
Feb 9 08:42:40.572927 env[1172]: time="2024-02-09T08:42:40.572882849Z" level=info msg="StartContainer for \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\" returns successfully"
Feb 9 08:42:40.584947 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 08:42:40.585267 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 08:42:40.585445 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 08:42:40.586951 systemd[1]: Starting systemd-sysctl.service...
Feb 9 08:42:40.587367 systemd[1]: cri-containerd-adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407.scope: Deactivated successfully.
Feb 9 08:42:40.594708 systemd[1]: Finished systemd-sysctl.service.
Feb 9 08:42:40.762038 env[1172]: time="2024-02-09T08:42:40.761959581Z" level=info msg="shim disconnected" id=adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407
Feb 9 08:42:40.762038 env[1172]: time="2024-02-09T08:42:40.762000192Z" level=warning msg="cleaning up after shim disconnected" id=adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407 namespace=k8s.io
Feb 9 08:42:40.762038 env[1172]: time="2024-02-09T08:42:40.762011260Z" level=info msg="cleaning up dead shim"
Feb 9 08:42:40.766707 env[1172]: time="2024-02-09T08:42:40.766653821Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:42:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2822 runtime=io.containerd.runc.v2\n"
Feb 9 08:42:41.538704 env[1172]: time="2024-02-09T08:42:41.538569513Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 08:42:41.560950 env[1172]: time="2024-02-09T08:42:41.560803008Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\""
Feb 9 08:42:41.561949 env[1172]: time="2024-02-09T08:42:41.561864328Z" level=info msg="StartContainer for \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\""
Feb 9 08:42:41.598989 systemd[1]: Started cri-containerd-d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1.scope.
Feb 9 08:42:41.642043 env[1172]: time="2024-02-09T08:42:41.641994239Z" level=info msg="StartContainer for \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\" returns successfully"
Feb 9 08:42:41.644954 systemd[1]: cri-containerd-d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1.scope: Deactivated successfully.
Feb 9 08:42:41.695155 env[1172]: time="2024-02-09T08:42:41.695065147Z" level=info msg="shim disconnected" id=d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1
Feb 9 08:42:41.695533 env[1172]: time="2024-02-09T08:42:41.695157447Z" level=warning msg="cleaning up after shim disconnected" id=d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1 namespace=k8s.io
Feb 9 08:42:41.695533 env[1172]: time="2024-02-09T08:42:41.695185717Z" level=info msg="cleaning up dead shim"
Feb 9 08:42:41.711185 env[1172]: time="2024-02-09T08:42:41.711070724Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:42:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2879 runtime=io.containerd.runc.v2\n"
Feb 9 08:42:41.988465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1-rootfs.mount: Deactivated successfully.
Feb 9 08:42:42.545778 env[1172]: time="2024-02-09T08:42:42.545650655Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 08:42:42.559430 env[1172]: time="2024-02-09T08:42:42.559334136Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\""
Feb 9 08:42:42.560127 env[1172]: time="2024-02-09T08:42:42.560057832Z" level=info msg="StartContainer for \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\""
Feb 9 08:42:42.565702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663726644.mount: Deactivated successfully.
Feb 9 08:42:42.588676 systemd[1]: Started cri-containerd-0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15.scope.
Feb 9 08:42:42.600495 systemd[1]: cri-containerd-0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15.scope: Deactivated successfully.
Feb 9 08:42:42.600731 env[1172]: time="2024-02-09T08:42:42.600709977Z" level=info msg="StartContainer for \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\" returns successfully"
Feb 9 08:42:42.640927 env[1172]: time="2024-02-09T08:42:42.640872366Z" level=info msg="shim disconnected" id=0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15
Feb 9 08:42:42.641072 env[1172]: time="2024-02-09T08:42:42.640927008Z" level=warning msg="cleaning up after shim disconnected" id=0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15 namespace=k8s.io
Feb 9 08:42:42.641072 env[1172]: time="2024-02-09T08:42:42.640939410Z" level=info msg="cleaning up dead shim"
Feb 9 08:42:42.647132 env[1172]: time="2024-02-09T08:42:42.647099317Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:42:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2933 runtime=io.containerd.runc.v2\n"
Feb 9 08:42:42.989564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15-rootfs.mount: Deactivated successfully.
Feb 9 08:42:43.555554 env[1172]: time="2024-02-09T08:42:43.555410509Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 08:42:43.579446 env[1172]: time="2024-02-09T08:42:43.579324698Z" level=info msg="CreateContainer within sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\""
Feb 9 08:42:43.580074 env[1172]: time="2024-02-09T08:42:43.579984164Z" level=info msg="StartContainer for \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\""
Feb 9 08:42:43.611858 systemd[1]: Started cri-containerd-664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e.scope.
Feb 9 08:42:43.649479 env[1172]: time="2024-02-09T08:42:43.649433348Z" level=info msg="StartContainer for \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\" returns successfully"
Feb 9 08:42:43.728592 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 08:42:43.786079 kubelet[2206]: I0209 08:42:43.786062 2206 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 08:42:43.796095 kubelet[2206]: I0209 08:42:43.796077 2206 topology_manager.go:212] "Topology Admit Handler" Feb 9 08:42:43.796976 kubelet[2206]: I0209 08:42:43.796962 2206 topology_manager.go:212] "Topology Admit Handler" Feb 9 08:42:43.799574 systemd[1]: Created slice kubepods-burstable-pod6d4d49ef_b4f5_4cb3_b0d1_0843baedb4f8.slice. Feb 9 08:42:43.802133 systemd[1]: Created slice kubepods-burstable-pod2ea858a4_78e9_424c_b13c_d92fbee0e7f4.slice. Feb 9 08:42:43.843948 kubelet[2206]: I0209 08:42:43.843873 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ea858a4-78e9-424c-b13c-d92fbee0e7f4-config-volume\") pod \"coredns-5d78c9869d-kwfhr\" (UID: \"2ea858a4-78e9-424c-b13c-d92fbee0e7f4\") " pod="kube-system/coredns-5d78c9869d-kwfhr" Feb 9 08:42:43.843948 kubelet[2206]: I0209 08:42:43.843900 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d4d49ef-b4f5-4cb3-b0d1-0843baedb4f8-config-volume\") pod \"coredns-5d78c9869d-p7ll4\" (UID: \"6d4d49ef-b4f5-4cb3-b0d1-0843baedb4f8\") " pod="kube-system/coredns-5d78c9869d-p7ll4" Feb 9 08:42:43.843948 kubelet[2206]: I0209 08:42:43.843916 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9qzf\" (UniqueName: \"kubernetes.io/projected/6d4d49ef-b4f5-4cb3-b0d1-0843baedb4f8-kube-api-access-h9qzf\") pod \"coredns-5d78c9869d-p7ll4\" (UID: \"6d4d49ef-b4f5-4cb3-b0d1-0843baedb4f8\") " pod="kube-system/coredns-5d78c9869d-p7ll4" Feb 9 08:42:43.843948 kubelet[2206]: I0209 08:42:43.843930 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjhj5\" (UniqueName: \"kubernetes.io/projected/2ea858a4-78e9-424c-b13c-d92fbee0e7f4-kube-api-access-gjhj5\") pod \"coredns-5d78c9869d-kwfhr\" (UID: \"2ea858a4-78e9-424c-b13c-d92fbee0e7f4\") " pod="kube-system/coredns-5d78c9869d-kwfhr" Feb 9 08:42:43.882530 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 08:42:44.102327 env[1172]: time="2024-02-09T08:42:44.102120953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-p7ll4,Uid:6d4d49ef-b4f5-4cb3-b0d1-0843baedb4f8,Namespace:kube-system,Attempt:0,}" Feb 9 08:42:44.104407 env[1172]: time="2024-02-09T08:42:44.104304381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-kwfhr,Uid:2ea858a4-78e9-424c-b13c-d92fbee0e7f4,Namespace:kube-system,Attempt:0,}" Feb 9 08:42:44.597042 kubelet[2206]: I0209 08:42:44.596992 2206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-clq4m" podStartSLOduration=7.720477471 podCreationTimestamp="2024-02-09 08:42:31 +0000 UTC" firstStartedPulling="2024-02-09 08:42:31.68068204 +0000 UTC m=+13.264107933" lastFinishedPulling="2024-02-09 08:42:37.557128245 +0000 UTC m=+19.140554113" observedRunningTime="2024-02-09 08:42:44.596085408 +0000 UTC m=+26.179511317" watchObservedRunningTime="2024-02-09 08:42:44.596923651 +0000 UTC m=+26.180349547" Feb 9 08:42:45.478393 systemd-networkd[1012]: cilium_host: Link UP Feb 9 08:42:45.478466 systemd-networkd[1012]: cilium_net: Link UP Feb 9 08:42:45.492630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 08:42:45.492711 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 08:42:45.492865 systemd-networkd[1012]: cilium_net: Gained carrier Feb 9 08:42:45.492995 systemd-networkd[1012]: cilium_host: Gained carrier Feb 9 08:42:45.541594 systemd-networkd[1012]: cilium_vxlan: Link UP Feb 9 08:42:45.541598 systemd-networkd[1012]: cilium_vxlan: Gained carrier Feb 9 08:42:45.623596 systemd-networkd[1012]: cilium_host: Gained IPv6LL Feb 9 08:42:45.675605 kernel: NET: Registered PF_ALG protocol family Feb 9 08:42:45.991582 systemd-networkd[1012]: cilium_net: Gained IPv6LL Feb 9 08:42:46.158424 systemd-networkd[1012]: lxc_health: Link UP Feb 9 08:42:46.180425 systemd-networkd[1012]: lxc_health: Gained carrier Feb 9 08:42:46.180553 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 08:42:46.652049 systemd-networkd[1012]: lxc8899410def05: Link UP Feb 9 08:42:46.671537 kernel: eth0: renamed from tmpd8e5e Feb 9 08:42:46.700630 kernel: eth0: renamed from tmp455c8 Feb 9 08:42:46.722842 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 08:42:46.722874 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8899410def05: link becomes ready Feb 9 08:42:46.723561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd6ba57e1b505: link becomes ready Feb 9 08:42:46.730621 systemd-networkd[1012]: lxcd6ba57e1b505: Link UP Feb 9 08:42:46.730872 systemd-networkd[1012]: lxc8899410def05: Gained carrier Feb 9 08:42:46.731088 systemd-networkd[1012]: lxcd6ba57e1b505: Gained carrier Feb 9 08:42:47.295645 systemd-networkd[1012]: cilium_vxlan: Gained IPv6LL Feb 9 08:42:47.615720 systemd-networkd[1012]: lxc_health: Gained IPv6LL Feb 9 08:42:48.127698 systemd-networkd[1012]: lxcd6ba57e1b505: Gained IPv6LL Feb 9 08:42:48.703674 systemd-networkd[1012]: lxc8899410def05: Gained IPv6LL Feb 9 08:42:49.082400 env[1172]: time="2024-02-09T08:42:49.082362495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:42:49.082400 env[1172]: time="2024-02-09T08:42:49.082385251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:42:49.082400 env[1172]: time="2024-02-09T08:42:49.082392001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:42:49.082664 env[1172]: time="2024-02-09T08:42:49.082453527Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8e5ee1c6ee434db608febb72538e04b6aae77b9def8b48885cc7cda569dd013 pid=3621 runtime=io.containerd.runc.v2 Feb 9 08:42:49.082664 env[1172]: time="2024-02-09T08:42:49.082476325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:42:49.082664 env[1172]: time="2024-02-09T08:42:49.082497504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:42:49.082664 env[1172]: time="2024-02-09T08:42:49.082504791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:42:49.082664 env[1172]: time="2024-02-09T08:42:49.082630576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/455c82f9c6466533247fe783fcb13a7e5f1c16fe48ed63ed47520477429a6085 pid=3622 runtime=io.containerd.runc.v2 Feb 9 08:42:49.089237 systemd[1]: Started cri-containerd-455c82f9c6466533247fe783fcb13a7e5f1c16fe48ed63ed47520477429a6085.scope. Feb 9 08:42:49.100908 systemd[1]: Started cri-containerd-d8e5ee1c6ee434db608febb72538e04b6aae77b9def8b48885cc7cda569dd013.scope. Feb 9 08:42:49.110497 env[1172]: time="2024-02-09T08:42:49.110471037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-p7ll4,Uid:6d4d49ef-b4f5-4cb3-b0d1-0843baedb4f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"455c82f9c6466533247fe783fcb13a7e5f1c16fe48ed63ed47520477429a6085\"" Feb 9 08:42:49.111640 env[1172]: time="2024-02-09T08:42:49.111625741Z" level=info msg="CreateContainer within sandbox \"455c82f9c6466533247fe783fcb13a7e5f1c16fe48ed63ed47520477429a6085\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 08:42:49.116289 env[1172]: time="2024-02-09T08:42:49.116243820Z" level=info msg="CreateContainer within sandbox \"455c82f9c6466533247fe783fcb13a7e5f1c16fe48ed63ed47520477429a6085\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bcd5cf3b0bb07a276422a56d673701cf8dbc59b3c2fe0d17958798c84b6e331d\"" Feb 9 08:42:49.116484 env[1172]: time="2024-02-09T08:42:49.116470797Z" level=info msg="StartContainer for \"bcd5cf3b0bb07a276422a56d673701cf8dbc59b3c2fe0d17958798c84b6e331d\"" Feb 9 08:42:49.124229 env[1172]: time="2024-02-09T08:42:49.124205515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-kwfhr,Uid:2ea858a4-78e9-424c-b13c-d92fbee0e7f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8e5ee1c6ee434db608febb72538e04b6aae77b9def8b48885cc7cda569dd013\"" Feb 9 08:42:49.125473 env[1172]: time="2024-02-09T08:42:49.125459614Z" level=info msg="CreateContainer within sandbox \"d8e5ee1c6ee434db608febb72538e04b6aae77b9def8b48885cc7cda569dd013\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 08:42:49.129399 env[1172]: time="2024-02-09T08:42:49.129355815Z" level=info msg="CreateContainer within sandbox \"d8e5ee1c6ee434db608febb72538e04b6aae77b9def8b48885cc7cda569dd013\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"122ef173ccdc6c17538dc79a38f3b9314fbd0121a38666f8d788bd83217cd575\"" Feb 9 08:42:49.129619 env[1172]: time="2024-02-09T08:42:49.129571075Z" level=info msg="StartContainer for \"122ef173ccdc6c17538dc79a38f3b9314fbd0121a38666f8d788bd83217cd575\"" Feb 9 08:42:49.136663 systemd[1]: Started cri-containerd-bcd5cf3b0bb07a276422a56d673701cf8dbc59b3c2fe0d17958798c84b6e331d.scope. Feb 9 08:42:49.149042 systemd[1]: Started cri-containerd-122ef173ccdc6c17538dc79a38f3b9314fbd0121a38666f8d788bd83217cd575.scope. Feb 9 08:42:49.161507 env[1172]: time="2024-02-09T08:42:49.161478989Z" level=info msg="StartContainer for \"bcd5cf3b0bb07a276422a56d673701cf8dbc59b3c2fe0d17958798c84b6e331d\" returns successfully" Feb 9 08:42:49.175475 env[1172]: time="2024-02-09T08:42:49.175418593Z" level=info msg="StartContainer for \"122ef173ccdc6c17538dc79a38f3b9314fbd0121a38666f8d788bd83217cd575\" returns successfully" Feb 9 08:42:49.595320 kubelet[2206]: I0209 08:42:49.595259 2206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-p7ll4" podStartSLOduration=18.595163117 podCreationTimestamp="2024-02-09 08:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:42:49.594195858 +0000 UTC m=+31.177621790" watchObservedRunningTime="2024-02-09 08:42:49.595163117 +0000 UTC m=+31.178589028" Feb 9 08:42:49.633463 kubelet[2206]: I0209 08:42:49.633394 2206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-kwfhr" podStartSLOduration=18.633302115 podCreationTimestamp="2024-02-09 08:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:42:49.633042518 +0000 UTC m=+31.216468499" watchObservedRunningTime="2024-02-09 08:42:49.633302115 +0000 UTC m=+31.216728031" Feb 9 08:44:43.442359 systemd[1]: Started sshd@5-139.178.90.113:22-170.64.196.239:39058.service. Feb 9 08:44:44.048045 sshd[3804]: Invalid user secret from 170.64.196.239 port 39058 Feb 9 08:44:44.201121 sshd[3804]: pam_faillock(sshd:auth): User unknown Feb 9 08:44:44.202307 sshd[3804]: pam_unix(sshd:auth): check pass; user unknown Feb 9 08:44:44.202454 sshd[3804]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.196.239 Feb 9 08:44:44.203396 sshd[3804]: pam_faillock(sshd:auth): User unknown Feb 9 08:44:46.182139 sshd[3804]: Failed password for invalid user secret from 170.64.196.239 port 39058 ssh2 Feb 9 08:44:46.605206 sshd[3804]: Connection closed by invalid user secret 170.64.196.239 port 39058 [preauth] Feb 9 08:44:46.607690 systemd[1]: sshd@5-139.178.90.113:22-170.64.196.239:39058.service: Deactivated successfully. Feb 9 08:46:50.430346 systemd[1]: Started sshd@6-139.178.90.113:22-218.92.0.52:8144.service. Feb 9 08:46:50.620649 sshd[3826]: Unable to negotiate with 218.92.0.52 port 8144: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 9 08:46:50.622577 systemd[1]: sshd@6-139.178.90.113:22-218.92.0.52:8144.service: Deactivated successfully. Feb 9 08:46:54.809008 systemd[1]: Started sshd@7-139.178.90.113:22-61.177.172.179:54033.service. 
Feb 9 08:46:55.739956 sshd[3830]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 9 08:46:57.702772 sshd[3830]: Failed password for root from 61.177.172.179 port 54033 ssh2 Feb 9 08:47:00.960910 sshd[3830]: Failed password for root from 61.177.172.179 port 54033 ssh2 Feb 9 08:47:03.550724 sshd[3830]: Failed password for root from 61.177.172.179 port 54033 ssh2 Feb 9 08:47:04.278185 sshd[3830]: Received disconnect from 61.177.172.179 port 54033:11: [preauth] Feb 9 08:47:04.278185 sshd[3830]: Disconnected from authenticating user root 61.177.172.179 port 54033 [preauth] Feb 9 08:47:04.278713 sshd[3830]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 9 08:47:04.280747 systemd[1]: sshd@7-139.178.90.113:22-61.177.172.179:54033.service: Deactivated successfully. Feb 9 08:47:04.433774 systemd[1]: Started sshd@8-139.178.90.113:22-61.177.172.179:10247.service. Feb 9 08:47:05.387236 sshd[3837]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 9 08:47:07.390923 sshd[3837]: Failed password for root from 61.177.172.179 port 10247 ssh2 Feb 9 08:47:08.237310 sshd[3837]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 9 08:47:09.277153 systemd[1]: Started sshd@9-139.178.90.113:22-218.92.0.34:44144.service. Feb 9 08:47:10.652233 sshd[3837]: Failed password for root from 61.177.172.179 port 10247 ssh2 Feb 9 08:47:10.862186 sshd[3840]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 08:47:12.578862 sshd[3837]: Failed password for root from 61.177.172.179 port 10247 ssh2 Feb 9 08:47:13.216889 sshd[3840]: Failed password for root from 218.92.0.34 port 44144 ssh2 Feb 9 08:47:13.936415 sshd[3837]: Received disconnect from 61.177.172.179 port 10247:11: [preauth] Feb 9 08:47:13.936415 sshd[3837]: Disconnected from authenticating user root 61.177.172.179 port 10247 [preauth] Feb 9 08:47:13.936976 sshd[3837]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 9 08:47:13.938993 systemd[1]: sshd@8-139.178.90.113:22-61.177.172.179:10247.service: Deactivated successfully. Feb 9 08:47:14.080330 systemd[1]: Started sshd@10-139.178.90.113:22-61.177.172.179:21708.service. Feb 9 08:47:15.015606 sshd[3845]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 9 08:47:15.527896 sshd[3840]: Failed password for root from 218.92.0.34 port 44144 ssh2 Feb 9 08:47:17.725807 sshd[3845]: Failed password for root from 61.177.172.179 port 21708 ssh2 Feb 9 08:47:19.504746 sshd[3840]: Failed password for root from 218.92.0.34 port 44144 ssh2 Feb 9 08:47:22.261197 sshd[3840]: Received disconnect from 218.92.0.34 port 44144:11: [preauth] Feb 9 08:47:22.261197 sshd[3840]: Disconnected from authenticating user root 218.92.0.34 port 44144 [preauth] Feb 9 08:47:22.261740 sshd[3840]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 08:47:22.264222 systemd[1]: sshd@9-139.178.90.113:22-218.92.0.34:44144.service: Deactivated successfully. Feb 9 08:47:22.383422 systemd[1]: Started sshd@11-139.178.90.113:22-218.92.0.34:56797.service. 
Feb 9 08:47:22.963773 sshd[3845]: Failed password for root from 61.177.172.179 port 21708 ssh2 Feb 9 08:47:23.387122 sshd[3853]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 08:47:24.859307 sshd[3853]: Failed password for root from 218.92.0.34 port 56797 ssh2 Feb 9 08:47:24.886155 sshd[3845]: Failed password for root from 61.177.172.179 port 21708 ssh2 Feb 9 08:47:26.261712 sshd[3845]: Received disconnect from 61.177.172.179 port 21708:11: [preauth] Feb 9 08:47:26.261712 sshd[3845]: Disconnected from authenticating user root 61.177.172.179 port 21708 [preauth] Feb 9 08:47:26.262240 sshd[3845]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 9 08:47:26.264233 systemd[1]: sshd@10-139.178.90.113:22-61.177.172.179:21708.service: Deactivated successfully. Feb 9 08:47:28.465793 sshd[3853]: Failed password for root from 218.92.0.34 port 56797 ssh2 Feb 9 08:47:31.067571 sshd[3853]: Failed password for root from 218.92.0.34 port 56797 ssh2 Feb 9 08:47:31.962696 sshd[3853]: Received disconnect from 218.92.0.34 port 56797:11: [preauth] Feb 9 08:47:31.962696 sshd[3853]: Disconnected from authenticating user root 218.92.0.34 port 56797 [preauth] Feb 9 08:47:31.963227 sshd[3853]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 08:47:31.965283 systemd[1]: sshd@11-139.178.90.113:22-218.92.0.34:56797.service: Deactivated successfully. Feb 9 08:47:32.139424 systemd[1]: Started sshd@12-139.178.90.113:22-218.92.0.34:56418.service. Feb 9 08:47:33.209722 sshd[3862]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 08:47:35.389114 sshd[3862]: Failed password for root from 218.92.0.34 port 56418 ssh2 Feb 9 08:47:37.669779 sshd[3862]: Failed password for root from 218.92.0.34 port 56418 ssh2 Feb 9 08:47:41.146676 sshd[3862]: Failed password for root from 218.92.0.34 port 56418 ssh2 Feb 9 08:47:41.815927 sshd[3862]: Received disconnect from 218.92.0.34 port 56418:11: [preauth] Feb 9 08:47:41.815927 sshd[3862]: Disconnected from authenticating user root 218.92.0.34 port 56418 [preauth] Feb 9 08:47:41.816437 sshd[3862]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 08:47:41.818479 systemd[1]: sshd@12-139.178.90.113:22-218.92.0.34:56418.service: Deactivated successfully. Feb 9 08:51:17.603095 systemd[1]: Started sshd@13-139.178.90.113:22-178.25.120.214:33058.service. Feb 9 08:51:17.682267 systemd[1]: Started sshd@14-139.178.90.113:22-178.25.120.214:33068.service. 
Feb 9 08:51:18.743639 sshd[3886]: Invalid user pi from 178.25.120.214 port 33058 Feb 9 08:51:18.797093 sshd[3888]: Invalid user pi from 178.25.120.214 port 33068 Feb 9 08:51:18.917983 sshd[3886]: pam_faillock(sshd:auth): User unknown Feb 9 08:51:18.919137 sshd[3886]: pam_unix(sshd:auth): check pass; user unknown Feb 9 08:51:18.919226 sshd[3886]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=178.25.120.214 Feb 9 08:51:18.920199 sshd[3886]: pam_faillock(sshd:auth): User unknown Feb 9 08:51:18.971397 sshd[3888]: pam_faillock(sshd:auth): User unknown Feb 9 08:51:18.972379 sshd[3888]: pam_unix(sshd:auth): check pass; user unknown Feb 9 08:51:18.972468 sshd[3888]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=178.25.120.214 Feb 9 08:51:18.973361 sshd[3888]: pam_faillock(sshd:auth): User unknown Feb 9 08:51:20.989127 sshd[3886]: Failed password for invalid user pi from 178.25.120.214 port 33058 ssh2 Feb 9 08:51:21.042100 sshd[3888]: Failed password for invalid user pi from 178.25.120.214 port 33068 ssh2 Feb 9 08:51:21.320814 sshd[3886]: Connection closed by invalid user pi 178.25.120.214 port 33058 [preauth] Feb 9 08:51:21.323283 systemd[1]: sshd@13-139.178.90.113:22-178.25.120.214:33058.service: Deactivated successfully. Feb 9 08:51:21.372559 sshd[3888]: Connection closed by invalid user pi 178.25.120.214 port 33068 [preauth] Feb 9 08:51:21.375077 systemd[1]: sshd@14-139.178.90.113:22-178.25.120.214:33068.service: Deactivated successfully. Feb 9 08:51:27.381279 systemd[1]: Started sshd@15-139.178.90.113:22-170.64.196.239:36502.service. Feb 9 08:51:27.982825 sshd[3898]: Invalid user access from 170.64.196.239 port 36502 Feb 9 08:51:28.137740 sshd[3898]: pam_faillock(sshd:auth): User unknown Feb 9 08:51:28.138699 sshd[3898]: pam_unix(sshd:auth): check pass; user unknown Feb 9 08:51:28.138791 sshd[3898]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.196.239 Feb 9 08:51:28.139781 sshd[3898]: pam_faillock(sshd:auth): User unknown Feb 9 08:51:30.248766 sshd[3898]: Failed password for invalid user access from 170.64.196.239 port 36502 ssh2 Feb 9 08:51:31.009360 sshd[3898]: Connection closed by invalid user access 170.64.196.239 port 36502 [preauth] Feb 9 08:51:31.011933 systemd[1]: sshd@15-139.178.90.113:22-170.64.196.239:36502.service: Deactivated successfully. Feb 9 08:52:24.555299 systemd[1]: Started sshd@16-139.178.90.113:22-61.177.172.136:18547.service. Feb 9 08:52:25.661116 sshd[3913]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root Feb 9 08:52:27.593800 sshd[3913]: Failed password for root from 61.177.172.136 port 18547 ssh2 Feb 9 08:52:30.878799 sshd[3913]: Failed password for root from 61.177.172.136 port 18547 ssh2 Feb 9 08:52:31.474105 systemd[1]: Started sshd@17-139.178.90.113:22-61.177.172.140:63127.service. Feb 9 08:52:32.495949 sshd[3916]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:52:33.899918 sshd[3913]: Failed password for root from 61.177.172.136 port 18547 ssh2 Feb 9 08:52:34.389057 sshd[3916]: Failed password for root from 61.177.172.140 port 63127 ssh2 Feb 9 08:52:34.863543 systemd[1]: Started sshd@18-139.178.90.113:22-61.177.172.136:26818.service. 
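The bursts above from 61.177.172.x, 218.92.0.x, 170.64.196.239 and 178.25.120.214 follow the classic brute-force shape: a pam_unix authentication failure, three "Failed password" attempts, a "PAM 2 more authentication failures" summary, then a preauth disconnect and a fresh connection on a new port. A quick way to quantify the pressure is to tally "Failed password" lines per source host from a journal dump; a minimal sketch (the file name sshd.log is a placeholder for a saved copy of this journal):

    import re
    from collections import Counter

    # Count "Failed password" events per source address in a saved sshd journal.
    FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+) port \d+")

    counts = Counter()
    with open("sshd.log") as log:
        for line in log:
            m = FAILED.search(line)
            if m:
                counts[m.group(1)] += 1

    for host, n in counts.most_common():
        print(f"{host}\t{n}")

Every one of these attempts fails (the only accepted logins later in the log are publickey), so counts like these are mainly useful as input to fail2ban-style rate limiting.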
Feb 9 08:52:34.902051 sshd[3913]: Received disconnect from 61.177.172.136 port 18547:11: [preauth] Feb 9 08:52:34.902051 sshd[3913]: Disconnected from authenticating user root 61.177.172.136 port 18547 [preauth] Feb 9 08:52:34.902260 sshd[3913]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root Feb 9 08:52:34.903020 systemd[1]: sshd@16-139.178.90.113:22-61.177.172.136:18547.service: Deactivated successfully. Feb 9 08:52:35.357652 sshd[3916]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 9 08:52:35.966693 sshd[3921]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root Feb 9 08:52:37.998336 sshd[3916]: Failed password for root from 61.177.172.140 port 63127 ssh2 Feb 9 08:52:38.606716 sshd[3921]: Failed password for root from 61.177.172.136 port 26818 ssh2 Feb 9 08:52:40.603863 sshd[3916]: Failed password for root from 61.177.172.140 port 63127 ssh2 Feb 9 08:52:41.079515 sshd[3916]: Received disconnect from 61.177.172.140 port 63127:11: [preauth] Feb 9 08:52:41.079515 sshd[3916]: Disconnected from authenticating user root 61.177.172.140 port 63127 [preauth] Feb 9 08:52:41.080043 sshd[3916]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:52:41.082087 systemd[1]: sshd@17-139.178.90.113:22-61.177.172.140:63127.service: Deactivated successfully. Feb 9 08:52:41.102736 sshd[3921]: Failed password for root from 61.177.172.136 port 26818 ssh2 Feb 9 08:52:41.231180 systemd[1]: Started sshd@19-139.178.90.113:22-61.177.172.140:61218.service. Feb 9 08:52:42.218306 sshd[3926]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:52:43.815729 sshd[3926]: Failed password for root from 61.177.172.140 port 61218 ssh2 Feb 9 08:52:44.131899 sshd[3921]: Failed password for root from 61.177.172.136 port 26818 ssh2 Feb 9 08:52:45.413829 sshd[3921]: Received disconnect from 61.177.172.136 port 26818:11: [preauth] Feb 9 08:52:45.413829 sshd[3921]: Disconnected from authenticating user root 61.177.172.136 port 26818 [preauth] Feb 9 08:52:45.414331 sshd[3921]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root Feb 9 08:52:45.416295 systemd[1]: sshd@18-139.178.90.113:22-61.177.172.136:26818.service: Deactivated successfully. Feb 9 08:52:47.418736 sshd[3926]: Failed password for root from 61.177.172.140 port 61218 ssh2 Feb 9 08:52:50.548071 sshd[3926]: Failed password for root from 61.177.172.140 port 61218 ssh2 Feb 9 08:52:50.584371 systemd[1]: Started sshd@20-139.178.90.113:22-61.177.172.136:30018.service. Feb 9 08:52:50.784175 sshd[3926]: Received disconnect from 61.177.172.140 port 61218:11: [preauth] Feb 9 08:52:50.784175 sshd[3926]: Disconnected from authenticating user root 61.177.172.140 port 61218 [preauth] Feb 9 08:52:50.784694 sshd[3926]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:52:50.786719 systemd[1]: sshd@19-139.178.90.113:22-61.177.172.140:61218.service: Deactivated successfully. Feb 9 08:52:50.945088 systemd[1]: Started sshd@21-139.178.90.113:22-61.177.172.140:62889.service. 
Feb 9 08:52:51.950421 sshd[3935]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:52:54.118745 sshd[3935]: Failed password for root from 61.177.172.140 port 62889 ssh2 Feb 9 08:52:56.722527 sshd[3935]: Failed password for root from 61.177.172.140 port 62889 ssh2 Feb 9 08:52:59.661176 sshd[3935]: Failed password for root from 61.177.172.140 port 62889 ssh2 Feb 9 08:53:00.525826 sshd[3935]: Received disconnect from 61.177.172.140 port 62889:11: [preauth] Feb 9 08:53:00.525826 sshd[3935]: Disconnected from authenticating user root 61.177.172.140 port 62889 [preauth] Feb 9 08:53:00.526387 sshd[3935]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:53:00.528397 systemd[1]: sshd@21-139.178.90.113:22-61.177.172.140:62889.service: Deactivated successfully. Feb 9 08:53:00.951770 systemd[1]: Started sshd@22-139.178.90.113:22-61.177.172.136:49483.service. Feb 9 08:53:02.840826 update_engine[1162]: I0209 08:53:02.840747 1162 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 08:53:02.840826 update_engine[1162]: I0209 08:53:02.840834 1162 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 08:53:02.842771 update_engine[1162]: I0209 08:53:02.842684 1162 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 08:53:02.843697 update_engine[1162]: I0209 08:53:02.843608 1162 omaha_request_params.cc:62] Current group set to lts Feb 9 08:53:02.843950 update_engine[1162]: I0209 08:53:02.843914 1162 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 08:53:02.843950 update_engine[1162]: I0209 08:53:02.843936 1162 update_attempter.cc:643] Scheduling an action processor start. 
Feb 9 08:53:02.844188 update_engine[1162]: I0209 08:53:02.843971 1162 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 08:53:02.844188 update_engine[1162]: I0209 08:53:02.844035 1162 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 08:53:02.844188 update_engine[1162]: I0209 08:53:02.844180 1162 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 08:53:02.844496 update_engine[1162]: I0209 08:53:02.844204 1162 omaha_request_action.cc:271] Request: Feb 9 08:53:02.844496 update_engine[1162]: [multi-line Omaha request XML body not preserved in this capture] Feb 9 08:53:02.844496 update_engine[1162]: I0209 08:53:02.844221 1162 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:53:02.845496 locksmithd[1190]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 08:53:02.847439 update_engine[1162]: I0209 08:53:02.847389 1162 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:53:02.847670 update_engine[1162]: E0209 08:53:02.847648 1162 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:53:02.847896 update_engine[1162]: I0209 08:53:02.847807 1162 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 08:53:12.828796 update_engine[1162]: I0209 08:53:12.828758 1162 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:53:12.829247 update_engine[1162]: I0209 08:53:12.828989 1162 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:53:12.829247 update_engine[1162]: E0209 08:53:12.829087 1162 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:53:12.829247 update_engine[1162]: I0209 08:53:12.829169 1162 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 08:53:16.319350 systemd[1]: Started sshd@23-139.178.90.113:22-61.177.172.136:43411.service. 
Feb 9 08:53:18.501935 sshd[3944]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root Feb 9 08:53:20.710770 sshd[3944]: Failed password for root from 61.177.172.136 port 43411 ssh2 Feb 9 08:53:22.830238 update_engine[1162]: I0209 08:53:22.830206 1162 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:53:22.830661 update_engine[1162]: I0209 08:53:22.830415 1162 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:53:22.830661 update_engine[1162]: E0209 08:53:22.830510 1162 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:53:22.830661 update_engine[1162]: I0209 08:53:22.830613 1162 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 08:53:24.005331 sshd[3944]: Failed password for root from 61.177.172.136 port 43411 ssh2 Feb 9 08:53:25.949703 sshd[3944]: Failed password for root from 61.177.172.136 port 43411 ssh2 Feb 9 08:53:27.119570 sshd[3944]: Received disconnect from 61.177.172.136 port 43411:11: [preauth] Feb 9 08:53:27.119570 sshd[3944]: Disconnected from authenticating user root 61.177.172.136 port 43411 [preauth] Feb 9 08:53:27.120113 sshd[3944]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root Feb 9 08:53:27.122633 systemd[1]: sshd@23-139.178.90.113:22-61.177.172.136:43411.service: Deactivated successfully. Feb 9 08:53:32.829756 update_engine[1162]: I0209 08:53:32.829689 1162 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:53:32.830161 update_engine[1162]: I0209 08:53:32.829932 1162 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:53:32.830161 update_engine[1162]: E0209 08:53:32.830020 1162 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:53:32.830161 update_engine[1162]: I0209 08:53:32.830085 1162 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 08:53:32.830161 update_engine[1162]: I0209 08:53:32.830092 1162 omaha_request_action.cc:621] Omaha request response: Feb 9 08:53:32.830161 update_engine[1162]: E0209 08:53:32.830155 1162 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830168 1162 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830170 1162 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830174 1162 update_attempter.cc:306] Processing Done. Feb 9 08:53:32.830385 update_engine[1162]: E0209 08:53:32.830185 1162 update_attempter.cc:619] Update failed. Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830189 1162 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830193 1162 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830197 1162 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830264 1162 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830286 1162 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830289 1162 omaha_request_action.cc:271] Request: Feb 9 08:53:32.830385 update_engine[1162]: [multi-line Omaha request XML body not preserved in this capture] Feb 9 08:53:32.830385 update_engine[1162]: I0209 08:53:32.830292 1162 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:53:32.830985 update_engine[1162]: I0209 08:53:32.830408 1162 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:53:32.830985 update_engine[1162]: E0209 08:53:32.830478 1162 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:53:32.830985 update_engine[1162]: I0209 08:53:32.830554 1162 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 08:53:32.830985 update_engine[1162]: I0209 08:53:32.830560 1162 omaha_request_action.cc:621] Omaha request response: Feb 9 08:53:32.830985 update_engine[1162]: I0209 08:53:32.830564 1162 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 08:53:32.830985 update_engine[1162]: I0209 08:53:32.830567 1162 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 08:53:32.830985 update_engine[1162]: I0209 08:53:32.830570 1162 update_attempter.cc:306] Processing Done. Feb 9 08:53:32.830985 update_engine[1162]: I0209 08:53:32.830574 1162 update_attempter.cc:310] Error event sent. Feb 9 08:53:32.830985 update_engine[1162]: I0209 08:53:32.830585 1162 update_check_scheduler.cc:74] Next update check in 47m31s Feb 9 08:53:32.831275 locksmithd[1190]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 08:53:32.831275 locksmithd[1190]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 08:54:49.063846 systemd[1]: Started sshd@24-139.178.90.113:22-141.98.11.90:41384.service. Feb 9 08:54:50.589636 sshd[3932]: Timeout before authentication for 61.177.172.136 port 30018 Feb 9 08:54:50.591084 systemd[1]: sshd@20-139.178.90.113:22-61.177.172.136:30018.service: Deactivated successfully. Feb 9 08:54:51.656019 sshd[3959]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.90 user=root Feb 9 08:54:53.965723 sshd[3959]: Failed password for root from 141.98.11.90 port 41384 ssh2 Feb 9 08:54:54.570871 sshd[3959]: Connection closed by authenticating user root 141.98.11.90 port 41384 [preauth] Feb 9 08:54:54.573319 systemd[1]: sshd@24-139.178.90.113:22-141.98.11.90:41384.service: Deactivated successfully. Feb 9 08:55:00.956786 sshd[3939]: Timeout before authentication for 61.177.172.136 port 49483 Feb 9 08:55:00.958274 systemd[1]: sshd@22-139.178.90.113:22-61.177.172.136:49483.service: Deactivated successfully. Feb 9 08:56:15.886945 systemd[1]: Started sshd@25-139.178.90.113:22-147.75.109.163:58490.service. 
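The update_engine exchange above is expected noise on this image: the Omaha target is the literal host name "disabled" (updates turned off), so every check fails at DNS with "Could not resolve host: disabled", retries three times at roughly 10-second intervals, reports the transfer error (code 2000, mapped to 37, kActionCodeOmahaErrorInHTTPResponse), and reschedules itself ("Next update check in 47m31s"). A rough Python sketch of that fetch-and-retry shape, not Flatcar's actual implementation (URL and timings mirror the log; the path is hypothetical):

    import time
    import urllib.request

    OMAHA_URL = "http://disabled/"   # host is literally "disabled", per the log

    def post_omaha_request(retries=3, delay=10.0):
        """One initial attempt plus `retries` retries, ~10 s apart, as logged."""
        for attempt in range(retries + 1):
            try:
                return urllib.request.urlopen(OMAHA_URL, timeout=5)
            except OSError as err:   # DNS failure surfaces as URLError, an OSError
                if attempt == retries:
                    raise RuntimeError("Omaha request network transfer failed") from err
                print(f"No HTTP response, retry {attempt + 1}")
                time.sleep(delay)

Because the failure is by design, locksmithd simply records the error event and returns to UPDATE_STATUS_IDLE, as the two locksmithd lines above show.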
Feb 9 08:56:15.922362 sshd[3973]: Accepted publickey for core from 147.75.109.163 port 58490 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:15.923092 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:15.925592 systemd-logind[1160]: New session 8 of user core. Feb 9 08:56:15.926106 systemd[1]: Started session-8.scope. Feb 9 08:56:16.018673 sshd[3973]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:16.020013 systemd[1]: sshd@25-139.178.90.113:22-147.75.109.163:58490.service: Deactivated successfully. Feb 9 08:56:16.020430 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 08:56:16.020870 systemd-logind[1160]: Session 8 logged out. Waiting for processes to exit. Feb 9 08:56:16.021371 systemd-logind[1160]: Removed session 8. Feb 9 08:56:21.028159 systemd[1]: Started sshd@26-139.178.90.113:22-147.75.109.163:58496.service. Feb 9 08:56:21.059773 sshd[4004]: Accepted publickey for core from 147.75.109.163 port 58496 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:21.060413 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:21.062712 systemd-logind[1160]: New session 9 of user core. Feb 9 08:56:21.063270 systemd[1]: Started session-9.scope. Feb 9 08:56:21.188177 sshd[4004]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:21.189665 systemd[1]: sshd@26-139.178.90.113:22-147.75.109.163:58496.service: Deactivated successfully. Feb 9 08:56:21.190103 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 08:56:21.190458 systemd-logind[1160]: Session 9 logged out. Waiting for processes to exit. Feb 9 08:56:21.191097 systemd-logind[1160]: Removed session 9. Feb 9 08:56:26.197573 systemd[1]: Started sshd@27-139.178.90.113:22-147.75.109.163:47728.service. Feb 9 08:56:26.229934 sshd[4030]: Accepted publickey for core from 147.75.109.163 port 47728 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:26.230852 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:26.234043 systemd-logind[1160]: New session 10 of user core. Feb 9 08:56:26.234896 systemd[1]: Started session-10.scope. Feb 9 08:56:26.320845 sshd[4030]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:26.322244 systemd[1]: sshd@27-139.178.90.113:22-147.75.109.163:47728.service: Deactivated successfully. Feb 9 08:56:26.322694 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 08:56:26.323071 systemd-logind[1160]: Session 10 logged out. Waiting for processes to exit. Feb 9 08:56:26.323435 systemd-logind[1160]: Removed session 10. Feb 9 08:56:28.328384 systemd[1]: Starting systemd-tmpfiles-clean.service... Feb 9 08:56:28.334211 systemd-tmpfiles[4057]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 08:56:28.334437 systemd-tmpfiles[4057]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 08:56:28.335134 systemd-tmpfiles[4057]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 08:56:28.344545 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Feb 9 08:56:28.344657 systemd[1]: Finished systemd-tmpfiles-clean.service. Feb 9 08:56:28.345914 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Feb 9 08:56:31.331078 systemd[1]: Started sshd@28-139.178.90.113:22-147.75.109.163:47740.service. 
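The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") are benign: two tmpfiles.d fragments claim the same path and the first one parsed wins. A small sketch that reports which fragments collide (simplified; real precedence is per file name, with /etc/tmpfiles.d shadowing /run/tmpfiles.d and /usr/lib/tmpfiles.d):

    from collections import defaultdict
    from pathlib import Path

    # Map each path to every tmpfiles.d fragment line that claims it.
    claims = defaultdict(list)
    for base in ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"):
        for conf in sorted(Path(base).glob("*.conf")):
            for lineno, line in enumerate(conf.read_text().splitlines(), start=1):
                fields = line.split()
                if len(fields) >= 2 and not fields[0].startswith("#"):
                    claims[fields[1]].append(f"{conf}:{lineno}")

    for path, sources in claims.items():
        if len(sources) > 1:
            print(path, "->", ", ".join(sources))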
Feb 9 08:56:31.362748 sshd[4060]: Accepted publickey for core from 147.75.109.163 port 47740 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:31.363618 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:31.366538 systemd-logind[1160]: New session 11 of user core. Feb 9 08:56:31.367182 systemd[1]: Started session-11.scope. Feb 9 08:56:31.454113 sshd[4060]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:31.456035 systemd[1]: sshd@28-139.178.90.113:22-147.75.109.163:47740.service: Deactivated successfully. Feb 9 08:56:31.456395 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 08:56:31.456830 systemd-logind[1160]: Session 11 logged out. Waiting for processes to exit. Feb 9 08:56:31.457407 systemd[1]: Started sshd@29-139.178.90.113:22-147.75.109.163:47746.service. Feb 9 08:56:31.457896 systemd-logind[1160]: Removed session 11. Feb 9 08:56:31.489842 sshd[4086]: Accepted publickey for core from 147.75.109.163 port 47746 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:31.490716 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:31.493541 systemd-logind[1160]: New session 12 of user core. Feb 9 08:56:31.494191 systemd[1]: Started session-12.scope. Feb 9 08:56:31.896023 sshd[4086]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:31.898033 systemd[1]: sshd@29-139.178.90.113:22-147.75.109.163:47746.service: Deactivated successfully. Feb 9 08:56:31.898435 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 08:56:31.898812 systemd-logind[1160]: Session 12 logged out. Waiting for processes to exit. Feb 9 08:56:31.899441 systemd[1]: Started sshd@30-139.178.90.113:22-147.75.109.163:47756.service. Feb 9 08:56:31.899932 systemd-logind[1160]: Removed session 12. Feb 9 08:56:31.931161 sshd[4113]: Accepted publickey for core from 147.75.109.163 port 47756 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:31.931941 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:31.934318 systemd-logind[1160]: New session 13 of user core. Feb 9 08:56:31.934785 systemd[1]: Started session-13.scope. Feb 9 08:56:32.040000 sshd[4113]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:32.041461 systemd[1]: sshd@30-139.178.90.113:22-147.75.109.163:47756.service: Deactivated successfully. Feb 9 08:56:32.041961 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 08:56:32.042332 systemd-logind[1160]: Session 13 logged out. Waiting for processes to exit. Feb 9 08:56:32.042908 systemd-logind[1160]: Removed session 13. Feb 9 08:56:37.050105 systemd[1]: Started sshd@31-139.178.90.113:22-147.75.109.163:35982.service. Feb 9 08:56:37.081769 sshd[4142]: Accepted publickey for core from 147.75.109.163 port 35982 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:37.082674 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:37.085783 systemd-logind[1160]: New session 14 of user core. Feb 9 08:56:37.086473 systemd[1]: Started session-14.scope. Feb 9 08:56:37.183023 sshd[4142]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:37.188603 systemd[1]: sshd@31-139.178.90.113:22-147.75.109.163:35982.service: Deactivated successfully. Feb 9 08:56:37.190447 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 08:56:37.192240 systemd-logind[1160]: Session 14 logged out. 
Waiting for processes to exit. Feb 9 08:56:37.194498 systemd-logind[1160]: Removed session 14. Feb 9 08:56:42.191603 systemd[1]: Started sshd@32-139.178.90.113:22-147.75.109.163:35988.service. Feb 9 08:56:42.223577 sshd[4167]: Accepted publickey for core from 147.75.109.163 port 35988 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:42.224422 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:42.227251 systemd-logind[1160]: New session 15 of user core. Feb 9 08:56:42.227843 systemd[1]: Started session-15.scope. Feb 9 08:56:42.319737 sshd[4167]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:42.321250 systemd[1]: sshd@32-139.178.90.113:22-147.75.109.163:35988.service: Deactivated successfully. Feb 9 08:56:42.321695 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 08:56:42.322094 systemd-logind[1160]: Session 15 logged out. Waiting for processes to exit. Feb 9 08:56:42.322495 systemd-logind[1160]: Removed session 15. Feb 9 08:56:47.329738 systemd[1]: Started sshd@33-139.178.90.113:22-147.75.109.163:48126.service. Feb 9 08:56:47.361810 sshd[4193]: Accepted publickey for core from 147.75.109.163 port 48126 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:47.362658 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:47.365492 systemd-logind[1160]: New session 16 of user core. Feb 9 08:56:47.366059 systemd[1]: Started session-16.scope. Feb 9 08:56:47.456005 sshd[4193]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:47.457466 systemd[1]: sshd@33-139.178.90.113:22-147.75.109.163:48126.service: Deactivated successfully. Feb 9 08:56:47.457887 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 08:56:47.458229 systemd-logind[1160]: Session 16 logged out. Waiting for processes to exit. Feb 9 08:56:47.458733 systemd-logind[1160]: Removed session 16. Feb 9 08:56:52.465282 systemd[1]: Started sshd@34-139.178.90.113:22-147.75.109.163:48130.service. Feb 9 08:56:52.496926 sshd[4218]: Accepted publickey for core from 147.75.109.163 port 48130 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:52.497785 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:52.500813 systemd-logind[1160]: New session 17 of user core. Feb 9 08:56:52.501386 systemd[1]: Started session-17.scope. Feb 9 08:56:52.588695 sshd[4218]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:52.590209 systemd[1]: sshd@34-139.178.90.113:22-147.75.109.163:48130.service: Deactivated successfully. Feb 9 08:56:52.590668 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 08:56:52.591034 systemd-logind[1160]: Session 17 logged out. Waiting for processes to exit. Feb 9 08:56:52.591475 systemd-logind[1160]: Removed session 17. Feb 9 08:56:57.598168 systemd[1]: Started sshd@35-139.178.90.113:22-147.75.109.163:33638.service. Feb 9 08:56:57.630375 sshd[4243]: Accepted publickey for core from 147.75.109.163 port 33638 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:56:57.631304 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:57.634401 systemd-logind[1160]: New session 18 of user core. Feb 9 08:56:57.635111 systemd[1]: Started session-18.scope. 
Feb 9 08:56:57.726301 sshd[4243]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:57.727852 systemd[1]: sshd@35-139.178.90.113:22-147.75.109.163:33638.service: Deactivated successfully. Feb 9 08:56:57.728297 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 08:56:57.728632 systemd-logind[1160]: Session 18 logged out. Waiting for processes to exit. Feb 9 08:56:57.729030 systemd-logind[1160]: Removed session 18. Feb 9 08:57:02.736362 systemd[1]: Started sshd@36-139.178.90.113:22-147.75.109.163:33650.service. Feb 9 08:57:02.767914 sshd[4272]: Accepted publickey for core from 147.75.109.163 port 33650 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:02.768757 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:02.771860 systemd-logind[1160]: New session 19 of user core. Feb 9 08:57:02.772499 systemd[1]: Started session-19.scope. Feb 9 08:57:02.899622 sshd[4272]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:02.901129 systemd[1]: sshd@36-139.178.90.113:22-147.75.109.163:33650.service: Deactivated successfully. Feb 9 08:57:02.901555 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 08:57:02.901904 systemd-logind[1160]: Session 19 logged out. Waiting for processes to exit. Feb 9 08:57:02.902354 systemd-logind[1160]: Removed session 19. Feb 9 08:57:07.909340 systemd[1]: Started sshd@37-139.178.90.113:22-147.75.109.163:40800.service. Feb 9 08:57:07.941064 sshd[4297]: Accepted publickey for core from 147.75.109.163 port 40800 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:07.941891 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:07.944910 systemd-logind[1160]: New session 20 of user core. Feb 9 08:57:07.945514 systemd[1]: Started session-20.scope. Feb 9 08:57:08.036234 sshd[4297]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:08.037842 systemd[1]: sshd@37-139.178.90.113:22-147.75.109.163:40800.service: Deactivated successfully. Feb 9 08:57:08.038297 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 08:57:08.038686 systemd-logind[1160]: Session 20 logged out. Waiting for processes to exit. Feb 9 08:57:08.039266 systemd-logind[1160]: Removed session 20. Feb 9 08:57:13.046132 systemd[1]: Started sshd@38-139.178.90.113:22-147.75.109.163:40808.service. Feb 9 08:57:13.077849 sshd[4322]: Accepted publickey for core from 147.75.109.163 port 40808 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:13.078801 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:13.081837 systemd-logind[1160]: New session 21 of user core. Feb 9 08:57:13.082481 systemd[1]: Started session-21.scope. Feb 9 08:57:13.168595 sshd[4322]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:13.170095 systemd[1]: sshd@38-139.178.90.113:22-147.75.109.163:40808.service: Deactivated successfully. Feb 9 08:57:13.170524 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 08:57:13.170892 systemd-logind[1160]: Session 21 logged out. Waiting for processes to exit. Feb 9 08:57:13.171374 systemd-logind[1160]: Removed session 21. Feb 9 08:57:18.178808 systemd[1]: Started sshd@39-139.178.90.113:22-147.75.109.163:49650.service. 
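From 08:56:15 onward the log settles into a steady rhythm: every five seconds or so a connection from 147.75.109.163 is accepted by publickey for user core, opens a session, and closes it well under a second later, which looks like an automated probe rather than interactive use. Pairing the pam_unix open/close events by sshd PID gives per-session durations; a minimal sketch (sshd.log is again a placeholder for a saved copy of this journal, and the year 2024 is assumed because the short timestamp format omits it):

    import re
    from datetime import datetime

    # Pair "session opened"/"session closed" pam_unix events by sshd PID.
    EVENT = re.compile(r"^(\w+ +\d+ [\d:.]+) sshd\[(\d+)\]: "
                       r"pam_unix\(sshd:session\): session (opened|closed)")

    opened = {}
    with open("sshd.log") as log:
        for line in log:
            m = EVENT.match(line)
            if not m:
                continue
            when = datetime.strptime("2024 " + m.group(1), "%Y %b %d %H:%M:%S.%f")
            if m.group(3) == "opened":
                opened[m.group(2)] = when
            elif m.group(2) in opened:
                dur = (when - opened.pop(m.group(2))).total_seconds()
                print(f"pid {m.group(2)}: {dur:.1f} s")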
Feb 9 08:57:18.210147 sshd[4347]: Accepted publickey for core from 147.75.109.163 port 49650 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:18.210752 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:18.213163 systemd-logind[1160]: New session 22 of user core. Feb 9 08:57:18.213612 systemd[1]: Started session-22.scope. Feb 9 08:57:18.304164 sshd[4347]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:18.305684 systemd[1]: sshd@39-139.178.90.113:22-147.75.109.163:49650.service: Deactivated successfully. Feb 9 08:57:18.306146 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 08:57:18.306467 systemd-logind[1160]: Session 22 logged out. Waiting for processes to exit. Feb 9 08:57:18.307059 systemd-logind[1160]: Removed session 22. Feb 9 08:57:23.313417 systemd[1]: Started sshd@40-139.178.90.113:22-147.75.109.163:49652.service. Feb 9 08:57:23.344867 sshd[4374]: Accepted publickey for core from 147.75.109.163 port 49652 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:23.345619 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:23.348233 systemd-logind[1160]: New session 23 of user core. Feb 9 08:57:23.348790 systemd[1]: Started session-23.scope. Feb 9 08:57:23.438875 sshd[4374]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:23.440316 systemd[1]: sshd@40-139.178.90.113:22-147.75.109.163:49652.service: Deactivated successfully. Feb 9 08:57:23.440750 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 08:57:23.441121 systemd-logind[1160]: Session 23 logged out. Waiting for processes to exit. Feb 9 08:57:23.441618 systemd-logind[1160]: Removed session 23. Feb 9 08:57:28.448399 systemd[1]: Started sshd@41-139.178.90.113:22-147.75.109.163:55280.service. Feb 9 08:57:28.479745 sshd[4399]: Accepted publickey for core from 147.75.109.163 port 55280 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:28.480623 sshd[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:28.483790 systemd-logind[1160]: New session 24 of user core. Feb 9 08:57:28.484420 systemd[1]: Started session-24.scope. Feb 9 08:57:28.573874 sshd[4399]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:28.575318 systemd[1]: sshd@41-139.178.90.113:22-147.75.109.163:55280.service: Deactivated successfully. Feb 9 08:57:28.575772 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 08:57:28.576163 systemd-logind[1160]: Session 24 logged out. Waiting for processes to exit. Feb 9 08:57:28.576688 systemd-logind[1160]: Removed session 24. Feb 9 08:57:33.582471 systemd[1]: Started sshd@42-139.178.90.113:22-147.75.109.163:55290.service. Feb 9 08:57:33.613822 sshd[4426]: Accepted publickey for core from 147.75.109.163 port 55290 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:33.614627 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:33.617594 systemd-logind[1160]: New session 25 of user core. Feb 9 08:57:33.618340 systemd[1]: Started session-25.scope. Feb 9 08:57:33.709063 sshd[4426]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:33.710642 systemd[1]: sshd@42-139.178.90.113:22-147.75.109.163:55290.service: Deactivated successfully. Feb 9 08:57:33.711093 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 08:57:33.711447 systemd-logind[1160]: Session 25 logged out. 
Waiting for processes to exit. Feb 9 08:57:33.712059 systemd-logind[1160]: Removed session 25. Feb 9 08:57:38.718001 systemd[1]: Started sshd@43-139.178.90.113:22-147.75.109.163:53674.service. Feb 9 08:57:38.750227 sshd[4452]: Accepted publickey for core from 147.75.109.163 port 53674 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:38.751156 sshd[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:38.754332 systemd-logind[1160]: New session 26 of user core. Feb 9 08:57:38.755016 systemd[1]: Started session-26.scope. Feb 9 08:57:38.843357 sshd[4452]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:38.844828 systemd[1]: sshd@43-139.178.90.113:22-147.75.109.163:53674.service: Deactivated successfully. Feb 9 08:57:38.845250 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 08:57:38.845604 systemd-logind[1160]: Session 26 logged out. Waiting for processes to exit. Feb 9 08:57:38.846214 systemd-logind[1160]: Removed session 26. Feb 9 08:57:43.853152 systemd[1]: Started sshd@44-139.178.90.113:22-147.75.109.163:53688.service. Feb 9 08:57:43.884576 sshd[4477]: Accepted publickey for core from 147.75.109.163 port 53688 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:43.885413 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:43.888336 systemd-logind[1160]: New session 27 of user core. Feb 9 08:57:43.888939 systemd[1]: Started session-27.scope. Feb 9 08:57:43.978647 sshd[4477]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:43.980363 systemd[1]: sshd@44-139.178.90.113:22-147.75.109.163:53688.service: Deactivated successfully. Feb 9 08:57:43.980871 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 08:57:43.981341 systemd-logind[1160]: Session 27 logged out. Waiting for processes to exit. Feb 9 08:57:43.982081 systemd-logind[1160]: Removed session 27. Feb 9 08:57:48.988036 systemd[1]: Started sshd@45-139.178.90.113:22-147.75.109.163:54216.service. Feb 9 08:57:49.019337 sshd[4502]: Accepted publickey for core from 147.75.109.163 port 54216 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:49.020193 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:49.023227 systemd-logind[1160]: New session 28 of user core. Feb 9 08:57:49.023827 systemd[1]: Started session-28.scope. Feb 9 08:57:49.110660 sshd[4502]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:49.112170 systemd[1]: sshd@45-139.178.90.113:22-147.75.109.163:54216.service: Deactivated successfully. Feb 9 08:57:49.112622 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 08:57:49.113080 systemd-logind[1160]: Session 28 logged out. Waiting for processes to exit. Feb 9 08:57:49.113479 systemd-logind[1160]: Removed session 28. Feb 9 08:57:54.113620 systemd[1]: Started sshd@46-139.178.90.113:22-147.75.109.163:54220.service. Feb 9 08:57:54.145939 sshd[4527]: Accepted publickey for core from 147.75.109.163 port 54220 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:54.146798 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:54.149737 systemd-logind[1160]: New session 29 of user core. Feb 9 08:57:54.150467 systemd[1]: Started session-29.scope. 
Feb 9 08:57:54.239548 sshd[4527]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:54.241151 systemd[1]: sshd@46-139.178.90.113:22-147.75.109.163:54220.service: Deactivated successfully. Feb 9 08:57:54.241615 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 08:57:54.242040 systemd-logind[1160]: Session 29 logged out. Waiting for processes to exit. Feb 9 08:57:54.242502 systemd-logind[1160]: Removed session 29. Feb 9 08:57:59.248921 systemd[1]: Started sshd@47-139.178.90.113:22-147.75.109.163:46592.service. Feb 9 08:57:59.286850 sshd[4553]: Accepted publickey for core from 147.75.109.163 port 46592 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:57:59.287533 sshd[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:59.289944 systemd-logind[1160]: New session 30 of user core. Feb 9 08:57:59.290402 systemd[1]: Started session-30.scope. Feb 9 08:57:59.377744 sshd[4553]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:59.379199 systemd[1]: sshd@47-139.178.90.113:22-147.75.109.163:46592.service: Deactivated successfully. Feb 9 08:57:59.379627 systemd[1]: session-30.scope: Deactivated successfully. Feb 9 08:57:59.379971 systemd-logind[1160]: Session 30 logged out. Waiting for processes to exit. Feb 9 08:57:59.380389 systemd-logind[1160]: Removed session 30. Feb 9 08:58:04.387047 systemd[1]: Started sshd@48-139.178.90.113:22-147.75.109.163:36896.service. Feb 9 08:58:04.419356 sshd[4580]: Accepted publickey for core from 147.75.109.163 port 36896 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:04.422405 sshd[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:04.432028 systemd-logind[1160]: New session 31 of user core. Feb 9 08:58:04.434628 systemd[1]: Started session-31.scope. Feb 9 08:58:04.539073 sshd[4580]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:04.540610 systemd[1]: sshd@48-139.178.90.113:22-147.75.109.163:36896.service: Deactivated successfully. Feb 9 08:58:04.541029 systemd[1]: session-31.scope: Deactivated successfully. Feb 9 08:58:04.541362 systemd-logind[1160]: Session 31 logged out. Waiting for processes to exit. Feb 9 08:58:04.542026 systemd-logind[1160]: Removed session 31. Feb 9 08:58:09.548589 systemd[1]: Started sshd@49-139.178.90.113:22-147.75.109.163:36902.service. Feb 9 08:58:09.579712 sshd[4606]: Accepted publickey for core from 147.75.109.163 port 36902 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:09.580539 sshd[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:09.583313 systemd-logind[1160]: New session 32 of user core. Feb 9 08:58:09.584001 systemd[1]: Started session-32.scope. Feb 9 08:58:09.672686 sshd[4606]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:09.674157 systemd[1]: sshd@49-139.178.90.113:22-147.75.109.163:36902.service: Deactivated successfully. Feb 9 08:58:09.674590 systemd[1]: session-32.scope: Deactivated successfully. Feb 9 08:58:09.675041 systemd-logind[1160]: Session 32 logged out. Waiting for processes to exit. Feb 9 08:58:09.675475 systemd-logind[1160]: Removed session 32. Feb 9 08:58:12.849160 systemd[1]: Started sshd@50-139.178.90.113:22-170.64.196.239:33906.service. 
Feb 9 08:58:13.444151 sshd[4630]: Invalid user activemq from 170.64.196.239 port 33906 Feb 9 08:58:13.598680 sshd[4630]: pam_faillock(sshd:auth): User unknown Feb 9 08:58:13.599676 sshd[4630]: pam_unix(sshd:auth): check pass; user unknown Feb 9 08:58:13.599765 sshd[4630]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.196.239 Feb 9 08:58:13.600872 sshd[4630]: pam_faillock(sshd:auth): User unknown Feb 9 08:58:14.681786 systemd[1]: Started sshd@51-139.178.90.113:22-147.75.109.163:57030.service. Feb 9 08:58:14.712814 sshd[4633]: Accepted publickey for core from 147.75.109.163 port 57030 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:14.713688 sshd[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:14.716417 systemd-logind[1160]: New session 33 of user core. Feb 9 08:58:14.717090 systemd[1]: Started session-33.scope. Feb 9 08:58:14.805974 sshd[4633]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:14.807510 systemd[1]: sshd@51-139.178.90.113:22-147.75.109.163:57030.service: Deactivated successfully. Feb 9 08:58:14.807937 systemd[1]: session-33.scope: Deactivated successfully. Feb 9 08:58:14.808342 systemd-logind[1160]: Session 33 logged out. Waiting for processes to exit. Feb 9 08:58:14.808956 systemd-logind[1160]: Removed session 33. Feb 9 08:58:15.307770 sshd[4630]: Failed password for invalid user activemq from 170.64.196.239 port 33906 ssh2 Feb 9 08:58:16.329557 sshd[4630]: Connection closed by invalid user activemq 170.64.196.239 port 33906 [preauth] Feb 9 08:58:16.332207 systemd[1]: sshd@50-139.178.90.113:22-170.64.196.239:33906.service: Deactivated successfully. Feb 9 08:58:19.815844 systemd[1]: Started sshd@52-139.178.90.113:22-147.75.109.163:57044.service. Feb 9 08:58:19.846824 sshd[4660]: Accepted publickey for core from 147.75.109.163 port 57044 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:19.847652 sshd[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:19.850338 systemd-logind[1160]: New session 34 of user core. Feb 9 08:58:19.851018 systemd[1]: Started session-34.scope. Feb 9 08:58:19.938521 sshd[4660]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:19.940026 systemd[1]: sshd@52-139.178.90.113:22-147.75.109.163:57044.service: Deactivated successfully. Feb 9 08:58:19.940478 systemd[1]: session-34.scope: Deactivated successfully. Feb 9 08:58:19.940948 systemd-logind[1160]: Session 34 logged out. Waiting for processes to exit. Feb 9 08:58:19.941473 systemd-logind[1160]: Removed session 34. Feb 9 08:58:24.947751 systemd[1]: Started sshd@53-139.178.90.113:22-147.75.109.163:48632.service. Feb 9 08:58:24.978780 sshd[4685]: Accepted publickey for core from 147.75.109.163 port 48632 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:24.979667 sshd[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:24.982537 systemd-logind[1160]: New session 35 of user core. Feb 9 08:58:24.983171 systemd[1]: Started session-35.scope. Feb 9 08:58:25.070686 sshd[4685]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:25.072182 systemd[1]: sshd@53-139.178.90.113:22-147.75.109.163:48632.service: Deactivated successfully. Feb 9 08:58:25.072606 systemd[1]: session-35.scope: Deactivated successfully. Feb 9 08:58:25.073027 systemd-logind[1160]: Session 35 logged out. Waiting for processes to exit. 
Feb 9 08:58:25.073475 systemd-logind[1160]: Removed session 35. Feb 9 08:58:30.080775 systemd[1]: Started sshd@54-139.178.90.113:22-147.75.109.163:48638.service. Feb 9 08:58:30.112043 sshd[4710]: Accepted publickey for core from 147.75.109.163 port 48638 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:30.112909 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:30.116084 systemd-logind[1160]: New session 36 of user core. Feb 9 08:58:30.116676 systemd[1]: Started session-36.scope. Feb 9 08:58:30.206473 sshd[4710]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:30.208048 systemd[1]: sshd@54-139.178.90.113:22-147.75.109.163:48638.service: Deactivated successfully. Feb 9 08:58:30.208464 systemd[1]: session-36.scope: Deactivated successfully. Feb 9 08:58:30.208917 systemd-logind[1160]: Session 36 logged out. Waiting for processes to exit. Feb 9 08:58:30.209401 systemd-logind[1160]: Removed session 36. Feb 9 08:58:35.215797 systemd[1]: Started sshd@55-139.178.90.113:22-147.75.109.163:47444.service. Feb 9 08:58:35.246943 sshd[4737]: Accepted publickey for core from 147.75.109.163 port 47444 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:35.247785 sshd[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:35.250848 systemd-logind[1160]: New session 37 of user core. Feb 9 08:58:35.251413 systemd[1]: Started session-37.scope. Feb 9 08:58:35.339998 sshd[4737]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:35.341257 systemd[1]: sshd@55-139.178.90.113:22-147.75.109.163:47444.service: Deactivated successfully. Feb 9 08:58:35.341696 systemd[1]: session-37.scope: Deactivated successfully. Feb 9 08:58:35.342114 systemd-logind[1160]: Session 37 logged out. Waiting for processes to exit. Feb 9 08:58:35.342607 systemd-logind[1160]: Removed session 37. Feb 9 08:58:40.349094 systemd[1]: Started sshd@56-139.178.90.113:22-147.75.109.163:47458.service. Feb 9 08:58:40.380780 sshd[4762]: Accepted publickey for core from 147.75.109.163 port 47458 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:40.381696 sshd[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:40.384943 systemd-logind[1160]: New session 38 of user core. Feb 9 08:58:40.385593 systemd[1]: Started session-38.scope. Feb 9 08:58:40.477060 sshd[4762]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:40.478419 systemd[1]: sshd@56-139.178.90.113:22-147.75.109.163:47458.service: Deactivated successfully. Feb 9 08:58:40.478861 systemd[1]: session-38.scope: Deactivated successfully. Feb 9 08:58:40.479287 systemd-logind[1160]: Session 38 logged out. Waiting for processes to exit. Feb 9 08:58:40.479932 systemd-logind[1160]: Removed session 38. Feb 9 08:58:45.486823 systemd[1]: Started sshd@57-139.178.90.113:22-147.75.109.163:48848.service. Feb 9 08:58:45.517925 sshd[4787]: Accepted publickey for core from 147.75.109.163 port 48848 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:45.518763 sshd[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:45.521600 systemd-logind[1160]: New session 39 of user core. Feb 9 08:58:45.522256 systemd[1]: Started session-39.scope. 
Feb 9 08:58:45.609356 sshd[4787]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:45.610903 systemd[1]: sshd@57-139.178.90.113:22-147.75.109.163:48848.service: Deactivated successfully. Feb 9 08:58:45.611359 systemd[1]: session-39.scope: Deactivated successfully. Feb 9 08:58:45.611838 systemd-logind[1160]: Session 39 logged out. Waiting for processes to exit. Feb 9 08:58:45.612367 systemd-logind[1160]: Removed session 39. Feb 9 08:58:50.619147 systemd[1]: Started sshd@58-139.178.90.113:22-147.75.109.163:48858.service. Feb 9 08:58:50.650811 sshd[4812]: Accepted publickey for core from 147.75.109.163 port 48858 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:50.651714 sshd[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:50.654986 systemd-logind[1160]: New session 40 of user core. Feb 9 08:58:50.655651 systemd[1]: Started session-40.scope. Feb 9 08:58:50.743880 sshd[4812]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:50.745255 systemd[1]: sshd@58-139.178.90.113:22-147.75.109.163:48858.service: Deactivated successfully. Feb 9 08:58:50.745689 systemd[1]: session-40.scope: Deactivated successfully. Feb 9 08:58:50.746101 systemd-logind[1160]: Session 40 logged out. Waiting for processes to exit. Feb 9 08:58:50.746521 systemd-logind[1160]: Removed session 40. Feb 9 08:58:55.753243 systemd[1]: Started sshd@59-139.178.90.113:22-147.75.109.163:34924.service. Feb 9 08:58:55.784744 sshd[4837]: Accepted publickey for core from 147.75.109.163 port 34924 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:58:55.785376 sshd[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:58:55.787927 systemd-logind[1160]: New session 41 of user core. Feb 9 08:58:55.788439 systemd[1]: Started session-41.scope. Feb 9 08:58:55.873495 sshd[4837]: pam_unix(sshd:session): session closed for user core Feb 9 08:58:55.875034 systemd[1]: sshd@59-139.178.90.113:22-147.75.109.163:34924.service: Deactivated successfully. Feb 9 08:58:55.875473 systemd[1]: session-41.scope: Deactivated successfully. Feb 9 08:58:55.875922 systemd-logind[1160]: Session 41 logged out. Waiting for processes to exit. Feb 9 08:58:55.876413 systemd-logind[1160]: Removed session 41. Feb 9 08:59:00.883441 systemd[1]: Started sshd@60-139.178.90.113:22-147.75.109.163:34926.service. Feb 9 08:59:00.915210 sshd[4862]: Accepted publickey for core from 147.75.109.163 port 34926 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:00.916101 sshd[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:00.919269 systemd-logind[1160]: New session 42 of user core. Feb 9 08:59:00.919886 systemd[1]: Started session-42.scope. Feb 9 08:59:01.009417 sshd[4862]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:01.010966 systemd[1]: sshd@60-139.178.90.113:22-147.75.109.163:34926.service: Deactivated successfully. Feb 9 08:59:01.011401 systemd[1]: session-42.scope: Deactivated successfully. Feb 9 08:59:01.011800 systemd-logind[1160]: Session 42 logged out. Waiting for processes to exit. Feb 9 08:59:01.012388 systemd-logind[1160]: Removed session 42. Feb 9 08:59:06.018717 systemd[1]: Started sshd@61-139.178.90.113:22-147.75.109.163:38712.service. 
Feb 9 08:59:06.049826 sshd[4889]: Accepted publickey for core from 147.75.109.163 port 38712 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:06.050713 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:06.053740 systemd-logind[1160]: New session 43 of user core. Feb 9 08:59:06.054370 systemd[1]: Started session-43.scope. Feb 9 08:59:06.140892 sshd[4889]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:06.142361 systemd[1]: sshd@61-139.178.90.113:22-147.75.109.163:38712.service: Deactivated successfully. Feb 9 08:59:06.142814 systemd[1]: session-43.scope: Deactivated successfully. Feb 9 08:59:06.143152 systemd-logind[1160]: Session 43 logged out. Waiting for processes to exit. Feb 9 08:59:06.143629 systemd-logind[1160]: Removed session 43. Feb 9 08:59:11.145233 systemd[1]: Started sshd@62-139.178.90.113:22-147.75.109.163:38714.service. Feb 9 08:59:11.179000 sshd[4914]: Accepted publickey for core from 147.75.109.163 port 38714 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:11.179835 sshd[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:11.182634 systemd-logind[1160]: New session 44 of user core. Feb 9 08:59:11.183270 systemd[1]: Started session-44.scope. Feb 9 08:59:11.274620 sshd[4914]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:11.276131 systemd[1]: sshd@62-139.178.90.113:22-147.75.109.163:38714.service: Deactivated successfully. Feb 9 08:59:11.276601 systemd[1]: session-44.scope: Deactivated successfully. Feb 9 08:59:11.276956 systemd-logind[1160]: Session 44 logged out. Waiting for processes to exit. Feb 9 08:59:11.277414 systemd-logind[1160]: Removed session 44. Feb 9 08:59:16.285705 systemd[1]: Started sshd@63-139.178.90.113:22-147.75.109.163:47798.service. Feb 9 08:59:16.320911 sshd[4939]: Accepted publickey for core from 147.75.109.163 port 47798 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:16.321700 sshd[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:16.324441 systemd-logind[1160]: New session 45 of user core. Feb 9 08:59:16.325010 systemd[1]: Started session-45.scope. Feb 9 08:59:16.413531 sshd[4939]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:16.415081 systemd[1]: sshd@63-139.178.90.113:22-147.75.109.163:47798.service: Deactivated successfully. Feb 9 08:59:16.415495 systemd[1]: session-45.scope: Deactivated successfully. Feb 9 08:59:16.415935 systemd-logind[1160]: Session 45 logged out. Waiting for processes to exit. Feb 9 08:59:16.416429 systemd-logind[1160]: Removed session 45. Feb 9 08:59:21.422469 systemd[1]: Started sshd@64-139.178.90.113:22-147.75.109.163:47804.service. Feb 9 08:59:21.454665 sshd[4966]: Accepted publickey for core from 147.75.109.163 port 47804 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:21.455581 sshd[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:21.458807 systemd-logind[1160]: New session 46 of user core. Feb 9 08:59:21.459419 systemd[1]: Started session-46.scope. Feb 9 08:59:21.549733 sshd[4966]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:21.551190 systemd[1]: sshd@64-139.178.90.113:22-147.75.109.163:47804.service: Deactivated successfully. Feb 9 08:59:21.551627 systemd[1]: session-46.scope: Deactivated successfully. Feb 9 08:59:21.552015 systemd-logind[1160]: Session 46 logged out. 
Waiting for processes to exit. Feb 9 08:59:21.552445 systemd-logind[1160]: Removed session 46. Feb 9 08:59:26.559691 systemd[1]: Started sshd@65-139.178.90.113:22-147.75.109.163:41644.service. Feb 9 08:59:26.592291 sshd[4990]: Accepted publickey for core from 147.75.109.163 port 41644 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:26.593176 sshd[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:26.596301 systemd-logind[1160]: New session 47 of user core. Feb 9 08:59:26.596953 systemd[1]: Started session-47.scope. Feb 9 08:59:26.685706 sshd[4990]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:26.687255 systemd[1]: sshd@65-139.178.90.113:22-147.75.109.163:41644.service: Deactivated successfully. Feb 9 08:59:26.687694 systemd[1]: session-47.scope: Deactivated successfully. Feb 9 08:59:26.688167 systemd-logind[1160]: Session 47 logged out. Waiting for processes to exit. Feb 9 08:59:26.688755 systemd-logind[1160]: Removed session 47. Feb 9 08:59:31.695256 systemd[1]: Started sshd@66-139.178.90.113:22-147.75.109.163:41660.service. Feb 9 08:59:31.726922 sshd[5015]: Accepted publickey for core from 147.75.109.163 port 41660 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:31.727801 sshd[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:31.730886 systemd-logind[1160]: New session 48 of user core. Feb 9 08:59:31.731536 systemd[1]: Started session-48.scope. Feb 9 08:59:31.827594 sshd[5015]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:31.834015 systemd[1]: sshd@66-139.178.90.113:22-147.75.109.163:41660.service: Deactivated successfully. Feb 9 08:59:31.835573 systemd[1]: session-48.scope: Deactivated successfully. Feb 9 08:59:31.837174 systemd-logind[1160]: Session 48 logged out. Waiting for processes to exit. Feb 9 08:59:31.839812 systemd[1]: Started sshd@67-139.178.90.113:22-147.75.109.163:41676.service. Feb 9 08:59:31.841988 systemd-logind[1160]: Removed session 48. Feb 9 08:59:31.874840 sshd[5041]: Accepted publickey for core from 147.75.109.163 port 41676 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:31.875599 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:31.878129 systemd-logind[1160]: New session 49 of user core. Feb 9 08:59:31.878639 systemd[1]: Started session-49.scope. Feb 9 08:59:32.986565 sshd[5041]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:32.988453 systemd[1]: sshd@67-139.178.90.113:22-147.75.109.163:41676.service: Deactivated successfully. Feb 9 08:59:32.988926 systemd[1]: session-49.scope: Deactivated successfully. Feb 9 08:59:32.989303 systemd-logind[1160]: Session 49 logged out. Waiting for processes to exit. Feb 9 08:59:32.989978 systemd[1]: Started sshd@68-139.178.90.113:22-147.75.109.163:41690.service. Feb 9 08:59:32.990436 systemd-logind[1160]: Removed session 49. Feb 9 08:59:33.049292 sshd[5066]: Accepted publickey for core from 147.75.109.163 port 41690 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:33.052333 sshd[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:33.061766 systemd-logind[1160]: New session 50 of user core. Feb 9 08:59:33.062356 systemd[1]: Started session-50.scope. 
Feb 9 08:59:33.885677 sshd[5066]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:33.888455 systemd[1]: sshd@68-139.178.90.113:22-147.75.109.163:41690.service: Deactivated successfully. Feb 9 08:59:33.889203 systemd[1]: session-50.scope: Deactivated successfully. Feb 9 08:59:33.889742 systemd-logind[1160]: Session 50 logged out. Waiting for processes to exit. Feb 9 08:59:33.890834 systemd[1]: Started sshd@69-139.178.90.113:22-147.75.109.163:41692.service. Feb 9 08:59:33.891557 systemd-logind[1160]: Removed session 50. Feb 9 08:59:33.928988 sshd[5098]: Accepted publickey for core from 147.75.109.163 port 41692 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:33.930113 sshd[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:33.933526 systemd-logind[1160]: New session 51 of user core. Feb 9 08:59:33.934436 systemd[1]: Started session-51.scope. Feb 9 08:59:34.162447 sshd[5098]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:34.164147 systemd[1]: sshd@69-139.178.90.113:22-147.75.109.163:41692.service: Deactivated successfully. Feb 9 08:59:34.164468 systemd[1]: session-51.scope: Deactivated successfully. Feb 9 08:59:34.164822 systemd-logind[1160]: Session 51 logged out. Waiting for processes to exit. Feb 9 08:59:34.165381 systemd[1]: Started sshd@70-139.178.90.113:22-147.75.109.163:41698.service. Feb 9 08:59:34.165847 systemd-logind[1160]: Removed session 51. Feb 9 08:59:34.196740 sshd[5123]: Accepted publickey for core from 147.75.109.163 port 41698 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:34.199724 sshd[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:34.209429 systemd-logind[1160]: New session 52 of user core. Feb 9 08:59:34.211839 systemd[1]: Started session-52.scope. Feb 9 08:59:34.357575 sshd[5123]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:34.361910 systemd[1]: sshd@70-139.178.90.113:22-147.75.109.163:41698.service: Deactivated successfully. Feb 9 08:59:34.363256 systemd[1]: session-52.scope: Deactivated successfully. Feb 9 08:59:34.364493 systemd-logind[1160]: Session 52 logged out. Waiting for processes to exit. Feb 9 08:59:34.366299 systemd-logind[1160]: Removed session 52. Feb 9 08:59:37.229272 systemd[1]: Started sshd@71-139.178.90.113:22-218.92.0.43:10232.service. Feb 9 08:59:37.385661 sshd[5148]: Unable to negotiate with 218.92.0.43 port 10232: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 9 08:59:37.387548 systemd[1]: sshd@71-139.178.90.113:22-218.92.0.43:10232.service: Deactivated successfully. Feb 9 08:59:39.366233 systemd[1]: Started sshd@72-139.178.90.113:22-147.75.109.163:39328.service. Feb 9 08:59:39.397730 sshd[5152]: Accepted publickey for core from 147.75.109.163 port 39328 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:39.398612 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:39.401750 systemd-logind[1160]: New session 53 of user core. Feb 9 08:59:39.402516 systemd[1]: Started session-53.scope. Feb 9 08:59:39.488933 sshd[5152]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:39.490363 systemd[1]: sshd@72-139.178.90.113:22-147.75.109.163:39328.service: Deactivated successfully. Feb 9 08:59:39.490829 systemd[1]: session-53.scope: Deactivated successfully. 
Feb 9 08:59:39.491257 systemd-logind[1160]: Session 53 logged out. Waiting for processes to exit. Feb 9 08:59:39.491876 systemd-logind[1160]: Removed session 53. Feb 9 08:59:44.492351 systemd[1]: Started sshd@73-139.178.90.113:22-147.75.109.163:40692.service. Feb 9 08:59:44.523808 sshd[5177]: Accepted publickey for core from 147.75.109.163 port 40692 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:44.524581 sshd[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:44.527297 systemd-logind[1160]: New session 54 of user core. Feb 9 08:59:44.528017 systemd[1]: Started session-54.scope. Feb 9 08:59:44.615219 sshd[5177]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:44.616784 systemd[1]: sshd@73-139.178.90.113:22-147.75.109.163:40692.service: Deactivated successfully. Feb 9 08:59:44.617265 systemd[1]: session-54.scope: Deactivated successfully. Feb 9 08:59:44.617727 systemd-logind[1160]: Session 54 logged out. Waiting for processes to exit. Feb 9 08:59:44.618368 systemd-logind[1160]: Removed session 54. Feb 9 08:59:49.624930 systemd[1]: Started sshd@74-139.178.90.113:22-147.75.109.163:40694.service. Feb 9 08:59:49.655936 sshd[5202]: Accepted publickey for core from 147.75.109.163 port 40694 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:49.656787 sshd[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:49.659844 systemd-logind[1160]: New session 55 of user core. Feb 9 08:59:49.660669 systemd[1]: Started session-55.scope. Feb 9 08:59:49.749708 sshd[5202]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:49.751176 systemd[1]: sshd@74-139.178.90.113:22-147.75.109.163:40694.service: Deactivated successfully. Feb 9 08:59:49.751649 systemd[1]: session-55.scope: Deactivated successfully. Feb 9 08:59:49.752070 systemd-logind[1160]: Session 55 logged out. Waiting for processes to exit. Feb 9 08:59:49.752453 systemd-logind[1160]: Removed session 55. Feb 9 08:59:54.759068 systemd[1]: Started sshd@75-139.178.90.113:22-147.75.109.163:40204.service. Feb 9 08:59:54.790595 sshd[5228]: Accepted publickey for core from 147.75.109.163 port 40204 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:54.791369 sshd[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:54.794006 systemd-logind[1160]: New session 56 of user core. Feb 9 08:59:54.794707 systemd[1]: Started session-56.scope. Feb 9 08:59:54.923197 sshd[5228]: pam_unix(sshd:session): session closed for user core Feb 9 08:59:54.924673 systemd[1]: sshd@75-139.178.90.113:22-147.75.109.163:40204.service: Deactivated successfully. Feb 9 08:59:54.925105 systemd[1]: session-56.scope: Deactivated successfully. Feb 9 08:59:54.925399 systemd-logind[1160]: Session 56 logged out. Waiting for processes to exit. Feb 9 08:59:54.925907 systemd-logind[1160]: Removed session 56. Feb 9 08:59:59.933200 systemd[1]: Started sshd@76-139.178.90.113:22-147.75.109.163:40212.service. Feb 9 08:59:59.965016 sshd[5253]: Accepted publickey for core from 147.75.109.163 port 40212 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:59:59.965991 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:59:59.969086 systemd-logind[1160]: New session 57 of user core. Feb 9 08:59:59.969951 systemd[1]: Started session-57.scope. 
Feb 9 09:00:00.055749 sshd[5253]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:00.057258 systemd[1]: sshd@76-139.178.90.113:22-147.75.109.163:40212.service: Deactivated successfully. Feb 9 09:00:00.057703 systemd[1]: session-57.scope: Deactivated successfully. Feb 9 09:00:00.058022 systemd-logind[1160]: Session 57 logged out. Waiting for processes to exit. Feb 9 09:00:00.058391 systemd-logind[1160]: Removed session 57. Feb 9 09:00:05.065404 systemd[1]: Started sshd@77-139.178.90.113:22-147.75.109.163:47412.service. Feb 9 09:00:05.096624 sshd[5280]: Accepted publickey for core from 147.75.109.163 port 47412 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:05.097489 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:05.100359 systemd-logind[1160]: New session 58 of user core. Feb 9 09:00:05.101146 systemd[1]: Started session-58.scope. Feb 9 09:00:05.195703 sshd[5280]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:05.197467 systemd[1]: sshd@77-139.178.90.113:22-147.75.109.163:47412.service: Deactivated successfully. Feb 9 09:00:05.198026 systemd[1]: session-58.scope: Deactivated successfully. Feb 9 09:00:05.198475 systemd-logind[1160]: Session 58 logged out. Waiting for processes to exit. Feb 9 09:00:05.199241 systemd-logind[1160]: Removed session 58. Feb 9 09:00:10.204362 systemd[1]: Started sshd@78-139.178.90.113:22-147.75.109.163:47426.service. Feb 9 09:00:10.235912 sshd[5305]: Accepted publickey for core from 147.75.109.163 port 47426 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:10.236760 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:10.239864 systemd-logind[1160]: New session 59 of user core. Feb 9 09:00:10.240523 systemd[1]: Started session-59.scope. Feb 9 09:00:10.328748 sshd[5305]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:10.330231 systemd[1]: sshd@78-139.178.90.113:22-147.75.109.163:47426.service: Deactivated successfully. Feb 9 09:00:10.330678 systemd[1]: session-59.scope: Deactivated successfully. Feb 9 09:00:10.331076 systemd-logind[1160]: Session 59 logged out. Waiting for processes to exit. Feb 9 09:00:10.331496 systemd-logind[1160]: Removed session 59. Feb 9 09:00:15.338714 systemd[1]: Started sshd@79-139.178.90.113:22-147.75.109.163:57554.service. Feb 9 09:00:15.411549 sshd[5330]: Accepted publickey for core from 147.75.109.163 port 57554 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:15.413384 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:15.418981 systemd-logind[1160]: New session 60 of user core. Feb 9 09:00:15.420206 systemd[1]: Started session-60.scope. Feb 9 09:00:15.513258 sshd[5330]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:15.514768 systemd[1]: sshd@79-139.178.90.113:22-147.75.109.163:57554.service: Deactivated successfully. Feb 9 09:00:15.515191 systemd[1]: session-60.scope: Deactivated successfully. Feb 9 09:00:15.515479 systemd-logind[1160]: Session 60 logged out. Waiting for processes to exit. Feb 9 09:00:15.515996 systemd-logind[1160]: Removed session 60. Feb 9 09:00:20.523157 systemd[1]: Started sshd@80-139.178.90.113:22-147.75.109.163:57562.service. 
Feb 9 09:00:20.554756 sshd[5353]: Accepted publickey for core from 147.75.109.163 port 57562 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:20.555710 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:20.558602 systemd-logind[1160]: New session 61 of user core. Feb 9 09:00:20.559511 systemd[1]: Started session-61.scope. Feb 9 09:00:20.645364 sshd[5353]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:20.646919 systemd[1]: sshd@80-139.178.90.113:22-147.75.109.163:57562.service: Deactivated successfully. Feb 9 09:00:20.647397 systemd[1]: session-61.scope: Deactivated successfully. Feb 9 09:00:20.647829 systemd-logind[1160]: Session 61 logged out. Waiting for processes to exit. Feb 9 09:00:20.648322 systemd-logind[1160]: Removed session 61. Feb 9 09:00:25.654678 systemd[1]: Started sshd@81-139.178.90.113:22-147.75.109.163:34920.service. Feb 9 09:00:25.686003 sshd[5376]: Accepted publickey for core from 147.75.109.163 port 34920 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:25.686864 sshd[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:25.689769 systemd-logind[1160]: New session 62 of user core. Feb 9 09:00:25.690494 systemd[1]: Started session-62.scope. Feb 9 09:00:25.774690 sshd[5376]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:25.776165 systemd[1]: sshd@81-139.178.90.113:22-147.75.109.163:34920.service: Deactivated successfully. Feb 9 09:00:25.776621 systemd[1]: session-62.scope: Deactivated successfully. Feb 9 09:00:25.777009 systemd-logind[1160]: Session 62 logged out. Waiting for processes to exit. Feb 9 09:00:25.777431 systemd-logind[1160]: Removed session 62. Feb 9 09:00:30.783787 systemd[1]: Started sshd@82-139.178.90.113:22-147.75.109.163:34936.service. Feb 9 09:00:30.815334 sshd[5400]: Accepted publickey for core from 147.75.109.163 port 34936 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:30.816048 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:30.818481 systemd-logind[1160]: New session 63 of user core. Feb 9 09:00:30.819147 systemd[1]: Started session-63.scope. Feb 9 09:00:30.906563 sshd[5400]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:30.908110 systemd[1]: sshd@82-139.178.90.113:22-147.75.109.163:34936.service: Deactivated successfully. Feb 9 09:00:30.908545 systemd[1]: session-63.scope: Deactivated successfully. Feb 9 09:00:30.908945 systemd-logind[1160]: Session 63 logged out. Waiting for processes to exit. Feb 9 09:00:30.909433 systemd-logind[1160]: Removed session 63. Feb 9 09:00:35.915823 systemd[1]: Started sshd@83-139.178.90.113:22-147.75.109.163:46994.service. Feb 9 09:00:35.946997 sshd[5424]: Accepted publickey for core from 147.75.109.163 port 46994 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:35.947846 sshd[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:35.950862 systemd-logind[1160]: New session 64 of user core. Feb 9 09:00:35.951505 systemd[1]: Started session-64.scope. Feb 9 09:00:36.039880 sshd[5424]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:36.041275 systemd[1]: sshd@83-139.178.90.113:22-147.75.109.163:46994.service: Deactivated successfully. Feb 9 09:00:36.041714 systemd[1]: session-64.scope: Deactivated successfully. Feb 9 09:00:36.042090 systemd-logind[1160]: Session 64 logged out. 
Waiting for processes to exit. Feb 9 09:00:36.042488 systemd-logind[1160]: Removed session 64. Feb 9 09:00:41.049854 systemd[1]: Started sshd@84-139.178.90.113:22-147.75.109.163:47010.service. Feb 9 09:00:41.080942 sshd[5449]: Accepted publickey for core from 147.75.109.163 port 47010 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:41.081762 sshd[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:41.084494 systemd-logind[1160]: New session 65 of user core. Feb 9 09:00:41.085161 systemd[1]: Started session-65.scope. Feb 9 09:00:41.172053 sshd[5449]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:41.173461 systemd[1]: sshd@84-139.178.90.113:22-147.75.109.163:47010.service: Deactivated successfully. Feb 9 09:00:41.173896 systemd[1]: session-65.scope: Deactivated successfully. Feb 9 09:00:41.174295 systemd-logind[1160]: Session 65 logged out. Waiting for processes to exit. Feb 9 09:00:41.174691 systemd-logind[1160]: Removed session 65. Feb 9 09:00:46.182235 systemd[1]: Started sshd@85-139.178.90.113:22-147.75.109.163:52560.service. Feb 9 09:00:46.213145 sshd[5473]: Accepted publickey for core from 147.75.109.163 port 52560 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:46.213936 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:46.216817 systemd-logind[1160]: New session 66 of user core. Feb 9 09:00:46.217575 systemd[1]: Started session-66.scope. Feb 9 09:00:46.307404 sshd[5473]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:46.308797 systemd[1]: sshd@85-139.178.90.113:22-147.75.109.163:52560.service: Deactivated successfully. Feb 9 09:00:46.309243 systemd[1]: session-66.scope: Deactivated successfully. Feb 9 09:00:46.309550 systemd-logind[1160]: Session 66 logged out. Waiting for processes to exit. Feb 9 09:00:46.310128 systemd-logind[1160]: Removed session 66. Feb 9 09:00:51.317143 systemd[1]: Started sshd@86-139.178.90.113:22-147.75.109.163:52568.service. Feb 9 09:00:51.348838 sshd[5497]: Accepted publickey for core from 147.75.109.163 port 52568 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:51.349654 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:51.352271 systemd-logind[1160]: New session 67 of user core. Feb 9 09:00:51.353070 systemd[1]: Started session-67.scope. Feb 9 09:00:51.438977 sshd[5497]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:51.440209 systemd[1]: sshd@86-139.178.90.113:22-147.75.109.163:52568.service: Deactivated successfully. Feb 9 09:00:51.440647 systemd[1]: session-67.scope: Deactivated successfully. Feb 9 09:00:51.441015 systemd-logind[1160]: Session 67 logged out. Waiting for processes to exit. Feb 9 09:00:51.441396 systemd-logind[1160]: Removed session 67. Feb 9 09:00:51.538719 systemd[1]: Started sshd@87-139.178.90.113:22-218.92.0.31:20413.service. Feb 9 09:00:52.590356 sshd[5522]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root Feb 9 09:00:54.127057 sshd[5522]: Failed password for root from 218.92.0.31 port 20413 ssh2 Feb 9 09:00:56.449136 systemd[1]: Started sshd@88-139.178.90.113:22-147.75.109.163:48606.service. 
Feb 9 09:00:56.480452 sshd[5525]: Accepted publickey for core from 147.75.109.163 port 48606 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:00:56.481312 sshd[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:00:56.484196 systemd-logind[1160]: New session 68 of user core. Feb 9 09:00:56.484934 systemd[1]: Started session-68.scope. Feb 9 09:00:56.572209 sshd[5525]: pam_unix(sshd:session): session closed for user core Feb 9 09:00:56.573661 systemd[1]: sshd@88-139.178.90.113:22-147.75.109.163:48606.service: Deactivated successfully. Feb 9 09:00:56.574070 systemd[1]: session-68.scope: Deactivated successfully. Feb 9 09:00:56.574401 systemd-logind[1160]: Session 68 logged out. Waiting for processes to exit. Feb 9 09:00:56.575052 systemd-logind[1160]: Removed session 68. Feb 9 09:00:58.067059 sshd[5522]: Failed password for root from 218.92.0.31 port 20413 ssh2 Feb 9 09:01:00.002930 sshd[5522]: Failed password for root from 218.92.0.31 port 20413 ssh2 Feb 9 09:01:01.180197 sshd[5522]: Received disconnect from 218.92.0.31 port 20413:11: [preauth] Feb 9 09:01:01.180197 sshd[5522]: Disconnected from authenticating user root 218.92.0.31 port 20413 [preauth] Feb 9 09:01:01.180717 sshd[5522]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root Feb 9 09:01:01.182782 systemd[1]: sshd@87-139.178.90.113:22-218.92.0.31:20413.service: Deactivated successfully. Feb 9 09:01:01.376636 systemd[1]: Started sshd@89-139.178.90.113:22-218.92.0.31:10812.service. Feb 9 09:01:01.583755 systemd[1]: Started sshd@90-139.178.90.113:22-147.75.109.163:48620.service. Feb 9 09:01:01.618860 sshd[5554]: Accepted publickey for core from 147.75.109.163 port 48620 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:01.619636 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:01.622269 systemd-logind[1160]: New session 69 of user core. Feb 9 09:01:01.622861 systemd[1]: Started session-69.scope. Feb 9 09:01:01.714430 sshd[5554]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:01.715910 systemd[1]: sshd@90-139.178.90.113:22-147.75.109.163:48620.service: Deactivated successfully. Feb 9 09:01:01.716372 systemd[1]: session-69.scope: Deactivated successfully. Feb 9 09:01:01.716777 systemd-logind[1160]: Session 69 logged out. Waiting for processes to exit. Feb 9 09:01:01.717336 systemd-logind[1160]: Removed session 69. Feb 9 09:01:02.459475 sshd[5551]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root Feb 9 09:01:04.703585 sshd[5551]: Failed password for root from 218.92.0.31 port 10812 ssh2 Feb 9 09:01:05.329491 sshd[5551]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 9 09:01:06.724183 systemd[1]: Started sshd@91-139.178.90.113:22-147.75.109.163:60610.service. Feb 9 09:01:06.755486 sshd[5581]: Accepted publickey for core from 147.75.109.163 port 60610 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:06.756333 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:06.759184 systemd-logind[1160]: New session 70 of user core. Feb 9 09:01:06.759932 systemd[1]: Started session-70.scope. 
Feb 9 09:01:06.849089 sshd[5581]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:06.850579 systemd[1]: sshd@91-139.178.90.113:22-147.75.109.163:60610.service: Deactivated successfully. Feb 9 09:01:06.851016 systemd[1]: session-70.scope: Deactivated successfully. Feb 9 09:01:06.851312 systemd-logind[1160]: Session 70 logged out. Waiting for processes to exit. Feb 9 09:01:06.851795 systemd-logind[1160]: Removed session 70. Feb 9 09:01:07.317842 sshd[5551]: Failed password for root from 218.92.0.31 port 10812 ssh2 Feb 9 09:01:10.600206 sshd[5551]: Failed password for root from 218.92.0.31 port 10812 ssh2 Feb 9 09:01:11.071713 sshd[5551]: Received disconnect from 218.92.0.31 port 10812:11: [preauth] Feb 9 09:01:11.071713 sshd[5551]: Disconnected from authenticating user root 218.92.0.31 port 10812 [preauth] Feb 9 09:01:11.072255 sshd[5551]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root Feb 9 09:01:11.074298 systemd[1]: sshd@89-139.178.90.113:22-218.92.0.31:10812.service: Deactivated successfully. Feb 9 09:01:11.221291 systemd[1]: Started sshd@92-139.178.90.113:22-218.92.0.31:57880.service. Feb 9 09:01:11.858230 systemd[1]: Started sshd@93-139.178.90.113:22-147.75.109.163:60624.service. Feb 9 09:01:11.889920 sshd[5610]: Accepted publickey for core from 147.75.109.163 port 60624 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:11.890758 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:11.893755 systemd-logind[1160]: New session 71 of user core. Feb 9 09:01:11.894602 systemd[1]: Started session-71.scope. Feb 9 09:01:11.983690 sshd[5610]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:11.985207 systemd[1]: sshd@93-139.178.90.113:22-147.75.109.163:60624.service: Deactivated successfully. Feb 9 09:01:11.985671 systemd[1]: session-71.scope: Deactivated successfully. Feb 9 09:01:11.986106 systemd-logind[1160]: Session 71 logged out. Waiting for processes to exit. Feb 9 09:01:11.986680 systemd-logind[1160]: Removed session 71. Feb 9 09:01:12.238112 sshd[5607]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root Feb 9 09:01:13.854931 sshd[5607]: Failed password for root from 218.92.0.31 port 57880 ssh2 Feb 9 09:01:16.460326 sshd[5607]: Failed password for root from 218.92.0.31 port 57880 ssh2 Feb 9 09:01:16.993526 systemd[1]: Started sshd@94-139.178.90.113:22-147.75.109.163:33964.service. Feb 9 09:01:17.024931 sshd[5635]: Accepted publickey for core from 147.75.109.163 port 33964 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:17.025759 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:17.028617 systemd-logind[1160]: New session 72 of user core. Feb 9 09:01:17.029246 systemd[1]: Started session-72.scope. Feb 9 09:01:17.116731 sshd[5635]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:17.118230 systemd[1]: sshd@94-139.178.90.113:22-147.75.109.163:33964.service: Deactivated successfully. Feb 9 09:01:17.118663 systemd[1]: session-72.scope: Deactivated successfully. Feb 9 09:01:17.119101 systemd-logind[1160]: Session 72 logged out. Waiting for processes to exit. Feb 9 09:01:17.119490 systemd-logind[1160]: Removed session 72. 
Feb 9 09:01:19.927995 sshd[5607]: Failed password for root from 218.92.0.31 port 57880 ssh2 Feb 9 09:01:20.826360 sshd[5607]: Received disconnect from 218.92.0.31 port 57880:11: [preauth] Feb 9 09:01:20.826360 sshd[5607]: Disconnected from authenticating user root 218.92.0.31 port 57880 [preauth] Feb 9 09:01:20.826897 sshd[5607]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.31 user=root Feb 9 09:01:20.828924 systemd[1]: sshd@92-139.178.90.113:22-218.92.0.31:57880.service: Deactivated successfully. Feb 9 09:01:22.120221 systemd[1]: Started sshd@95-139.178.90.113:22-147.75.109.163:33980.service. Feb 9 09:01:22.152909 sshd[5663]: Accepted publickey for core from 147.75.109.163 port 33980 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:22.153696 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:22.156265 systemd-logind[1160]: New session 73 of user core. Feb 9 09:01:22.157019 systemd[1]: Started session-73.scope. Feb 9 09:01:22.243173 sshd[5663]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:22.244429 systemd[1]: sshd@95-139.178.90.113:22-147.75.109.163:33980.service: Deactivated successfully. Feb 9 09:01:22.244869 systemd[1]: session-73.scope: Deactivated successfully. Feb 9 09:01:22.245209 systemd-logind[1160]: Session 73 logged out. Waiting for processes to exit. Feb 9 09:01:22.245704 systemd-logind[1160]: Removed session 73. Feb 9 09:01:27.253383 systemd[1]: Started sshd@96-139.178.90.113:22-147.75.109.163:55966.service. Feb 9 09:01:27.284932 sshd[5687]: Accepted publickey for core from 147.75.109.163 port 55966 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:27.285685 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:27.288192 systemd-logind[1160]: New session 74 of user core. Feb 9 09:01:27.288696 systemd[1]: Started session-74.scope. Feb 9 09:01:27.373740 sshd[5687]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:27.375202 systemd[1]: sshd@96-139.178.90.113:22-147.75.109.163:55966.service: Deactivated successfully. Feb 9 09:01:27.375634 systemd[1]: session-74.scope: Deactivated successfully. Feb 9 09:01:27.376078 systemd-logind[1160]: Session 74 logged out. Waiting for processes to exit. Feb 9 09:01:27.376487 systemd-logind[1160]: Removed session 74. Feb 9 09:01:32.383408 systemd[1]: Started sshd@97-139.178.90.113:22-147.75.109.163:55970.service. Feb 9 09:01:32.414677 sshd[5715]: Accepted publickey for core from 147.75.109.163 port 55970 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:32.415532 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:32.418337 systemd-logind[1160]: New session 75 of user core. Feb 9 09:01:32.419169 systemd[1]: Started session-75.scope. Feb 9 09:01:32.503530 sshd[5715]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:32.505011 systemd[1]: sshd@97-139.178.90.113:22-147.75.109.163:55970.service: Deactivated successfully. Feb 9 09:01:32.505449 systemd[1]: session-75.scope: Deactivated successfully. Feb 9 09:01:32.505888 systemd-logind[1160]: Session 75 logged out. Waiting for processes to exit. Feb 9 09:01:32.506400 systemd-logind[1160]: Removed session 75. Feb 9 09:01:37.513389 systemd[1]: Started sshd@98-139.178.90.113:22-147.75.109.163:56766.service. 
Feb 9 09:01:37.544953 sshd[5739]: Accepted publickey for core from 147.75.109.163 port 56766 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:37.545814 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:37.548777 systemd-logind[1160]: New session 76 of user core. Feb 9 09:01:37.549378 systemd[1]: Started session-76.scope. Feb 9 09:01:37.637547 sshd[5739]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:37.639021 systemd[1]: sshd@98-139.178.90.113:22-147.75.109.163:56766.service: Deactivated successfully. Feb 9 09:01:37.639457 systemd[1]: session-76.scope: Deactivated successfully. Feb 9 09:01:37.639854 systemd-logind[1160]: Session 76 logged out. Waiting for processes to exit. Feb 9 09:01:37.640337 systemd-logind[1160]: Removed session 76. Feb 9 09:01:42.646681 systemd[1]: Started sshd@99-139.178.90.113:22-147.75.109.163:56782.service. Feb 9 09:01:42.677938 sshd[5764]: Accepted publickey for core from 147.75.109.163 port 56782 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:42.678885 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:42.681997 systemd-logind[1160]: New session 77 of user core. Feb 9 09:01:42.682823 systemd[1]: Started session-77.scope. Feb 9 09:01:42.771599 sshd[5764]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:42.773082 systemd[1]: sshd@99-139.178.90.113:22-147.75.109.163:56782.service: Deactivated successfully. Feb 9 09:01:42.773512 systemd[1]: session-77.scope: Deactivated successfully. Feb 9 09:01:42.773899 systemd-logind[1160]: Session 77 logged out. Waiting for processes to exit. Feb 9 09:01:42.774386 systemd-logind[1160]: Removed session 77. Feb 9 09:01:47.783861 systemd[1]: Started sshd@100-139.178.90.113:22-147.75.109.163:58532.service. Feb 9 09:01:47.818830 sshd[5789]: Accepted publickey for core from 147.75.109.163 port 58532 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:47.819571 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:47.822214 systemd-logind[1160]: New session 78 of user core. Feb 9 09:01:47.822773 systemd[1]: Started session-78.scope. Feb 9 09:01:47.910826 sshd[5789]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:47.912242 systemd[1]: sshd@100-139.178.90.113:22-147.75.109.163:58532.service: Deactivated successfully. Feb 9 09:01:47.912674 systemd[1]: session-78.scope: Deactivated successfully. Feb 9 09:01:47.913102 systemd-logind[1160]: Session 78 logged out. Waiting for processes to exit. Feb 9 09:01:47.913520 systemd-logind[1160]: Removed session 78. Feb 9 09:01:52.920710 systemd[1]: Started sshd@101-139.178.90.113:22-147.75.109.163:58542.service. Feb 9 09:01:52.951930 sshd[5813]: Accepted publickey for core from 147.75.109.163 port 58542 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:52.952779 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:52.955775 systemd-logind[1160]: New session 79 of user core. Feb 9 09:01:52.956456 systemd[1]: Started session-79.scope. Feb 9 09:01:53.043084 sshd[5813]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:53.044523 systemd[1]: sshd@101-139.178.90.113:22-147.75.109.163:58542.service: Deactivated successfully. Feb 9 09:01:53.044992 systemd[1]: session-79.scope: Deactivated successfully. Feb 9 09:01:53.045312 systemd-logind[1160]: Session 79 logged out. 
Waiting for processes to exit. Feb 9 09:01:53.045803 systemd-logind[1160]: Removed session 79. Feb 9 09:01:58.051877 systemd[1]: Started sshd@102-139.178.90.113:22-147.75.109.163:38892.service. Feb 9 09:01:58.082866 sshd[5837]: Accepted publickey for core from 147.75.109.163 port 38892 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:01:58.083683 sshd[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:01:58.086446 systemd-logind[1160]: New session 80 of user core. Feb 9 09:01:58.087237 systemd[1]: Started session-80.scope. Feb 9 09:01:58.173747 sshd[5837]: pam_unix(sshd:session): session closed for user core Feb 9 09:01:58.175256 systemd[1]: sshd@102-139.178.90.113:22-147.75.109.163:38892.service: Deactivated successfully. Feb 9 09:01:58.175700 systemd[1]: session-80.scope: Deactivated successfully. Feb 9 09:01:58.176144 systemd-logind[1160]: Session 80 logged out. Waiting for processes to exit. Feb 9 09:01:58.176615 systemd-logind[1160]: Removed session 80. Feb 9 09:02:03.180704 systemd[1]: Started sshd@103-139.178.90.113:22-147.75.109.163:38904.service. Feb 9 09:02:03.215263 sshd[5863]: Accepted publickey for core from 147.75.109.163 port 38904 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:03.216055 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:03.218776 systemd-logind[1160]: New session 81 of user core. Feb 9 09:02:03.219470 systemd[1]: Started session-81.scope. Feb 9 09:02:03.307262 sshd[5863]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:03.308771 systemd[1]: sshd@103-139.178.90.113:22-147.75.109.163:38904.service: Deactivated successfully. Feb 9 09:02:03.309218 systemd[1]: session-81.scope: Deactivated successfully. Feb 9 09:02:03.309514 systemd-logind[1160]: Session 81 logged out. Waiting for processes to exit. Feb 9 09:02:03.310137 systemd-logind[1160]: Removed session 81. Feb 9 09:02:08.318354 systemd[1]: Started sshd@104-139.178.90.113:22-147.75.109.163:36972.service. Feb 9 09:02:08.394689 sshd[5888]: Accepted publickey for core from 147.75.109.163 port 36972 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:08.395971 sshd[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:08.400013 systemd-logind[1160]: New session 82 of user core. Feb 9 09:02:08.401020 systemd[1]: Started session-82.scope. Feb 9 09:02:08.490619 sshd[5888]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:08.492033 systemd[1]: sshd@104-139.178.90.113:22-147.75.109.163:36972.service: Deactivated successfully. Feb 9 09:02:08.492445 systemd[1]: session-82.scope: Deactivated successfully. Feb 9 09:02:08.492844 systemd-logind[1160]: Session 82 logged out. Waiting for processes to exit. Feb 9 09:02:08.493312 systemd-logind[1160]: Removed session 82. Feb 9 09:02:13.500095 systemd[1]: Started sshd@105-139.178.90.113:22-147.75.109.163:36978.service. Feb 9 09:02:13.532317 sshd[5916]: Accepted publickey for core from 147.75.109.163 port 36978 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:13.535502 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:13.546555 systemd-logind[1160]: New session 83 of user core. Feb 9 09:02:13.549713 systemd[1]: Started session-83.scope. 
Feb 9 09:02:13.661029 sshd[5916]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:13.667087 systemd[1]: sshd@105-139.178.90.113:22-147.75.109.163:36978.service: Deactivated successfully. Feb 9 09:02:13.668994 systemd[1]: session-83.scope: Deactivated successfully. Feb 9 09:02:13.670688 systemd-logind[1160]: Session 83 logged out. Waiting for processes to exit. Feb 9 09:02:13.672853 systemd-logind[1160]: Removed session 83. Feb 9 09:02:18.669257 systemd[1]: Started sshd@106-139.178.90.113:22-147.75.109.163:41932.service. Feb 9 09:02:18.741902 sshd[5943]: Accepted publickey for core from 147.75.109.163 port 41932 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:18.743513 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:18.748927 systemd-logind[1160]: New session 84 of user core. Feb 9 09:02:18.750122 systemd[1]: Started session-84.scope. Feb 9 09:02:18.841887 sshd[5943]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:18.843395 systemd[1]: sshd@106-139.178.90.113:22-147.75.109.163:41932.service: Deactivated successfully. Feb 9 09:02:18.843871 systemd[1]: session-84.scope: Deactivated successfully. Feb 9 09:02:18.844307 systemd-logind[1160]: Session 84 logged out. Waiting for processes to exit. Feb 9 09:02:18.844885 systemd-logind[1160]: Removed session 84. Feb 9 09:02:23.851483 systemd[1]: Started sshd@107-139.178.90.113:22-147.75.109.163:41940.service. Feb 9 09:02:23.882566 sshd[5968]: Accepted publickey for core from 147.75.109.163 port 41940 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:23.883364 sshd[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:23.886234 systemd-logind[1160]: New session 85 of user core. Feb 9 09:02:23.886991 systemd[1]: Started session-85.scope. Feb 9 09:02:23.973625 sshd[5968]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:23.974980 systemd[1]: sshd@107-139.178.90.113:22-147.75.109.163:41940.service: Deactivated successfully. Feb 9 09:02:23.975391 systemd[1]: session-85.scope: Deactivated successfully. Feb 9 09:02:23.975861 systemd-logind[1160]: Session 85 logged out. Waiting for processes to exit. Feb 9 09:02:23.976278 systemd-logind[1160]: Removed session 85. Feb 9 09:02:28.982458 systemd[1]: Started sshd@108-139.178.90.113:22-147.75.109.163:33974.service. Feb 9 09:02:29.013835 sshd[5994]: Accepted publickey for core from 147.75.109.163 port 33974 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:29.014689 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:29.017351 systemd-logind[1160]: New session 86 of user core. Feb 9 09:02:29.018073 systemd[1]: Started session-86.scope. Feb 9 09:02:29.105691 sshd[5994]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:29.107249 systemd[1]: sshd@108-139.178.90.113:22-147.75.109.163:33974.service: Deactivated successfully. Feb 9 09:02:29.107717 systemd[1]: session-86.scope: Deactivated successfully. Feb 9 09:02:29.108117 systemd-logind[1160]: Session 86 logged out. Waiting for processes to exit. Feb 9 09:02:29.108617 systemd-logind[1160]: Removed session 86. Feb 9 09:02:34.109007 systemd[1]: Started sshd@109-139.178.90.113:22-147.75.109.163:33984.service. 
Feb 9 09:02:34.182787 sshd[6021]: Accepted publickey for core from 147.75.109.163 port 33984 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:34.184362 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:34.189611 systemd-logind[1160]: New session 87 of user core. Feb 9 09:02:34.190884 systemd[1]: Started session-87.scope. Feb 9 09:02:34.281666 sshd[6021]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:34.283741 systemd[1]: sshd@109-139.178.90.113:22-147.75.109.163:33984.service: Deactivated successfully. Feb 9 09:02:34.284102 systemd[1]: session-87.scope: Deactivated successfully. Feb 9 09:02:34.284413 systemd-logind[1160]: Session 87 logged out. Waiting for processes to exit. Feb 9 09:02:34.285042 systemd[1]: Started sshd@110-139.178.90.113:22-147.75.109.163:33990.service. Feb 9 09:02:34.285418 systemd-logind[1160]: Removed session 87. Feb 9 09:02:34.316929 sshd[6045]: Accepted publickey for core from 147.75.109.163 port 33990 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:34.317753 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:34.320495 systemd-logind[1160]: New session 88 of user core. Feb 9 09:02:34.321125 systemd[1]: Started session-88.scope. Feb 9 09:02:35.662949 env[1172]: time="2024-02-09T09:02:35.662857477Z" level=info msg="StopContainer for \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\" with timeout 30 (s)" Feb 9 09:02:35.664189 env[1172]: time="2024-02-09T09:02:35.663677592Z" level=info msg="Stop container \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\" with signal terminated" Feb 9 09:02:35.682957 systemd[1]: cri-containerd-0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210.scope: Deactivated successfully. Feb 9 09:02:35.683376 systemd[1]: cri-containerd-0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210.scope: Consumed 2.217s CPU time. Feb 9 09:02:35.715474 env[1172]: time="2024-02-09T09:02:35.715392710Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:02:35.723220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210-rootfs.mount: Deactivated successfully. 
Feb 9 09:02:35.724104 env[1172]: time="2024-02-09T09:02:35.724047045Z" level=info msg="StopContainer for \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\" with timeout 1 (s)" Feb 9 09:02:35.724412 env[1172]: time="2024-02-09T09:02:35.724367090Z" level=info msg="Stop container \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\" with signal terminated" Feb 9 09:02:35.725063 env[1172]: time="2024-02-09T09:02:35.724977857Z" level=info msg="shim disconnected" id=0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210 Feb 9 09:02:35.725063 env[1172]: time="2024-02-09T09:02:35.725043986Z" level=warning msg="cleaning up after shim disconnected" id=0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210 namespace=k8s.io Feb 9 09:02:35.725063 env[1172]: time="2024-02-09T09:02:35.725063328Z" level=info msg="cleaning up dead shim" Feb 9 09:02:35.745615 systemd-networkd[1012]: lxc_health: Link DOWN Feb 9 09:02:35.745625 systemd-networkd[1012]: lxc_health: Lost carrier Feb 9 09:02:35.749916 env[1172]: time="2024-02-09T09:02:35.749820198Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6105 runtime=io.containerd.runc.v2\n" Feb 9 09:02:35.751278 env[1172]: time="2024-02-09T09:02:35.751197090Z" level=info msg="StopContainer for \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\" returns successfully" Feb 9 09:02:35.752128 env[1172]: time="2024-02-09T09:02:35.752053541Z" level=info msg="StopPodSandbox for \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\"" Feb 9 09:02:35.752270 env[1172]: time="2024-02-09T09:02:35.752154899Z" level=info msg="Container to stop \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:02:35.755512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf-shm.mount: Deactivated successfully. Feb 9 09:02:35.763113 systemd[1]: cri-containerd-99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf.scope: Deactivated successfully. Feb 9 09:02:35.804813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf-rootfs.mount: Deactivated successfully. Feb 9 09:02:35.805192 env[1172]: time="2024-02-09T09:02:35.805052332Z" level=info msg="shim disconnected" id=99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf Feb 9 09:02:35.805192 env[1172]: time="2024-02-09T09:02:35.805142438Z" level=warning msg="cleaning up after shim disconnected" id=99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf namespace=k8s.io Feb 9 09:02:35.805192 env[1172]: time="2024-02-09T09:02:35.805171576Z" level=info msg="cleaning up dead shim" Feb 9 09:02:35.813024 systemd[1]: cri-containerd-664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e.scope: Deactivated successfully. Feb 9 09:02:35.813443 systemd[1]: cri-containerd-664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e.scope: Consumed 10.909s CPU time. 
Feb 9 09:02:35.829716 env[1172]: time="2024-02-09T09:02:35.829651184Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6144 runtime=io.containerd.runc.v2\n" Feb 9 09:02:35.830195 env[1172]: time="2024-02-09T09:02:35.830122071Z" level=info msg="TearDown network for sandbox \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\" successfully" Feb 9 09:02:35.830195 env[1172]: time="2024-02-09T09:02:35.830162900Z" level=info msg="StopPodSandbox for \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\" returns successfully" Feb 9 09:02:35.854971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e-rootfs.mount: Deactivated successfully. Feb 9 09:02:35.855333 env[1172]: time="2024-02-09T09:02:35.855068814Z" level=info msg="shim disconnected" id=664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e Feb 9 09:02:35.855333 env[1172]: time="2024-02-09T09:02:35.855147249Z" level=warning msg="cleaning up after shim disconnected" id=664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e namespace=k8s.io Feb 9 09:02:35.855333 env[1172]: time="2024-02-09T09:02:35.855167036Z" level=info msg="cleaning up dead shim" Feb 9 09:02:35.867059 env[1172]: time="2024-02-09T09:02:35.866970335Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6168 runtime=io.containerd.runc.v2\n" Feb 9 09:02:35.868441 env[1172]: time="2024-02-09T09:02:35.868356094Z" level=info msg="StopContainer for \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\" returns successfully" Feb 9 09:02:35.869109 env[1172]: time="2024-02-09T09:02:35.869028338Z" level=info msg="StopPodSandbox for \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\"" Feb 9 09:02:35.869247 env[1172]: time="2024-02-09T09:02:35.869131017Z" level=info msg="Container to stop \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:02:35.869247 env[1172]: time="2024-02-09T09:02:35.869161380Z" level=info msg="Container to stop \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:02:35.869247 env[1172]: time="2024-02-09T09:02:35.869182109Z" level=info msg="Container to stop \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:02:35.869247 env[1172]: time="2024-02-09T09:02:35.869201881Z" level=info msg="Container to stop \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:02:35.869247 env[1172]: time="2024-02-09T09:02:35.869220914Z" level=info msg="Container to stop \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:02:35.890805 systemd[1]: cri-containerd-80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283.scope: Deactivated successfully. 
Feb 9 09:02:35.923668 env[1172]: time="2024-02-09T09:02:35.923413634Z" level=info msg="shim disconnected" id=80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283 Feb 9 09:02:35.923668 env[1172]: time="2024-02-09T09:02:35.923553348Z" level=warning msg="cleaning up after shim disconnected" id=80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283 namespace=k8s.io Feb 9 09:02:35.923668 env[1172]: time="2024-02-09T09:02:35.923593354Z" level=info msg="cleaning up dead shim" Feb 9 09:02:35.938154 env[1172]: time="2024-02-09T09:02:35.938056693Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6199 runtime=io.containerd.runc.v2\n" Feb 9 09:02:35.938744 env[1172]: time="2024-02-09T09:02:35.938648815Z" level=info msg="TearDown network for sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" successfully" Feb 9 09:02:35.938744 env[1172]: time="2024-02-09T09:02:35.938700039Z" level=info msg="StopPodSandbox for \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" returns successfully" Feb 9 09:02:35.950756 kubelet[2206]: I0209 09:02:35.950676 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf288\" (UniqueName: \"kubernetes.io/projected/78b54691-079f-4b9a-987c-a77a9fec16d7-kube-api-access-bf288\") pod \"78b54691-079f-4b9a-987c-a77a9fec16d7\" (UID: \"78b54691-079f-4b9a-987c-a77a9fec16d7\") " Feb 9 09:02:35.951396 kubelet[2206]: I0209 09:02:35.950769 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78b54691-079f-4b9a-987c-a77a9fec16d7-cilium-config-path\") pod \"78b54691-079f-4b9a-987c-a77a9fec16d7\" (UID: \"78b54691-079f-4b9a-987c-a77a9fec16d7\") " Feb 9 09:02:35.951396 kubelet[2206]: W0209 09:02:35.951150 2206 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/78b54691-079f-4b9a-987c-a77a9fec16d7/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:02:35.955150 kubelet[2206]: I0209 09:02:35.955067 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b54691-079f-4b9a-987c-a77a9fec16d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "78b54691-079f-4b9a-987c-a77a9fec16d7" (UID: "78b54691-079f-4b9a-987c-a77a9fec16d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:02:35.956483 kubelet[2206]: I0209 09:02:35.956378 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b54691-079f-4b9a-987c-a77a9fec16d7-kube-api-access-bf288" (OuterVolumeSpecName: "kube-api-access-bf288") pod "78b54691-079f-4b9a-987c-a77a9fec16d7" (UID: "78b54691-079f-4b9a-987c-a77a9fec16d7"). InnerVolumeSpecName "kube-api-access-bf288". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:02:36.051789 kubelet[2206]: I0209 09:02:36.051671 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-config-path\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.051789 kubelet[2206]: I0209 09:02:36.051769 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-etc-cni-netd\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.052347 kubelet[2206]: I0209 09:02:36.051838 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8c083d8-4038-4a17-96ef-f77304ed2f26-clustermesh-secrets\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.052347 kubelet[2206]: I0209 09:02:36.051906 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-hostproc\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.052347 kubelet[2206]: I0209 09:02:36.051923 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.052347 kubelet[2206]: I0209 09:02:36.051965 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-xtables-lock\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.052347 kubelet[2206]: I0209 09:02:36.052013 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.053277 kubelet[2206]: I0209 09:02:36.052055 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-hostproc" (OuterVolumeSpecName: "hostproc") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.053277 kubelet[2206]: I0209 09:02:36.052104 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-bpf-maps\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.053277 kubelet[2206]: I0209 09:02:36.052166 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cni-path\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.053277 kubelet[2206]: W0209 09:02:36.052128 2206 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d8c083d8-4038-4a17-96ef-f77304ed2f26/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:02:36.053277 kubelet[2206]: I0209 09:02:36.052170 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.053277 kubelet[2206]: I0209 09:02:36.052222 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-run\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.054306 kubelet[2206]: I0209 09:02:36.052240 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cni-path" (OuterVolumeSpecName: "cni-path") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.054306 kubelet[2206]: I0209 09:02:36.052282 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-cgroup\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.054306 kubelet[2206]: I0209 09:02:36.052319 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.054306 kubelet[2206]: I0209 09:02:36.052901 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-host-proc-sys-net\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.054306 kubelet[2206]: I0209 09:02:36.053011 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-host-proc-sys-kernel\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.055157 kubelet[2206]: I0209 09:02:36.052302 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.055157 kubelet[2206]: I0209 09:02:36.053196 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.055157 kubelet[2206]: I0209 09:02:36.053257 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.055157 kubelet[2206]: I0209 09:02:36.053488 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8c083d8-4038-4a17-96ef-f77304ed2f26-hubble-tls\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.055157 kubelet[2206]: I0209 09:02:36.053651 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-lib-modules\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.055987 kubelet[2206]: I0209 09:02:36.053799 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75b8h\" (UniqueName: \"kubernetes.io/projected/d8c083d8-4038-4a17-96ef-f77304ed2f26-kube-api-access-75b8h\") pod \"d8c083d8-4038-4a17-96ef-f77304ed2f26\" (UID: \"d8c083d8-4038-4a17-96ef-f77304ed2f26\") " Feb 9 09:02:36.055987 kubelet[2206]: I0209 09:02:36.053835 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:36.055987 kubelet[2206]: I0209 09:02:36.054078 2206 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-host-proc-sys-net\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.055987 kubelet[2206]: I0209 09:02:36.054153 2206 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.055987 kubelet[2206]: I0209 09:02:36.054241 2206 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-lib-modules\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.055987 kubelet[2206]: I0209 09:02:36.054546 2206 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bf288\" (UniqueName: \"kubernetes.io/projected/78b54691-079f-4b9a-987c-a77a9fec16d7-kube-api-access-bf288\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.055987 kubelet[2206]: I0209 09:02:36.054636 2206 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-etc-cni-netd\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.056963 kubelet[2206]: I0209 09:02:36.054698 2206 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-hostproc\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.056963 kubelet[2206]: I0209 09:02:36.054757 2206 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78b54691-079f-4b9a-987c-a77a9fec16d7-cilium-config-path\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.056963 kubelet[2206]: I0209 09:02:36.054832 2206 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-xtables-lock\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.056963 kubelet[2206]: I0209 09:02:36.054909 2206 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-bpf-maps\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.056963 kubelet[2206]: I0209 09:02:36.054987 2206 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cni-path\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.056963 kubelet[2206]: I0209 09:02:36.055048 2206 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-run\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.056963 kubelet[2206]: I0209 09:02:36.055108 2206 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-cgroup\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.060218 kubelet[2206]: I0209 09:02:36.060108 2206 operation_generator.go:878] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:02:36.060961 kubelet[2206]: I0209 09:02:36.060855 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c083d8-4038-4a17-96ef-f77304ed2f26-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:02:36.062468 kubelet[2206]: I0209 09:02:36.062365 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c083d8-4038-4a17-96ef-f77304ed2f26-kube-api-access-75b8h" (OuterVolumeSpecName: "kube-api-access-75b8h") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "kube-api-access-75b8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:02:36.062720 kubelet[2206]: I0209 09:02:36.062504 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c083d8-4038-4a17-96ef-f77304ed2f26-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d8c083d8-4038-4a17-96ef-f77304ed2f26" (UID: "d8c083d8-4038-4a17-96ef-f77304ed2f26"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:02:36.111303 kubelet[2206]: I0209 09:02:36.111222 2206 scope.go:115] "RemoveContainer" containerID="0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210" Feb 9 09:02:36.114075 env[1172]: time="2024-02-09T09:02:36.113614247Z" level=info msg="RemoveContainer for \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\"" Feb 9 09:02:36.122481 env[1172]: time="2024-02-09T09:02:36.122366309Z" level=info msg="RemoveContainer for \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\" returns successfully" Feb 9 09:02:36.123006 kubelet[2206]: I0209 09:02:36.122947 2206 scope.go:115] "RemoveContainer" containerID="0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210" Feb 9 09:02:36.123189 systemd[1]: Removed slice kubepods-besteffort-pod78b54691_079f_4b9a_987c_a77a9fec16d7.slice. Feb 9 09:02:36.123600 systemd[1]: kubepods-besteffort-pod78b54691_079f_4b9a_987c_a77a9fec16d7.slice: Consumed 2.259s CPU time. 
Feb 9 09:02:36.123918 env[1172]: time="2024-02-09T09:02:36.123468339Z" level=error msg="ContainerStatus for \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\": not found" Feb 9 09:02:36.124106 kubelet[2206]: E0209 09:02:36.124069 2206 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\": not found" containerID="0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210" Feb 9 09:02:36.124270 kubelet[2206]: I0209 09:02:36.124169 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210} err="failed to get container status \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d1f79bf398ec9991be7e2c32ddc3b7861a9906905e5ce92b1bc21ac36b3f210\": not found" Feb 9 09:02:36.124270 kubelet[2206]: I0209 09:02:36.124208 2206 scope.go:115] "RemoveContainer" containerID="664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e" Feb 9 09:02:36.126741 env[1172]: time="2024-02-09T09:02:36.126624906Z" level=info msg="RemoveContainer for \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\"" Feb 9 09:02:36.130584 env[1172]: time="2024-02-09T09:02:36.130496601Z" level=info msg="RemoveContainer for \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\" returns successfully" Feb 9 09:02:36.130975 kubelet[2206]: I0209 09:02:36.130912 2206 scope.go:115] "RemoveContainer" containerID="0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15" Feb 9 09:02:36.131130 systemd[1]: Removed slice kubepods-burstable-podd8c083d8_4038_4a17_96ef_f77304ed2f26.slice. Feb 9 09:02:36.131513 systemd[1]: kubepods-burstable-podd8c083d8_4038_4a17_96ef_f77304ed2f26.slice: Consumed 11.003s CPU time. 
Feb 9 09:02:36.133165 env[1172]: time="2024-02-09T09:02:36.133094721Z" level=info msg="RemoveContainer for \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\"" Feb 9 09:02:36.136721 env[1172]: time="2024-02-09T09:02:36.136622474Z" level=info msg="RemoveContainer for \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\" returns successfully" Feb 9 09:02:36.137018 kubelet[2206]: I0209 09:02:36.136940 2206 scope.go:115] "RemoveContainer" containerID="d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1" Feb 9 09:02:36.139300 env[1172]: time="2024-02-09T09:02:36.139229227Z" level=info msg="RemoveContainer for \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\"" Feb 9 09:02:36.143061 env[1172]: time="2024-02-09T09:02:36.142997147Z" level=info msg="RemoveContainer for \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\" returns successfully" Feb 9 09:02:36.143376 kubelet[2206]: I0209 09:02:36.143336 2206 scope.go:115] "RemoveContainer" containerID="adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407" Feb 9 09:02:36.145793 env[1172]: time="2024-02-09T09:02:36.145721155Z" level=info msg="RemoveContainer for \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\"" Feb 9 09:02:36.149188 env[1172]: time="2024-02-09T09:02:36.149094455Z" level=info msg="RemoveContainer for \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\" returns successfully" Feb 9 09:02:36.149414 kubelet[2206]: I0209 09:02:36.149362 2206 scope.go:115] "RemoveContainer" containerID="45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823" Feb 9 09:02:36.151600 env[1172]: time="2024-02-09T09:02:36.151498380Z" level=info msg="RemoveContainer for \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\"" Feb 9 09:02:36.155380 env[1172]: time="2024-02-09T09:02:36.155261462Z" level=info msg="RemoveContainer for \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\" returns successfully" Feb 9 09:02:36.155664 kubelet[2206]: I0209 09:02:36.155600 2206 scope.go:115] "RemoveContainer" containerID="664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e" Feb 9 09:02:36.155974 kubelet[2206]: I0209 09:02:36.155871 2206 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-75b8h\" (UniqueName: \"kubernetes.io/projected/d8c083d8-4038-4a17-96ef-f77304ed2f26-kube-api-access-75b8h\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.155974 kubelet[2206]: I0209 09:02:36.155954 2206 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8c083d8-4038-4a17-96ef-f77304ed2f26-clustermesh-secrets\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.156235 kubelet[2206]: I0209 09:02:36.156016 2206 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8c083d8-4038-4a17-96ef-f77304ed2f26-cilium-config-path\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.156235 kubelet[2206]: I0209 09:02:36.156073 2206 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8c083d8-4038-4a17-96ef-f77304ed2f26-hubble-tls\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:36.156436 env[1172]: time="2024-02-09T09:02:36.156158293Z" level=error msg="ContainerStatus for \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\": not found" Feb 9 09:02:36.156736 kubelet[2206]: E0209 09:02:36.156650 2206 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\": not found" containerID="664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e" Feb 9 09:02:36.156941 kubelet[2206]: I0209 09:02:36.156760 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e} err="failed to get container status \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\": rpc error: code = NotFound desc = an error occurred when try to find container \"664c7d5fe3a4636e948d4c4f9b5f99843f6bd9b583a5146290c452882a07625e\": not found" Feb 9 09:02:36.156941 kubelet[2206]: I0209 09:02:36.156794 2206 scope.go:115] "RemoveContainer" containerID="0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15" Feb 9 09:02:36.157403 env[1172]: time="2024-02-09T09:02:36.157259223Z" level=error msg="ContainerStatus for \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\": not found" Feb 9 09:02:36.157711 kubelet[2206]: E0209 09:02:36.157631 2206 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\": not found" containerID="0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15" Feb 9 09:02:36.157711 kubelet[2206]: I0209 09:02:36.157707 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15} err="failed to get container status \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\": rpc error: code = NotFound desc = an error occurred when try to find container \"0748bee3f0173478ed2d1ec789953dd5db20fe158a3f747a07a90bd0e8a06a15\": not found" Feb 9 09:02:36.158027 kubelet[2206]: I0209 09:02:36.157736 2206 scope.go:115] "RemoveContainer" containerID="d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1" Feb 9 09:02:36.158270 env[1172]: time="2024-02-09T09:02:36.158114339Z" level=error msg="ContainerStatus for \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\": not found" Feb 9 09:02:36.158478 kubelet[2206]: E0209 09:02:36.158451 2206 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\": not found" containerID="d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1" Feb 9 09:02:36.158645 kubelet[2206]: I0209 09:02:36.158516 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1} err="failed to get container status \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7ac6e38e5d829f7b88591874bf62747580332a16c02d99d77d85af3887719b1\": not found" Feb 9 09:02:36.158645 kubelet[2206]: I0209 09:02:36.158573 2206 scope.go:115] "RemoveContainer" containerID="adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407" Feb 9 09:02:36.159229 env[1172]: time="2024-02-09T09:02:36.159062251Z" level=error msg="ContainerStatus for \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\": not found" Feb 9 09:02:36.159679 kubelet[2206]: E0209 09:02:36.159582 2206 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\": not found" containerID="adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407" Feb 9 09:02:36.159940 kubelet[2206]: I0209 09:02:36.159698 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407} err="failed to get container status \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\": rpc error: code = NotFound desc = an error occurred when try to find container \"adc3910534792d8ab2bb886c6122533f0c4fc636766b96467550848425a47407\": not found" Feb 9 09:02:36.159940 kubelet[2206]: I0209 09:02:36.159740 2206 scope.go:115] "RemoveContainer" containerID="45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823" Feb 9 09:02:36.160309 env[1172]: time="2024-02-09T09:02:36.160189373Z" level=error msg="ContainerStatus for \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\": not found" Feb 9 09:02:36.160633 kubelet[2206]: E0209 09:02:36.160577 2206 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\": not found" containerID="45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823" Feb 9 09:02:36.160782 kubelet[2206]: I0209 09:02:36.160661 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823} err="failed to get container status \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\": rpc error: code = NotFound desc = an error occurred when try to find container \"45f3edd60a231aaa0e5c1a7c7751cbdd56fde3e281d46f62d9d45e7b7b4ae823\": not found" Feb 9 09:02:36.481545 kubelet[2206]: I0209 09:02:36.481416 2206 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=78b54691-079f-4b9a-987c-a77a9fec16d7 path="/var/lib/kubelet/pods/78b54691-079f-4b9a-987c-a77a9fec16d7/volumes" Feb 9 09:02:36.482603 kubelet[2206]: I0209 09:02:36.482558 2206 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d8c083d8-4038-4a17-96ef-f77304ed2f26 
path="/var/lib/kubelet/pods/d8c083d8-4038-4a17-96ef-f77304ed2f26/volumes" Feb 9 09:02:36.697099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283-rootfs.mount: Deactivated successfully. Feb 9 09:02:36.697369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283-shm.mount: Deactivated successfully. Feb 9 09:02:36.697583 systemd[1]: var-lib-kubelet-pods-78b54691\x2d079f\x2d4b9a\x2d987c\x2da77a9fec16d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbf288.mount: Deactivated successfully. Feb 9 09:02:36.697772 systemd[1]: var-lib-kubelet-pods-d8c083d8\x2d4038\x2d4a17\x2d96ef\x2df77304ed2f26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d75b8h.mount: Deactivated successfully. Feb 9 09:02:36.698008 systemd[1]: var-lib-kubelet-pods-d8c083d8\x2d4038\x2d4a17\x2d96ef\x2df77304ed2f26-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:02:36.698208 systemd[1]: var-lib-kubelet-pods-d8c083d8\x2d4038\x2d4a17\x2d96ef\x2df77304ed2f26-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:02:37.602286 sshd[6045]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:37.604318 systemd[1]: sshd@110-139.178.90.113:22-147.75.109.163:33990.service: Deactivated successfully. Feb 9 09:02:37.604787 systemd[1]: session-88.scope: Deactivated successfully. Feb 9 09:02:37.605170 systemd-logind[1160]: Session 88 logged out. Waiting for processes to exit. Feb 9 09:02:37.605830 systemd[1]: Started sshd@111-139.178.90.113:22-147.75.109.163:41082.service. Feb 9 09:02:37.606277 systemd-logind[1160]: Removed session 88. Feb 9 09:02:37.636939 sshd[6217]: Accepted publickey for core from 147.75.109.163 port 41082 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:37.637612 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:37.639833 systemd-logind[1160]: New session 89 of user core. Feb 9 09:02:37.640276 systemd[1]: Started session-89.scope. Feb 9 09:02:38.024934 sshd[6217]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:38.026982 systemd[1]: sshd@111-139.178.90.113:22-147.75.109.163:41082.service: Deactivated successfully. Feb 9 09:02:38.027363 systemd[1]: session-89.scope: Deactivated successfully. Feb 9 09:02:38.027695 systemd-logind[1160]: Session 89 logged out. Waiting for processes to exit. Feb 9 09:02:38.028572 systemd[1]: Started sshd@112-139.178.90.113:22-147.75.109.163:41084.service. Feb 9 09:02:38.029164 systemd-logind[1160]: Removed session 89. 
Feb 9 09:02:38.032573 kubelet[2206]: I0209 09:02:38.032549 2206 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:02:38.032806 kubelet[2206]: E0209 09:02:38.032595 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c083d8-4038-4a17-96ef-f77304ed2f26" containerName="apply-sysctl-overwrites" Feb 9 09:02:38.032806 kubelet[2206]: E0209 09:02:38.032603 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c083d8-4038-4a17-96ef-f77304ed2f26" containerName="mount-bpf-fs" Feb 9 09:02:38.032806 kubelet[2206]: E0209 09:02:38.032608 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c083d8-4038-4a17-96ef-f77304ed2f26" containerName="clean-cilium-state" Feb 9 09:02:38.032806 kubelet[2206]: E0209 09:02:38.032612 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c083d8-4038-4a17-96ef-f77304ed2f26" containerName="cilium-agent" Feb 9 09:02:38.032806 kubelet[2206]: E0209 09:02:38.032617 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78b54691-079f-4b9a-987c-a77a9fec16d7" containerName="cilium-operator" Feb 9 09:02:38.032806 kubelet[2206]: E0209 09:02:38.032622 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c083d8-4038-4a17-96ef-f77304ed2f26" containerName="mount-cgroup" Feb 9 09:02:38.032806 kubelet[2206]: I0209 09:02:38.032639 2206 memory_manager.go:346] "RemoveStaleState removing state" podUID="d8c083d8-4038-4a17-96ef-f77304ed2f26" containerName="cilium-agent" Feb 9 09:02:38.032806 kubelet[2206]: I0209 09:02:38.032646 2206 memory_manager.go:346] "RemoveStaleState removing state" podUID="78b54691-079f-4b9a-987c-a77a9fec16d7" containerName="cilium-operator" Feb 9 09:02:38.036188 systemd[1]: Created slice kubepods-burstable-podaed122bc_6623_4eac_b6d4_489ab57525ea.slice. Feb 9 09:02:38.061962 sshd[6240]: Accepted publickey for core from 147.75.109.163 port 41084 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:38.062798 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:38.065404 systemd-logind[1160]: New session 90 of user core. Feb 9 09:02:38.065939 systemd[1]: Started session-90.scope. 
Feb 9 09:02:38.170775 kubelet[2206]: I0209 09:02:38.170737 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ktvh\" (UniqueName: \"kubernetes.io/projected/aed122bc-6623-4eac-b6d4-489ab57525ea-kube-api-access-8ktvh\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.170775 kubelet[2206]: I0209 09:02:38.170768 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aed122bc-6623-4eac-b6d4-489ab57525ea-clustermesh-secrets\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.170916 kubelet[2206]: I0209 09:02:38.170801 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cni-path\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.170916 kubelet[2206]: I0209 09:02:38.170844 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-run\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.170916 kubelet[2206]: I0209 09:02:38.170879 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-xtables-lock\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.170916 kubelet[2206]: I0209 09:02:38.170909 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-host-proc-sys-net\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.171062 kubelet[2206]: I0209 09:02:38.170937 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-etc-cni-netd\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.171062 kubelet[2206]: I0209 09:02:38.170974 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aed122bc-6623-4eac-b6d4-489ab57525ea-hubble-tls\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.171062 kubelet[2206]: I0209 09:02:38.171005 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-cgroup\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.171062 kubelet[2206]: I0209 09:02:38.171048 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-config-path\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.171175 kubelet[2206]: I0209 09:02:38.171069 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-host-proc-sys-kernel\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.171175 kubelet[2206]: I0209 09:02:38.171087 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-bpf-maps\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.171175 kubelet[2206]: I0209 09:02:38.171140 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-lib-modules\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.171175 kubelet[2206]: I0209 09:02:38.171155 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-ipsec-secrets\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.171175 kubelet[2206]: I0209 09:02:38.171168 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-hostproc\") pod \"cilium-9hdwr\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " pod="kube-system/cilium-9hdwr" Feb 9 09:02:38.183310 sshd[6240]: pam_unix(sshd:session): session closed for user core Feb 9 09:02:38.185078 systemd[1]: sshd@112-139.178.90.113:22-147.75.109.163:41084.service: Deactivated successfully. Feb 9 09:02:38.185435 systemd[1]: session-90.scope: Deactivated successfully. Feb 9 09:02:38.185847 systemd-logind[1160]: Session 90 logged out. Waiting for processes to exit. Feb 9 09:02:38.186503 systemd[1]: Started sshd@113-139.178.90.113:22-147.75.109.163:41094.service. Feb 9 09:02:38.186980 systemd-logind[1160]: Removed session 90. Feb 9 09:02:38.217968 sshd[6265]: Accepted publickey for core from 147.75.109.163 port 41094 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:02:38.218767 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:02:38.221214 systemd-logind[1160]: New session 91 of user core. Feb 9 09:02:38.221746 systemd[1]: Started session-91.scope. Feb 9 09:02:38.337662 env[1172]: time="2024-02-09T09:02:38.337625595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9hdwr,Uid:aed122bc-6623-4eac-b6d4-489ab57525ea,Namespace:kube-system,Attempt:0,}" Feb 9 09:02:38.343666 env[1172]: time="2024-02-09T09:02:38.343626149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:02:38.343666 env[1172]: time="2024-02-09T09:02:38.343651349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:02:38.343666 env[1172]: time="2024-02-09T09:02:38.343660244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:02:38.343802 env[1172]: time="2024-02-09T09:02:38.343737122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653 pid=6296 runtime=io.containerd.runc.v2 Feb 9 09:02:38.362496 systemd[1]: Started cri-containerd-bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653.scope. Feb 9 09:02:38.389171 env[1172]: time="2024-02-09T09:02:38.389133414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9hdwr,Uid:aed122bc-6623-4eac-b6d4-489ab57525ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\"" Feb 9 09:02:38.391032 env[1172]: time="2024-02-09T09:02:38.391003383Z" level=info msg="CreateContainer within sandbox \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:02:38.397440 env[1172]: time="2024-02-09T09:02:38.397381342Z" level=info msg="CreateContainer within sandbox \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3\"" Feb 9 09:02:38.397779 env[1172]: time="2024-02-09T09:02:38.397745167Z" level=info msg="StartContainer for \"553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3\"" Feb 9 09:02:38.435265 systemd[1]: Started cri-containerd-553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3.scope. Feb 9 09:02:38.456031 systemd[1]: cri-containerd-553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3.scope: Deactivated successfully. 
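
The RunPodSandbox, CreateContainer, and StartContainer messages above are the standard CRI lifecycle that kubelet drives against containerd's CRI plugin. A stripped-down sketch of the same three calls, assuming the k8s.io/cri-api v1 client, the default containerd socket, and placeholder metadata copied from the log:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox metadata as logged for cilium-9hdwr.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "cilium-9hdwr",
                Namespace: "kube-system",
                Uid:       "aed122bc-6623-4eac-b6d4-489ab57525ea",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }

        // The mount-cgroup init container created within that sandbox.
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
                Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12.5"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            panic(err)
        }

        // This is the call that fails below with the keycreate error.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            panic(err)
        }
    }
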
Feb 9 09:02:38.472165 env[1172]: time="2024-02-09T09:02:38.472045601Z" level=info msg="shim disconnected" id=553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3 Feb 9 09:02:38.472656 env[1172]: time="2024-02-09T09:02:38.472165227Z" level=warning msg="cleaning up after shim disconnected" id=553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3 namespace=k8s.io Feb 9 09:02:38.472656 env[1172]: time="2024-02-09T09:02:38.472192234Z" level=info msg="cleaning up dead shim" Feb 9 09:02:38.501805 env[1172]: time="2024-02-09T09:02:38.501662844Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6355 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:02:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 09:02:38.502378 env[1172]: time="2024-02-09T09:02:38.502166132Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Feb 9 09:02:38.502745 env[1172]: time="2024-02-09T09:02:38.502644463Z" level=error msg="Failed to pipe stdout of container \"553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3\"" error="reading from a closed fifo" Feb 9 09:02:38.502921 env[1172]: time="2024-02-09T09:02:38.502694311Z" level=error msg="Failed to pipe stderr of container \"553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3\"" error="reading from a closed fifo" Feb 9 09:02:38.503925 env[1172]: time="2024-02-09T09:02:38.503778127Z" level=error msg="StartContainer for \"553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 09:02:38.504236 kubelet[2206]: E0209 09:02:38.504177 2206 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3" Feb 9 09:02:38.504494 kubelet[2206]: E0209 09:02:38.504440 2206 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 09:02:38.504494 kubelet[2206]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 09:02:38.504494 kubelet[2206]: rm /hostbin/cilium-mount Feb 9 09:02:38.504982 kubelet[2206]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8ktvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-9hdwr_kube-system(aed122bc-6623-4eac-b6d4-489ab57525ea): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 09:02:38.504982 kubelet[2206]: E0209 09:02:38.504579 2206 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9hdwr" podUID=aed122bc-6623-4eac-b6d4-489ab57525ea Feb 9 09:02:38.872509 kubelet[2206]: E0209 09:02:38.872399 2206 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:02:39.127354 env[1172]: time="2024-02-09T09:02:39.127253265Z" level=info msg="StopPodSandbox for \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\"" Feb 9 09:02:39.127354 env[1172]: time="2024-02-09T09:02:39.127296022Z" level=info msg="Container to stop \"553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:02:39.144867 systemd[1]: cri-containerd-bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653.scope: Deactivated successfully. 
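Note: the repeated failure above, `write /proc/self/attr/keycreate: invalid argument`, is raised during container init. The pod spec requests an SELinux label (SELinuxOptions Type:spc_t, Level:s0), so the runtime writes a keyring-creation context to /proc/self/attr/keycreate before exec; on a kernel where SELinux is not the active security module that write returns EINVAL, which is consistent with every start attempt of mount-cgroup dying immediately. A minimal Go sketch of the same write (this is not runc's code, and the label string is only an illustration mirroring the spec):

```go
// keycreate_probe.go: a minimal sketch, not runc's implementation. It performs
// the same write that fails in the log above. /proc/self/attr/keycreate sets
// the SELinux context given to kernel keyrings created by this thread; on a
// kernel where SELinux is not the active LSM the write fails with EINVAL
// ("invalid argument"), matching the container-init error reported here.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical label mirroring the pod's SELinuxOptions (Type:spc_t, Level:s0).
	label := []byte("system_u:system_r:spc_t:s0")
	if err := os.WriteFile("/proc/self/attr/keycreate", label, 0o644); err != nil {
		fmt.Println("keycreate write failed:", err) // expected on this node: invalid argument
		return
	}
	fmt.Println("keycreate label accepted (SELinux active)")
}
```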
Feb 9 09:02:39.170039 env[1172]: time="2024-02-09T09:02:39.170004933Z" level=info msg="shim disconnected" id=bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653 Feb 9 09:02:39.170039 env[1172]: time="2024-02-09T09:02:39.170038317Z" level=warning msg="cleaning up after shim disconnected" id=bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653 namespace=k8s.io Feb 9 09:02:39.170180 env[1172]: time="2024-02-09T09:02:39.170045736Z" level=info msg="cleaning up dead shim" Feb 9 09:02:39.187758 env[1172]: time="2024-02-09T09:02:39.187730994Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6385 runtime=io.containerd.runc.v2\n" Feb 9 09:02:39.188016 env[1172]: time="2024-02-09T09:02:39.187952927Z" level=info msg="TearDown network for sandbox \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\" successfully" Feb 9 09:02:39.188016 env[1172]: time="2024-02-09T09:02:39.187972930Z" level=info msg="StopPodSandbox for \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\" returns successfully" Feb 9 09:02:39.280074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653-rootfs.mount: Deactivated successfully. Feb 9 09:02:39.280342 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653-shm.mount: Deactivated successfully. Feb 9 09:02:39.280757 kubelet[2206]: I0209 09:02:39.280666 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-xtables-lock\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.280785 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-config-path\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.280796 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.280861 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aed122bc-6623-4eac-b6d4-489ab57525ea-hubble-tls\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.280923 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-hostproc\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.280988 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ktvh\" (UniqueName: \"kubernetes.io/projected/aed122bc-6623-4eac-b6d4-489ab57525ea-kube-api-access-8ktvh\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281051 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aed122bc-6623-4eac-b6d4-489ab57525ea-clustermesh-secrets\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281045 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-hostproc" (OuterVolumeSpecName: "hostproc") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281106 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-run\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281162 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-host-proc-sys-kernel\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281171 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281221 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-cgroup\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281282 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-ipsec-secrets\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281293 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281337 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-etc-cni-netd\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.281786 kubelet[2206]: I0209 09:02:39.281411 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-bpf-maps\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281412 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.283806 kubelet[2206]: W0209 09:02:39.281441 2206 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/aed122bc-6623-4eac-b6d4-489ab57525ea/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281454 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281482 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-lib-modules\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281489 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281557 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281639 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cni-path\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281755 2206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-host-proc-sys-net\") pod \"aed122bc-6623-4eac-b6d4-489ab57525ea\" (UID: \"aed122bc-6623-4eac-b6d4-489ab57525ea\") " Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281745 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cni-path" (OuterVolumeSpecName: "cni-path") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281836 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281890 2206 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-lib-modules\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281936 2206 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cni-path\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.281969 2206 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-bpf-maps\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.282018 2206 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-xtables-lock\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.282072 2206 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-hostproc\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.282117 2206 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-run\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.283806 kubelet[2206]: I0209 09:02:39.282155 2206 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.285513 kubelet[2206]: I0209 09:02:39.282189 2206 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-etc-cni-netd\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.285513 kubelet[2206]: I0209 09:02:39.282221 2206 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-cgroup\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.287443 kubelet[2206]: I0209 09:02:39.287328 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:02:39.288137 kubelet[2206]: I0209 09:02:39.288027 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aed122bc-6623-4eac-b6d4-489ab57525ea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:02:39.288137 kubelet[2206]: I0209 09:02:39.288098 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:02:39.288511 kubelet[2206]: I0209 09:02:39.288262 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed122bc-6623-4eac-b6d4-489ab57525ea-kube-api-access-8ktvh" (OuterVolumeSpecName: "kube-api-access-8ktvh") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "kube-api-access-8ktvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:02:39.288860 kubelet[2206]: I0209 09:02:39.288765 2206 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed122bc-6623-4eac-b6d4-489ab57525ea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aed122bc-6623-4eac-b6d4-489ab57525ea" (UID: "aed122bc-6623-4eac-b6d4-489ab57525ea"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:02:39.291642 systemd[1]: var-lib-kubelet-pods-aed122bc\x2d6623\x2d4eac\x2db6d4\x2d489ab57525ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8ktvh.mount: Deactivated successfully. Feb 9 09:02:39.291885 systemd[1]: var-lib-kubelet-pods-aed122bc\x2d6623\x2d4eac\x2db6d4\x2d489ab57525ea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:02:39.292080 systemd[1]: var-lib-kubelet-pods-aed122bc\x2d6623\x2d4eac\x2db6d4\x2d489ab57525ea-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:02:39.292259 systemd[1]: var-lib-kubelet-pods-aed122bc\x2d6623\x2d4eac\x2db6d4\x2d489ab57525ea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 09:02:39.383695 kubelet[2206]: I0209 09:02:39.383467 2206 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-config-path\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.383695 kubelet[2206]: I0209 09:02:39.383558 2206 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aed122bc-6623-4eac-b6d4-489ab57525ea-hubble-tls\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.383695 kubelet[2206]: I0209 09:02:39.383603 2206 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8ktvh\" (UniqueName: \"kubernetes.io/projected/aed122bc-6623-4eac-b6d4-489ab57525ea-kube-api-access-8ktvh\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.383695 kubelet[2206]: I0209 09:02:39.383639 2206 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aed122bc-6623-4eac-b6d4-489ab57525ea-clustermesh-secrets\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.383695 kubelet[2206]: I0209 09:02:39.383671 2206 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aed122bc-6623-4eac-b6d4-489ab57525ea-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:39.383695 kubelet[2206]: I0209 09:02:39.383704 2206 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aed122bc-6623-4eac-b6d4-489ab57525ea-host-proc-sys-net\") on node \"ci-3510.3.2-a-98a543a057\" DevicePath \"\"" Feb 9 09:02:40.129290 kubelet[2206]: I0209 09:02:40.129271 2206 scope.go:115] "RemoveContainer" containerID="553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3" Feb 9 09:02:40.129820 env[1172]: time="2024-02-09T09:02:40.129798994Z" level=info msg="RemoveContainer for \"553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3\"" Feb 9 09:02:40.131129 env[1172]: time="2024-02-09T09:02:40.131112762Z" level=info msg="RemoveContainer for \"553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3\" returns successfully" Feb 9 09:02:40.131710 systemd[1]: Removed slice kubepods-burstable-podaed122bc_6623_4eac_b6d4_489ab57525ea.slice. Feb 9 09:02:40.149449 kubelet[2206]: I0209 09:02:40.149428 2206 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:02:40.149571 kubelet[2206]: E0209 09:02:40.149487 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aed122bc-6623-4eac-b6d4-489ab57525ea" containerName="mount-cgroup" Feb 9 09:02:40.149571 kubelet[2206]: I0209 09:02:40.149506 2206 memory_manager.go:346] "RemoveStaleState removing state" podUID="aed122bc-6623-4eac-b6d4-489ab57525ea" containerName="mount-cgroup" Feb 9 09:02:40.152772 systemd[1]: Created slice kubepods-burstable-pod36987351_2680_450a_9e81_7e555c9656a7.slice. 
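Note: the slice names in the RemoveContainer/Admit sequence above follow the kubelet's systemd cgroup-driver convention: the QoS class and the pod UID, with dashes turned into underscores, are folded into one nested slice name ("-" is systemd's slice hierarchy separator, which is why the UID cannot keep its dashes). A small sketch of that naming (illustrative, not kubelet source):

```go
// pod_slice.go: sketch of the systemd-cgroup-driver slice name for a pod,
// matching the "Removed slice"/"Created slice" lines above.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	// Dashes in the UID would read as hierarchy separators, so they become "_".
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "36987351-2680-450a-9e81-7e555c9656a7"))
	// kubepods-burstable-pod36987351_2680_450a_9e81_7e555c9656a7.slice
}
```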
Feb 9 09:02:40.290663 kubelet[2206]: I0209 09:02:40.290586 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75r7\" (UniqueName: \"kubernetes.io/projected/36987351-2680-450a-9e81-7e555c9656a7-kube-api-access-m75r7\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.291711 kubelet[2206]: I0209 09:02:40.290734 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-host-proc-sys-net\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.291711 kubelet[2206]: I0209 09:02:40.290864 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36987351-2680-450a-9e81-7e555c9656a7-clustermesh-secrets\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.291711 kubelet[2206]: I0209 09:02:40.290988 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-xtables-lock\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.291711 kubelet[2206]: I0209 09:02:40.291096 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-lib-modules\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.291711 kubelet[2206]: I0209 09:02:40.291272 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/36987351-2680-450a-9e81-7e555c9656a7-cilium-ipsec-secrets\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.291711 kubelet[2206]: I0209 09:02:40.291378 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-etc-cni-netd\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.291711 kubelet[2206]: I0209 09:02:40.291511 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-hostproc\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.291711 kubelet[2206]: I0209 09:02:40.291692 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36987351-2680-450a-9e81-7e555c9656a7-cilium-config-path\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.292619 kubelet[2206]: I0209 09:02:40.291799 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-bpf-maps\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.292619 kubelet[2206]: I0209 09:02:40.291885 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-cilium-run\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.292619 kubelet[2206]: I0209 09:02:40.292015 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-cni-path\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.292619 kubelet[2206]: I0209 09:02:40.292101 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-cilium-cgroup\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.292619 kubelet[2206]: I0209 09:02:40.292169 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36987351-2680-450a-9e81-7e555c9656a7-hubble-tls\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.292619 kubelet[2206]: I0209 09:02:40.292279 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36987351-2680-450a-9e81-7e555c9656a7-host-proc-sys-kernel\") pod \"cilium-89jrg\" (UID: \"36987351-2680-450a-9e81-7e555c9656a7\") " pod="kube-system/cilium-89jrg" Feb 9 09:02:40.456329 env[1172]: time="2024-02-09T09:02:40.456085060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89jrg,Uid:36987351-2680-450a-9e81-7e555c9656a7,Namespace:kube-system,Attempt:0,}" Feb 9 09:02:40.472078 env[1172]: time="2024-02-09T09:02:40.472049669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:02:40.472078 env[1172]: time="2024-02-09T09:02:40.472068964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:02:40.472078 env[1172]: time="2024-02-09T09:02:40.472075750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:02:40.472217 env[1172]: time="2024-02-09T09:02:40.472160399Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563 pid=6412 runtime=io.containerd.runc.v2 Feb 9 09:02:40.475487 kubelet[2206]: I0209 09:02:40.475476 2206 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=aed122bc-6623-4eac-b6d4-489ab57525ea path="/var/lib/kubelet/pods/aed122bc-6623-4eac-b6d4-489ab57525ea/volumes" Feb 9 09:02:40.490127 systemd[1]: Started cri-containerd-4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563.scope. 
Feb 9 09:02:40.515406 env[1172]: time="2024-02-09T09:02:40.515367817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89jrg,Uid:36987351-2680-450a-9e81-7e555c9656a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\"" Feb 9 09:02:40.517345 env[1172]: time="2024-02-09T09:02:40.517298789Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:02:40.523496 env[1172]: time="2024-02-09T09:02:40.523433387Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c\"" Feb 9 09:02:40.523817 env[1172]: time="2024-02-09T09:02:40.523751718Z" level=info msg="StartContainer for \"9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c\"" Feb 9 09:02:40.555018 systemd[1]: Started cri-containerd-9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c.scope. Feb 9 09:02:40.610768 env[1172]: time="2024-02-09T09:02:40.610643072Z" level=info msg="StartContainer for \"9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c\" returns successfully" Feb 9 09:02:40.626392 systemd[1]: cri-containerd-9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c.scope: Deactivated successfully. Feb 9 09:02:40.677395 env[1172]: time="2024-02-09T09:02:40.677278509Z" level=info msg="shim disconnected" id=9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c Feb 9 09:02:40.677395 env[1172]: time="2024-02-09T09:02:40.677359635Z" level=warning msg="cleaning up after shim disconnected" id=9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c namespace=k8s.io Feb 9 09:02:40.677395 env[1172]: time="2024-02-09T09:02:40.677382682Z" level=info msg="cleaning up dead shim" Feb 9 09:02:40.703719 env[1172]: time="2024-02-09T09:02:40.703637731Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6495 runtime=io.containerd.runc.v2\n" Feb 9 09:02:41.133175 env[1172]: time="2024-02-09T09:02:41.133142353Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:02:41.138172 env[1172]: time="2024-02-09T09:02:41.138122577Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2c4c88041fc46f0e7762dfd563af8d77d3782fb742f4c5c63944aad86adccd01\"" Feb 9 09:02:41.138449 env[1172]: time="2024-02-09T09:02:41.138398516Z" level=info msg="StartContainer for \"2c4c88041fc46f0e7762dfd563af8d77d3782fb742f4c5c63944aad86adccd01\"" Feb 9 09:02:41.160209 systemd[1]: Started cri-containerd-2c4c88041fc46f0e7762dfd563af8d77d3782fb742f4c5c63944aad86adccd01.scope. Feb 9 09:02:41.177455 env[1172]: time="2024-02-09T09:02:41.177420423Z" level=info msg="StartContainer for \"2c4c88041fc46f0e7762dfd563af8d77d3782fb742f4c5c63944aad86adccd01\" returns successfully" Feb 9 09:02:41.183776 systemd[1]: cri-containerd-2c4c88041fc46f0e7762dfd563af8d77d3782fb742f4c5c63944aad86adccd01.scope: Deactivated successfully. 
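Note: the env[1172] records above are logfmt-style key=value pairs, which makes the short-lived init-container lifecycles (create, start, scope deactivated, shim disconnected) easy to extract mechanically. A hypothetical triage helper follows; the name and regexp are mine, not a containerd tool:

```go
// envline_grep.go: hypothetical helper, not part of containerd. Extracts the
// time/level/msg fields from the logfmt-style env[...] records above.
package main

import (
	"fmt"
	"regexp"
)

// msg may contain escaped quotes (\"), so the capture allows backslash escapes.
var re = regexp.MustCompile(`time="([^"]+)"\s+level=(\w+)\s+msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `time="2024-02-09T09:02:40.610643072Z" level=info msg="StartContainer for \"9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c\" returns successfully"`
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("%s [%s] %s\n", m[1], m[2], m[3])
	}
}
```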
Feb 9 09:02:41.229260 env[1172]: time="2024-02-09T09:02:41.229110356Z" level=info msg="shim disconnected" id=2c4c88041fc46f0e7762dfd563af8d77d3782fb742f4c5c63944aad86adccd01 Feb 9 09:02:41.229260 env[1172]: time="2024-02-09T09:02:41.229225639Z" level=warning msg="cleaning up after shim disconnected" id=2c4c88041fc46f0e7762dfd563af8d77d3782fb742f4c5c63944aad86adccd01 namespace=k8s.io Feb 9 09:02:41.229260 env[1172]: time="2024-02-09T09:02:41.229253279Z" level=info msg="cleaning up dead shim" Feb 9 09:02:41.245008 env[1172]: time="2024-02-09T09:02:41.244907117Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6556 runtime=io.containerd.runc.v2\n" Feb 9 09:02:41.578183 kubelet[2206]: W0209 09:02:41.578038 2206 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaed122bc_6623_4eac_b6d4_489ab57525ea.slice/cri-containerd-553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3.scope WatchSource:0}: container "553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3" in namespace "k8s.io": not found Feb 9 09:02:42.135993 env[1172]: time="2024-02-09T09:02:42.135956013Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:02:42.142054 env[1172]: time="2024-02-09T09:02:42.142025922Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75\"" Feb 9 09:02:42.142357 env[1172]: time="2024-02-09T09:02:42.142343023Z" level=info msg="StartContainer for \"6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75\"" Feb 9 09:02:42.143084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917286463.mount: Deactivated successfully. Feb 9 09:02:42.164814 systemd[1]: Started cri-containerd-6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75.scope. Feb 9 09:02:42.195275 env[1172]: time="2024-02-09T09:02:42.195194391Z" level=info msg="StartContainer for \"6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75\" returns successfully" Feb 9 09:02:42.198068 systemd[1]: cri-containerd-6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75.scope: Deactivated successfully. Feb 9 09:02:42.264741 env[1172]: time="2024-02-09T09:02:42.264615708Z" level=info msg="shim disconnected" id=6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75 Feb 9 09:02:42.264741 env[1172]: time="2024-02-09T09:02:42.264713128Z" level=warning msg="cleaning up after shim disconnected" id=6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75 namespace=k8s.io Feb 9 09:02:42.264741 env[1172]: time="2024-02-09T09:02:42.264741490Z" level=info msg="cleaning up dead shim" Feb 9 09:02:42.280671 env[1172]: time="2024-02-09T09:02:42.280567898Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6610 runtime=io.containerd.runc.v2\n" Feb 9 09:02:42.400496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75-rootfs.mount: Deactivated successfully. 
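Note: the "Failed to process watch event ... not found" warnings here and further down appear to be a benign race: each init container lives for well under a second, so by the time the cgroup watcher handles the creation event for its cri-containerd-<id>.scope, the task has already exited and been cleaned up. A hypothetical helper (names are mine) that recovers the container ID from such a cgroup path:

```go
// scope_id.go: hypothetical helper recovering a container ID from a kubepods
// cgroup path like the one in the failed watch event above.
package main

import (
	"fmt"
	"path"
	"strings"
)

func containerIDFromCgroup(cg string) (string, bool) {
	base := path.Base(cg) // "cri-containerd-<id>.scope"
	const pre, suf = "cri-containerd-", ".scope"
	if strings.HasPrefix(base, pre) && strings.HasSuffix(base, suf) {
		return strings.TrimSuffix(strings.TrimPrefix(base, pre), suf), true
	}
	return "", false
}

func main() {
	cg := "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaed122bc_6623_4eac_b6d4_489ab57525ea.slice/cri-containerd-553a1232e02291aa8da738225a1cb86605a08eb7b0b2b561f40a7e15c5afb9b3.scope"
	if id, ok := containerIDFromCgroup(cg); ok {
		fmt.Println(id) // 553a1232e0229...
	}
}
```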
Feb 9 09:02:43.148245 env[1172]: time="2024-02-09T09:02:43.148140283Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:02:43.165517 env[1172]: time="2024-02-09T09:02:43.165467338Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f\"" Feb 9 09:02:43.165869 env[1172]: time="2024-02-09T09:02:43.165790462Z" level=info msg="StartContainer for \"969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f\"" Feb 9 09:02:43.182198 systemd[1]: Started cri-containerd-969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f.scope. Feb 9 09:02:43.205062 systemd[1]: cri-containerd-969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f.scope: Deactivated successfully. Feb 9 09:02:43.205249 env[1172]: time="2024-02-09T09:02:43.205222017Z" level=info msg="StartContainer for \"969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f\" returns successfully" Feb 9 09:02:43.227399 env[1172]: time="2024-02-09T09:02:43.227364415Z" level=info msg="shim disconnected" id=969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f Feb 9 09:02:43.227536 env[1172]: time="2024-02-09T09:02:43.227400019Z" level=warning msg="cleaning up after shim disconnected" id=969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f namespace=k8s.io Feb 9 09:02:43.227536 env[1172]: time="2024-02-09T09:02:43.227410266Z" level=info msg="cleaning up dead shim" Feb 9 09:02:43.232065 env[1172]: time="2024-02-09T09:02:43.232002184Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:02:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6663 runtime=io.containerd.runc.v2\n" Feb 9 09:02:43.403426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f-rootfs.mount: Deactivated successfully. 
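Note: the kubelet status lines that follow print timestamps like "2024-02-09 09:02:43.741569006 +0000 UTC m=+1225.324994872". The "m=+..." suffix is Go's time.Time string form including its monotonic clock reading, i.e. seconds since the process's clock base, so the kubelet here had been up roughly 20.5 minutes. A short demonstration:

```go
// monotonic.go: shows the "m=+..." suffix seen in the kubelet timestamps
// below. time.Time's String() appends the monotonic reading when present.
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Now()
	fmt.Println(t) // e.g. "... +0000 UTC m=+0.000011042"
	time.Sleep(100 * time.Millisecond)
	fmt.Println(time.Now()) // the m=+ value grows with process uptime
	fmt.Println(t.Round(0)) // Round(0) strips the monotonic reading
}
```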
Feb 9 09:02:43.741748 kubelet[2206]: I0209 09:02:43.741627 2206 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-98a543a057" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 09:02:43.741569006 +0000 UTC m=+1225.324994872 LastTransitionTime:2024-02-09 09:02:43.741569006 +0000 UTC m=+1225.324994872 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 09:02:43.873571 kubelet[2206]: E0209 09:02:43.873494 2206 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:02:44.157892 env[1172]: time="2024-02-09T09:02:44.157787627Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:02:44.176605 env[1172]: time="2024-02-09T09:02:44.176452510Z" level=info msg="CreateContainer within sandbox \"4333bb405309575b03efc86e02bb674186dc4272a8268899fb6477fd152c9563\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"62832795e52bdaa51e99380abdb01e2bda0a65700a4f68d9ee5a4df22a48818f\"" Feb 9 09:02:44.178646 env[1172]: time="2024-02-09T09:02:44.177434552Z" level=info msg="StartContainer for \"62832795e52bdaa51e99380abdb01e2bda0a65700a4f68d9ee5a4df22a48818f\"" Feb 9 09:02:44.213436 systemd[1]: Started cri-containerd-62832795e52bdaa51e99380abdb01e2bda0a65700a4f68d9ee5a4df22a48818f.scope. Feb 9 09:02:44.238740 env[1172]: time="2024-02-09T09:02:44.238665282Z" level=info msg="StartContainer for \"62832795e52bdaa51e99380abdb01e2bda0a65700a4f68d9ee5a4df22a48818f\" returns successfully" Feb 9 09:02:44.424529 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 09:02:44.691929 kubelet[2206]: W0209 09:02:44.691846 2206 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36987351_2680_450a_9e81_7e555c9656a7.slice/cri-containerd-9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c.scope WatchSource:0}: task 9ae1e9532ab20c97c5c4d80a8b7a9a6104415cffd1fb60f59d9103611339228c not found: not found Feb 9 09:02:45.180839 kubelet[2206]: I0209 09:02:45.180780 2206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-89jrg" podStartSLOduration=5.180723319 podCreationTimestamp="2024-02-09 09:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:02:45.180109136 +0000 UTC m=+1226.763535004" watchObservedRunningTime="2024-02-09 09:02:45.180723319 +0000 UTC m=+1226.764149184" Feb 9 09:02:47.386831 systemd-networkd[1012]: lxc_health: Link UP Feb 9 09:02:47.410311 systemd-networkd[1012]: lxc_health: Gained carrier Feb 9 09:02:47.410528 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:02:47.800953 kubelet[2206]: W0209 09:02:47.800864 2206 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36987351_2680_450a_9e81_7e555c9656a7.slice/cri-containerd-2c4c88041fc46f0e7762dfd563af8d77d3782fb742f4c5c63944aad86adccd01.scope WatchSource:0}: task 2c4c88041fc46f0e7762dfd563af8d77d3782fb742f4c5c63944aad86adccd01 not found: not found Feb 9 
09:02:49.279653 systemd-networkd[1012]: lxc_health: Gained IPv6LL Feb 9 09:02:50.907913 kubelet[2206]: W0209 09:02:50.907803 2206 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36987351_2680_450a_9e81_7e555c9656a7.slice/cri-containerd-6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75.scope WatchSource:0}: task 6a0bb9b06c853167527b3c62e8f30d54cb7dd95d60f5cac5ed7e91d9b7980c75 not found: not found Feb 9 09:02:54.018023 kubelet[2206]: W0209 09:02:54.017910 2206 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36987351_2680_450a_9e81_7e555c9656a7.slice/cri-containerd-969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f.scope WatchSource:0}: task 969f701afd2945690881fb8a6c82d88361ad71fe936b37cb5fb6f4ad7bddec5f not found: not found Feb 9 09:03:08.215468 systemd[1]: Started sshd@114-139.178.90.113:22-218.92.0.29:29015.service. Feb 9 09:03:18.524559 env[1172]: time="2024-02-09T09:03:18.524405220Z" level=info msg="StopPodSandbox for \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\"" Feb 9 09:03:18.525671 env[1172]: time="2024-02-09T09:03:18.524691026Z" level=info msg="TearDown network for sandbox \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\" successfully" Feb 9 09:03:18.525671 env[1172]: time="2024-02-09T09:03:18.524785157Z" level=info msg="StopPodSandbox for \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\" returns successfully" Feb 9 09:03:18.526095 env[1172]: time="2024-02-09T09:03:18.525937654Z" level=info msg="RemovePodSandbox for \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\"" Feb 9 09:03:18.526297 env[1172]: time="2024-02-09T09:03:18.526040973Z" level=info msg="Forcibly stopping sandbox \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\"" Feb 9 09:03:18.526498 env[1172]: time="2024-02-09T09:03:18.526310979Z" level=info msg="TearDown network for sandbox \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\" successfully" Feb 9 09:03:18.531834 env[1172]: time="2024-02-09T09:03:18.531757518Z" level=info msg="RemovePodSandbox \"99677763d555e1b97a3b1f8222a46ec3b2252062cbed80c5efbe822a288a07bf\" returns successfully" Feb 9 09:03:18.532736 env[1172]: time="2024-02-09T09:03:18.532630856Z" level=info msg="StopPodSandbox for \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\"" Feb 9 09:03:18.532965 env[1172]: time="2024-02-09T09:03:18.532818924Z" level=info msg="TearDown network for sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" successfully" Feb 9 09:03:18.532965 env[1172]: time="2024-02-09T09:03:18.532908024Z" level=info msg="StopPodSandbox for \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" returns successfully" Feb 9 09:03:18.533714 env[1172]: time="2024-02-09T09:03:18.533635509Z" level=info msg="RemovePodSandbox for \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\"" Feb 9 09:03:18.533886 env[1172]: time="2024-02-09T09:03:18.533721230Z" level=info msg="Forcibly stopping sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\"" Feb 9 09:03:18.534022 env[1172]: time="2024-02-09T09:03:18.533903741Z" level=info msg="TearDown network for sandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" successfully" Feb 9 09:03:18.538071 env[1172]: time="2024-02-09T09:03:18.537964718Z" 
level=info msg="RemovePodSandbox \"80c780d7a6b1ca8bf99ecf6e407edd8023ab4379c92e8e095a3e75cec8f05283\" returns successfully" Feb 9 09:03:18.538773 env[1172]: time="2024-02-09T09:03:18.538695957Z" level=info msg="StopPodSandbox for \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\"" Feb 9 09:03:18.539161 env[1172]: time="2024-02-09T09:03:18.538939073Z" level=info msg="TearDown network for sandbox \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\" successfully" Feb 9 09:03:18.539161 env[1172]: time="2024-02-09T09:03:18.539073743Z" level=info msg="StopPodSandbox for \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\" returns successfully" Feb 9 09:03:18.539884 env[1172]: time="2024-02-09T09:03:18.539817569Z" level=info msg="RemovePodSandbox for \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\"" Feb 9 09:03:18.540131 env[1172]: time="2024-02-09T09:03:18.539888469Z" level=info msg="Forcibly stopping sandbox \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\"" Feb 9 09:03:18.540329 env[1172]: time="2024-02-09T09:03:18.540156211Z" level=info msg="TearDown network for sandbox \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\" successfully" Feb 9 09:03:18.544446 env[1172]: time="2024-02-09T09:03:18.544373048Z" level=info msg="RemovePodSandbox \"bc909aab32ef3847f0bc3cbb5a6935710d1a8a3c9f8686aa30fa4d2684bdc653\" returns successfully" Feb 9 09:03:38.643303 sshd[6265]: pam_unix(sshd:session): session closed for user core Feb 9 09:03:38.644905 systemd[1]: sshd@113-139.178.90.113:22-147.75.109.163:41094.service: Deactivated successfully. Feb 9 09:03:38.645410 systemd[1]: session-91.scope: Deactivated successfully. Feb 9 09:03:38.645830 systemd-logind[1160]: Session 91 logged out. Waiting for processes to exit. Feb 9 09:03:38.646212 systemd-logind[1160]: Removed session 91.