Feb 9 13:52:21.550291 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Feb 9 13:52:21.550303 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 13:52:21.550311 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 13:52:21.550315 kernel: BIOS-provided physical RAM map:
Feb 9 13:52:21.550318 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 9 13:52:21.550322 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 9 13:52:21.550327 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 9 13:52:21.550331 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 9 13:52:21.550335 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 9 13:52:21.550340 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000061f6efff] usable
Feb 9 13:52:21.550346 kernel: BIOS-e820: [mem 0x0000000061f6f000-0x0000000061f6ffff] ACPI NVS
Feb 9 13:52:21.550350 kernel: BIOS-e820: [mem 0x0000000061f70000-0x0000000061f70fff] reserved
Feb 9 13:52:21.550354 kernel: BIOS-e820: [mem 0x0000000061f71000-0x000000006c0c4fff] usable
Feb 9 13:52:21.550358 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved
Feb 9 13:52:21.550363 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable
Feb 9 13:52:21.550368 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS
Feb 9 13:52:21.550372 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved
Feb 9 13:52:21.550376 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable
Feb 9 13:52:21.550380 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved
Feb 9 13:52:21.550385 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 9 13:52:21.550389 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 9 13:52:21.550393 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 9 13:52:21.550412 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 13:52:21.550417 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 9 13:52:21.550421 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable
Feb 9 13:52:21.550426 kernel: NX (Execute Disable) protection: active
Feb 9 13:52:21.550430 kernel: SMBIOS 3.2.1 present.
Feb 9 13:52:21.550434 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Feb 9 13:52:21.550438 kernel: tsc: Detected 3400.000 MHz processor
Feb 9 13:52:21.550442 kernel: tsc: Detected 3399.906 MHz TSC
Feb 9 13:52:21.550447 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 13:52:21.550451 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 13:52:21.550456 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000
Feb 9 13:52:21.550460 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 13:52:21.550464 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000
Feb 9 13:52:21.550468 kernel: Using GB pages for direct mapping
Feb 9 13:52:21.550473 kernel: ACPI: Early table checksum verification disabled
Feb 9 13:52:21.550478 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 9 13:52:21.550482 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 9 13:52:21.550486 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013)
Feb 9 13:52:21.550492 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 9 13:52:21.550497 kernel: ACPI: FACS 0x000000006D762F80 000040
Feb 9 13:52:21.550502 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013)
Feb 9 13:52:21.550507 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013)
Feb 9 13:52:21.550512 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 9 13:52:21.550516 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 9 13:52:21.550521 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 9 13:52:21.550526 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 9 13:52:21.550530 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 9 13:52:21.550536 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 9 13:52:21.550540 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:52:21.550545 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 9 13:52:21.550549 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 9 13:52:21.550554 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:52:21.550559 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:52:21.550563 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 9 13:52:21.550568 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 9 13:52:21.550572 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:52:21.550578 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:52:21.550582 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 9 13:52:21.550587 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Feb 9 13:52:21.550591 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 9 13:52:21.550596 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 9 13:52:21.550601 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 9 13:52:21.550605 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 \xf0a 01072009 AMI 00010013)
Feb 9 13:52:21.550610 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 9 13:52:21.550615 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 9 13:52:21.550620 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 9 13:52:21.550625 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 9 13:52:21.550629 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 9 13:52:21.550634 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733]
Feb 9 13:52:21.550639 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e]
Feb 9 13:52:21.550643 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf]
Feb 9 13:52:21.550648 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863]
Feb 9 13:52:21.550652 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab]
Feb 9 13:52:21.550658 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b]
Feb 9 13:52:21.550662 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b]
Feb 9 13:52:21.550667 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0]
Feb 9 13:52:21.550672 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3]
Feb 9 13:52:21.550676 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd]
Feb 9 13:52:21.550681 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea]
Feb 9 13:52:21.550685 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27]
Feb 9 13:52:21.550690 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5]
Feb 9 13:52:21.550694 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce]
Feb 9 13:52:21.550700 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311]
Feb 9 13:52:21.550704 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab]
Feb 9 13:52:21.550709 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d]
Feb 9 13:52:21.550713 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071]
Feb 9 13:52:21.550718 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab]
Feb 9 13:52:21.550722 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103]
Feb 9 13:52:21.550727 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e]
Feb 9 13:52:21.550732 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17]
Feb 9 13:52:21.550736 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b]
Feb 9 13:52:21.550741 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93]
Feb 9 13:52:21.550746 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26]
Feb 9 13:52:21.550751 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f]
Feb 9 13:52:21.550755 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f]
Feb 9 13:52:21.550760 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf]
Feb 9 13:52:21.550764 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf]
Feb 9 13:52:21.550769 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b]
Feb 9 13:52:21.550773 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1]
Feb 9 13:52:21.550778 kernel: No NUMA configuration found
Feb 9 13:52:21.550783 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff]
Feb 9 13:52:21.550788 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff]
Feb 9 13:52:21.550793 kernel: Zone ranges:
Feb 9 13:52:21.550797 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 13:52:21.550802 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 13:52:21.550806 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff]
Feb 9 13:52:21.550811 kernel: Movable zone start for each node
Feb 9 13:52:21.550816 kernel: Early memory node ranges
Feb 9 13:52:21.550820 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 9 13:52:21.550825 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 9 13:52:21.550830 kernel: node 0: [mem 0x0000000040400000-0x0000000061f6efff]
Feb 9 13:52:21.550835 kernel: node 0: [mem 0x0000000061f71000-0x000000006c0c4fff]
Feb 9 13:52:21.550839 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff]
Feb 9 13:52:21.550844 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff]
Feb 9 13:52:21.550848 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff]
Feb 9 13:52:21.550853 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff]
Feb 9 13:52:21.550861 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 13:52:21.550866 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 9 13:52:21.550871 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 9 13:52:21.550876 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 9 13:52:21.550882 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Feb 9 13:52:21.550887 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Feb 9 13:52:21.550892 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges
Feb 9 13:52:21.550897 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 9 13:52:21.550902 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 13:52:21.550907 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 13:52:21.550912 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 13:52:21.550917 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 13:52:21.550922 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 13:52:21.550927 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 13:52:21.550932 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 13:52:21.550937 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 13:52:21.550942 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 13:52:21.550946 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 13:52:21.550951 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 13:52:21.550957 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 13:52:21.550962 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 13:52:21.550967 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 13:52:21.550972 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 13:52:21.550977 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 13:52:21.550982 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 9 13:52:21.550987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 13:52:21.550991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 13:52:21.550996 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 13:52:21.551002 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 13:52:21.551007 kernel: TSC deadline timer available
Feb 9 13:52:21.551012 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 9 13:52:21.551017 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices
Feb 9 13:52:21.551022 kernel: Booting paravirtualized kernel on bare hardware
Feb 9 13:52:21.551027 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 13:52:21.551032 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 9 13:52:21.551037 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 13:52:21.551042 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 13:52:21.551047 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 9 13:52:21.551052 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323
Feb 9 13:52:21.551057 kernel: Policy zone: Normal
Feb 9 13:52:21.551063 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 13:52:21.551068 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 13:52:21.551073 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 9 13:52:21.551078 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 9 13:52:21.551083 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 13:52:21.551089 kernel: Memory: 32555728K/33281940K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 725952K reserved, 0K cma-reserved)
Feb 9 13:52:21.551094 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 9 13:52:21.551099 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 13:52:21.551104 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 13:52:21.551109 kernel: rcu: Hierarchical RCU implementation.
Feb 9 13:52:21.551114 kernel: rcu: RCU event tracing is enabled.
Feb 9 13:52:21.551119 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 9 13:52:21.551124 kernel: Rude variant of Tasks RCU enabled.
Feb 9 13:52:21.551129 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 13:52:21.551135 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 13:52:21.551140 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 9 13:52:21.551145 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 9 13:52:21.551149 kernel: random: crng init done
Feb 9 13:52:21.551154 kernel: Console: colour dummy device 80x25
Feb 9 13:52:21.551159 kernel: printk: console [tty0] enabled
Feb 9 13:52:21.551164 kernel: printk: console [ttyS1] enabled
Feb 9 13:52:21.551169 kernel: ACPI: Core revision 20210730
Feb 9 13:52:21.551174 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Feb 9 13:52:21.551180 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 13:52:21.551185 kernel: DMAR: Host address width 39
Feb 9 13:52:21.551190 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Feb 9 13:52:21.551195 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Feb 9 13:52:21.551199 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 9 13:52:21.551204 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 9 13:52:21.551209 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff
Feb 9 13:52:21.551214 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff
Feb 9 13:52:21.551219 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Feb 9 13:52:21.551225 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 9 13:52:21.551230 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 9 13:52:21.551235 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 9 13:52:21.551240 kernel: x2apic enabled
Feb 9 13:52:21.551245 kernel: Switched APIC routing to cluster x2apic.
Feb 9 13:52:21.551249 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 13:52:21.551254 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 9 13:52:21.551259 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 9 13:52:21.551264 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 9 13:52:21.551270 kernel: process: using mwait in idle threads
Feb 9 13:52:21.551275 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 13:52:21.551280 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 13:52:21.551285 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 13:52:21.551290 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 13:52:21.551295 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 13:52:21.551300 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 13:52:21.551305 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 13:52:21.551310 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 13:52:21.551316 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 13:52:21.551321 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 13:52:21.551326 kernel: TAA: Mitigation: TSX disabled
Feb 9 13:52:21.551331 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 9 13:52:21.551336 kernel: SRBDS: Mitigation: Microcode
Feb 9 13:52:21.551340 kernel: GDS: Vulnerable: No microcode
Feb 9 13:52:21.551347 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 13:52:21.551352 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 13:52:21.551357 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 13:52:21.551383 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 13:52:21.551388 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 13:52:21.551393 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 13:52:21.551398 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 13:52:21.551403 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 13:52:21.551408 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 9 13:52:21.551427 kernel: Freeing SMP alternatives memory: 32K
Feb 9 13:52:21.551432 kernel: pid_max: default: 32768 minimum: 301
Feb 9 13:52:21.551437 kernel: LSM: Security Framework initializing
Feb 9 13:52:21.551443 kernel: SELinux: Initializing.
Feb 9 13:52:21.551448 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 13:52:21.551453 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 13:52:21.551458 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 9 13:52:21.551463 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 13:52:21.551468 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 9 13:52:21.551473 kernel: ... version: 4
Feb 9 13:52:21.551478 kernel: ... bit width: 48
Feb 9 13:52:21.551483 kernel: ... generic registers: 4
Feb 9 13:52:21.551488 kernel: ... value mask: 0000ffffffffffff
Feb 9 13:52:21.551493 kernel: ... max period: 00007fffffffffff
Feb 9 13:52:21.551498 kernel: ... fixed-purpose events: 3
Feb 9 13:52:21.551503 kernel: ... event mask: 000000070000000f
Feb 9 13:52:21.551508 kernel: signal: max sigframe size: 2032
Feb 9 13:52:21.551513 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 13:52:21.551518 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 9 13:52:21.551523 kernel: smp: Bringing up secondary CPUs ...
Feb 9 13:52:21.551527 kernel: x86: Booting SMP configuration:
Feb 9 13:52:21.551533 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 9 13:52:21.551538 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 13:52:21.551543 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 9 13:52:21.551548 kernel: smp: Brought up 1 node, 16 CPUs
Feb 9 13:52:21.551553 kernel: smpboot: Max logical packages: 1
Feb 9 13:52:21.551559 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 9 13:52:21.551563 kernel: devtmpfs: initialized
Feb 9 13:52:21.551568 kernel: x86/mm: Memory block size: 128MB
Feb 9 13:52:21.551573 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x61f6f000-0x61f6ffff] (4096 bytes)
Feb 9 13:52:21.551579 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes)
Feb 9 13:52:21.551584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 13:52:21.551589 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 9 13:52:21.551594 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 13:52:21.551599 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 13:52:21.551604 kernel: audit: initializing netlink subsys (disabled)
Feb 9 13:52:21.551609 kernel: audit: type=2000 audit(1707486735.111:1): state=initialized audit_enabled=0 res=1
Feb 9 13:52:21.551614 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 13:52:21.551619 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 13:52:21.551624 kernel: cpuidle: using governor menu
Feb 9 13:52:21.551629 kernel: ACPI: bus type PCI registered
Feb 9 13:52:21.551634 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 13:52:21.551639 kernel: dca service started, version 1.12.1
Feb 9 13:52:21.551644 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 9 13:52:21.551649 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 9 13:52:21.551654 kernel: PCI: Using configuration type 1 for base access
Feb 9 13:52:21.551659 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 9 13:52:21.551665 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 13:52:21.551670 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 13:52:21.551674 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 13:52:21.551679 kernel: ACPI: Added _OSI(Module Device)
Feb 9 13:52:21.551684 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 13:52:21.551689 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 13:52:21.551694 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 13:52:21.551699 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 13:52:21.551704 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 13:52:21.551710 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 13:52:21.551715 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 9 13:52:21.551719 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:52:21.551724 kernel: ACPI: SSDT 0xFFFF8E4B00215900 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 9 13:52:21.551729 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 9 13:52:21.551734 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:52:21.551739 kernel: ACPI: SSDT 0xFFFF8E4B01CEB400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 9 13:52:21.551744 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:52:21.551749 kernel: ACPI: SSDT 0xFFFF8E4B01C58000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 9 13:52:21.551754 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:52:21.551759 kernel: ACPI: SSDT 0xFFFF8E4B01C5F000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 9 13:52:21.551764 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:52:21.551769 kernel: ACPI: SSDT 0xFFFF8E4B0014C000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 9 13:52:21.551774 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:52:21.551779 kernel: ACPI: SSDT 0xFFFF8E4B01CED400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 9 13:52:21.551784 kernel: ACPI: Interpreter enabled
Feb 9 13:52:21.551789 kernel: ACPI: PM: (supports S0 S5)
Feb 9 13:52:21.551794 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 13:52:21.551799 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 9 13:52:21.551804 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 9 13:52:21.551809 kernel: HEST: Table parsing has been initialized.
Feb 9 13:52:21.551814 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 9 13:52:21.551819 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 13:52:21.551824 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 9 13:52:21.551829 kernel: ACPI: PM: Power Resource [USBC]
Feb 9 13:52:21.551834 kernel: ACPI: PM: Power Resource [V0PR]
Feb 9 13:52:21.551839 kernel: ACPI: PM: Power Resource [V1PR]
Feb 9 13:52:21.551844 kernel: ACPI: PM: Power Resource [V2PR]
Feb 9 13:52:21.551849 kernel: ACPI: PM: Power Resource [WRST]
Feb 9 13:52:21.551854 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Feb 9 13:52:21.551859 kernel: ACPI: PM: Power Resource [FN00]
Feb 9 13:52:21.551864 kernel: ACPI: PM: Power Resource [FN01]
Feb 9 13:52:21.551869 kernel: ACPI: PM: Power Resource [FN02]
Feb 9 13:52:21.551874 kernel: ACPI: PM: Power Resource [FN03]
Feb 9 13:52:21.551879 kernel: ACPI: PM: Power Resource [FN04]
Feb 9 13:52:21.551883 kernel: ACPI: PM: Power Resource [PIN]
Feb 9 13:52:21.551888 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 9 13:52:21.551952 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 13:52:21.551996 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 9 13:52:21.552036 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 9 13:52:21.552043 kernel: PCI host bridge to bus 0000:00
Feb 9 13:52:21.552085 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 13:52:21.552121 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 13:52:21.552159 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 13:52:21.552194 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window]
Feb 9 13:52:21.552229 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 9 13:52:21.552264 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 9 13:52:21.552312 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 9 13:52:21.552380 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 9 13:52:21.552436 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 9 13:52:21.552484 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Feb 9 13:52:21.552525 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Feb 9 13:52:21.552570 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Feb 9 13:52:21.552612 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit]
Feb 9 13:52:21.552652 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Feb 9 13:52:21.552692 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Feb 9 13:52:21.552739 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 9 13:52:21.552781 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit]
Feb 9 13:52:21.552824 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 9 13:52:21.552864 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit]
Feb 9 13:52:21.552908 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 9 13:52:21.552948 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit]
Feb 9 13:52:21.552990 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 9 13:52:21.553036 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 9 13:52:21.553076 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit]
Feb 9 13:52:21.553116 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit]
Feb 9 13:52:21.553159 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 9 13:52:21.553199 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 13:52:21.553241 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 9 13:52:21.553283 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 13:52:21.553326 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 9 13:52:21.553386 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit]
Feb 9 13:52:21.553444 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 9 13:52:21.553494 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 9 13:52:21.553536 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit]
Feb 9 13:52:21.553577 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 9 13:52:21.553622 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 9 13:52:21.553661 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit]
Feb 9 13:52:21.553702 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 9 13:52:21.553745 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 9 13:52:21.553786 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff]
Feb 9 13:52:21.553826 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff]
Feb 9 13:52:21.553868 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Feb 9 13:52:21.553907 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Feb 9 13:52:21.553946 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Feb 9 13:52:21.553986 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff]
Feb 9 13:52:21.554025 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 9 13:52:21.554071 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 9 13:52:21.554114 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 9 13:52:21.554161 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 9 13:52:21.554202 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 9 13:52:21.554245 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 9 13:52:21.554288 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 9 13:52:21.554334 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 9 13:52:21.554410 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 9 13:52:21.554453 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Feb 9 13:52:21.554496 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Feb 9 13:52:21.554540 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 9 13:52:21.554581 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 13:52:21.554628 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 9 13:52:21.554672 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 9 13:52:21.554713 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit]
Feb 9 13:52:21.554752 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 9 13:52:21.554797 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 9 13:52:21.554837 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 9 13:52:21.554880 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 13:52:21.554926 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Feb 9 13:52:21.554970 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 9 13:52:21.555012 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref]
Feb 9 13:52:21.555053 kernel: pci 0000:02:00.0: PME# supported from D3cold
Feb 9 13:52:21.555095 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 13:52:21.555136 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 13:52:21.555185 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Feb 9 13:52:21.555227 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 9 13:52:21.555269 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref]
Feb 9 13:52:21.555310 kernel: pci 0000:02:00.1: PME# supported from D3cold
Feb 9 13:52:21.555377 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 13:52:21.555420 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 13:52:21.555462 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Feb 9 13:52:21.555504 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Feb 9 13:52:21.555546 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 13:52:21.555586 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Feb 9 13:52:21.555633 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 9 13:52:21.555676 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff]
Feb 9 13:52:21.555718 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 9 13:52:21.555760 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff]
Feb 9 13:52:21.555803 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 9 13:52:21.555846 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Feb 9 13:52:21.555887 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 9 13:52:21.555928 kernel: pci 0000:00:1b.4: 
bridge window [mem 0x7e400000-0x7e4fffff] Feb 9 13:52:21.555975 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Feb 9 13:52:21.556019 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Feb 9 13:52:21.556061 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Feb 9 13:52:21.556152 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Feb 9 13:52:21.556216 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Feb 9 13:52:21.556257 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 13:52:21.556299 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 13:52:21.556339 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 9 13:52:21.556383 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 13:52:21.556430 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Feb 9 13:52:21.556473 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 9 13:52:21.556515 kernel: pci 0000:07:00.0: supports D1 D2 Feb 9 13:52:21.556559 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 13:52:21.556600 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 13:52:21.556641 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 13:52:21.556682 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 13:52:21.556728 kernel: pci_bus 0000:08: extended config space not accessible Feb 9 13:52:21.556778 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 9 13:52:21.556823 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Feb 9 13:52:21.556870 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Feb 9 13:52:21.556913 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 9 13:52:21.556959 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 13:52:21.557002 kernel: pci 0000:08:00.0: supports D1 D2 Feb 9 13:52:21.557047 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot 
D3cold Feb 9 13:52:21.557089 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 13:52:21.557132 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 9 13:52:21.557176 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 13:52:21.557183 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 9 13:52:21.557189 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 9 13:52:21.557194 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 9 13:52:21.557201 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 9 13:52:21.557206 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 9 13:52:21.557212 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 9 13:52:21.557217 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 9 13:52:21.557222 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 9 13:52:21.557228 kernel: iommu: Default domain type: Translated Feb 9 13:52:21.557234 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 13:52:21.557278 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 9 13:52:21.557322 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 13:52:21.557369 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 9 13:52:21.557377 kernel: vgaarb: loaded Feb 9 13:52:21.557382 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 13:52:21.557388 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 13:52:21.557393 kernel: PTP clock support registered Feb 9 13:52:21.557400 kernel: PCI: Using ACPI for IRQ routing Feb 9 13:52:21.557405 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 13:52:21.557411 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 9 13:52:21.557416 kernel: e820: reserve RAM buffer [mem 0x61f6f000-0x63ffffff] Feb 9 13:52:21.557421 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Feb 9 13:52:21.557427 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Feb 9 13:52:21.557432 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Feb 9 13:52:21.557437 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 9 13:52:21.557443 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 9 13:52:21.557449 kernel: clocksource: Switched to clocksource tsc-early Feb 9 13:52:21.557454 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 13:52:21.557459 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 13:52:21.557465 kernel: pnp: PnP ACPI init Feb 9 13:52:21.557507 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 9 13:52:21.557551 kernel: pnp 00:02: [dma 0 disabled] Feb 9 13:52:21.557592 kernel: pnp 00:03: [dma 0 disabled] Feb 9 13:52:21.557634 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 9 13:52:21.557671 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 9 13:52:21.557710 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 9 13:52:21.557751 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 9 13:52:21.557789 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 9 13:52:21.557826 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 9 13:52:21.557862 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 9 13:52:21.557902 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved 
Feb 9 13:52:21.557938 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 9 13:52:21.557974 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 9 13:52:21.558011 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 9 13:52:21.558050 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 9 13:52:21.558089 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 9 13:52:21.558127 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 9 13:52:21.558164 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 9 13:52:21.558200 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 9 13:52:21.558237 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 9 13:52:21.558274 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 9 13:52:21.558313 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 9 13:52:21.558321 kernel: pnp: PnP ACPI: found 10 devices Feb 9 13:52:21.558326 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 13:52:21.558333 kernel: NET: Registered PF_INET protocol family Feb 9 13:52:21.558339 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 13:52:21.558346 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 13:52:21.558351 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 13:52:21.558357 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 13:52:21.558377 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 13:52:21.558383 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 9 13:52:21.558388 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 13:52:21.558394 kernel: UDP-Lite hash table 
entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 13:52:21.558399 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 13:52:21.558405 kernel: NET: Registered PF_XDP protocol family Feb 9 13:52:21.558446 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Feb 9 13:52:21.558486 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Feb 9 13:52:21.558528 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Feb 9 13:52:21.558569 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 13:52:21.558612 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 13:52:21.558655 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 13:52:21.558700 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 13:52:21.558741 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 13:52:21.558782 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 9 13:52:21.558824 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Feb 9 13:52:21.558865 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 13:52:21.558907 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 9 13:52:21.558948 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 9 13:52:21.558990 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 13:52:21.559030 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Feb 9 13:52:21.559072 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 13:52:21.559112 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 13:52:21.559153 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 9 13:52:21.559193 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 13:52:21.559237 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 13:52:21.559279 kernel: pci 0000:07:00.0: bridge window [io 
0x3000-0x3fff] Feb 9 13:52:21.559322 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 13:52:21.559387 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 13:52:21.559429 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 13:52:21.559471 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 13:52:21.559508 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 9 13:52:21.559544 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 13:52:21.559584 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 13:52:21.559619 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 13:52:21.559655 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Feb 9 13:52:21.559691 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 9 13:52:21.559735 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Feb 9 13:52:21.559775 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 13:52:21.559817 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 9 13:52:21.559858 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Feb 9 13:52:21.559900 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 9 13:52:21.559938 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Feb 9 13:52:21.559979 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 9 13:52:21.560018 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 9 13:52:21.560058 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 9 13:52:21.560097 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 9 13:52:21.560106 kernel: PCI: CLS 64 bytes, default 64 Feb 9 13:52:21.560112 kernel: DMAR: No ATSR found Feb 9 13:52:21.560117 kernel: DMAR: No SATC found Feb 9 13:52:21.560123 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 9 
13:52:21.560128 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 9 13:52:21.560133 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 9 13:52:21.560139 kernel: DMAR: IOMMU feature pasid inconsistent Feb 9 13:52:21.560145 kernel: DMAR: IOMMU feature eafs inconsistent Feb 9 13:52:21.560150 kernel: DMAR: IOMMU feature prs inconsistent Feb 9 13:52:21.560156 kernel: DMAR: IOMMU feature nest inconsistent Feb 9 13:52:21.560162 kernel: DMAR: IOMMU feature mts inconsistent Feb 9 13:52:21.560167 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 9 13:52:21.560172 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 9 13:52:21.560178 kernel: DMAR: dmar0: Using Queued invalidation Feb 9 13:52:21.560183 kernel: DMAR: dmar1: Using Queued invalidation Feb 9 13:52:21.560226 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 9 13:52:21.560267 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 9 13:52:21.560312 kernel: pci 0000:00:01.1: Adding to iommu group 1 Feb 9 13:52:21.560355 kernel: pci 0000:00:02.0: Adding to iommu group 2 Feb 9 13:52:21.560397 kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 9 13:52:21.560439 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 9 13:52:21.560480 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 9 13:52:21.560521 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 9 13:52:21.560561 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 9 13:52:21.560602 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 9 13:52:21.560642 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 9 13:52:21.560686 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 9 13:52:21.560726 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 9 13:52:21.560767 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 9 13:52:21.560807 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 9 13:52:21.560848 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 9 13:52:21.560890 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 9 
13:52:21.560931 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 9 13:52:21.560972 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 9 13:52:21.561014 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 9 13:52:21.561056 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 9 13:52:21.561096 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 9 13:52:21.561138 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 9 13:52:21.561180 kernel: pci 0000:02:00.0: Adding to iommu group 1 Feb 9 13:52:21.561222 kernel: pci 0000:02:00.1: Adding to iommu group 1 Feb 9 13:52:21.561266 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 9 13:52:21.561310 kernel: pci 0000:05:00.0: Adding to iommu group 17 Feb 9 13:52:21.561358 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 9 13:52:21.561402 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 9 13:52:21.561410 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 9 13:52:21.561416 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 13:52:21.561421 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Feb 9 13:52:21.561427 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 9 13:52:21.561432 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 9 13:52:21.561437 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 9 13:52:21.561444 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 9 13:52:21.561449 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 9 13:52:21.561495 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 9 13:52:21.561503 kernel: Initialise system trusted keyrings Feb 9 13:52:21.561509 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 9 13:52:21.561514 kernel: Key type asymmetric registered Feb 9 13:52:21.561519 kernel: Asymmetric key parser 'x509' registered Feb 9 13:52:21.561525 kernel: Block layer 
SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 13:52:21.561532 kernel: io scheduler mq-deadline registered Feb 9 13:52:21.561537 kernel: io scheduler kyber registered Feb 9 13:52:21.561542 kernel: io scheduler bfq registered Feb 9 13:52:21.561583 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 9 13:52:21.561625 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 9 13:52:21.561666 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 9 13:52:21.561707 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 9 13:52:21.561749 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 9 13:52:21.561791 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 9 13:52:21.561833 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 9 13:52:21.561878 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 9 13:52:21.561886 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 9 13:52:21.561892 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 9 13:52:21.561897 kernel: pstore: Registered erst as persistent store backend Feb 9 13:52:21.561903 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 13:52:21.561908 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 13:52:21.561915 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 13:52:21.561920 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 13:52:21.561962 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 9 13:52:21.561970 kernel: i8042: PNP: No PS/2 controller found. 
Feb 9 13:52:21.562007 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 9 13:52:21.562046 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 9 13:52:21.562082 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T13:52:20 UTC (1707486740) Feb 9 13:52:21.562120 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 9 13:52:21.562128 kernel: fail to initialize ptp_kvm Feb 9 13:52:21.562134 kernel: intel_pstate: Intel P-state driver initializing Feb 9 13:52:21.562139 kernel: intel_pstate: Disabling energy efficiency optimization Feb 9 13:52:21.562145 kernel: intel_pstate: HWP enabled Feb 9 13:52:21.562150 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 9 13:52:21.562156 kernel: vesafb: scrolling: redraw Feb 9 13:52:21.562161 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 9 13:52:21.562167 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x000000008330e5ee, using 768k, total 768k Feb 9 13:52:21.562173 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 13:52:21.562178 kernel: fb0: VESA VGA frame buffer device Feb 9 13:52:21.562184 kernel: NET: Registered PF_INET6 protocol family Feb 9 13:52:21.562189 kernel: Segment Routing with IPv6 Feb 9 13:52:21.562194 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 13:52:21.562200 kernel: NET: Registered PF_PACKET protocol family Feb 9 13:52:21.562205 kernel: Key type dns_resolver registered Feb 9 13:52:21.562210 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 9 13:52:21.562216 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 9 13:52:21.562221 kernel: IPI shorthand broadcast: enabled Feb 9 13:52:21.562227 kernel: sched_clock: Marking stable (1838902290, 1353732810)->(4615520075, -1422884975) Feb 9 13:52:21.562233 kernel: registered taskstats version 1 Feb 9 13:52:21.562238 kernel: Loading compiled-in X.509 certificates Feb 9 13:52:21.562243 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 13:52:21.562249 kernel: Key type .fscrypt registered Feb 9 13:52:21.562254 kernel: Key type fscrypt-provisioning registered Feb 9 13:52:21.562260 kernel: pstore: Using crash dump compression: deflate Feb 9 13:52:21.562265 kernel: ima: Allocated hash algorithm: sha1 Feb 9 13:52:21.562271 kernel: ima: No architecture policies found Feb 9 13:52:21.562277 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 13:52:21.562282 kernel: Write protecting the kernel read-only data: 28672k Feb 9 13:52:21.562287 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 13:52:21.562293 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 13:52:21.562298 kernel: Run /init as init process Feb 9 13:52:21.562304 kernel: with arguments: Feb 9 13:52:21.562309 kernel: /init Feb 9 13:52:21.562314 kernel: with environment: Feb 9 13:52:21.562320 kernel: HOME=/ Feb 9 13:52:21.562326 kernel: TERM=linux Feb 9 13:52:21.562331 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 13:52:21.562337 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 13:52:21.562346 systemd[1]: Detected architecture x86-64. Feb 9 13:52:21.562352 systemd[1]: Running in initrd. 
Feb 9 13:52:21.562358 systemd[1]: No hostname configured, using default hostname. Feb 9 13:52:21.562363 systemd[1]: Hostname set to . Feb 9 13:52:21.562370 systemd[1]: Initializing machine ID from random generator. Feb 9 13:52:21.562375 systemd[1]: Queued start job for default target initrd.target. Feb 9 13:52:21.562381 systemd[1]: Started systemd-ask-password-console.path. Feb 9 13:52:21.562386 systemd[1]: Reached target cryptsetup.target. Feb 9 13:52:21.562392 systemd[1]: Reached target paths.target. Feb 9 13:52:21.562397 systemd[1]: Reached target slices.target. Feb 9 13:52:21.562403 systemd[1]: Reached target swap.target. Feb 9 13:52:21.562408 systemd[1]: Reached target timers.target. Feb 9 13:52:21.562415 systemd[1]: Listening on iscsid.socket. Feb 9 13:52:21.562421 systemd[1]: Listening on iscsiuio.socket. Feb 9 13:52:21.562426 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 13:52:21.562432 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 13:52:21.562437 systemd[1]: Listening on systemd-journald.socket. Feb 9 13:52:21.562443 systemd[1]: Listening on systemd-networkd.socket. Feb 9 13:52:21.562449 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 13:52:21.562454 kernel: tsc: Refined TSC clocksource calibration: 3408.013 MHz Feb 9 13:52:21.562460 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fe02c9f5, max_idle_ns: 440795207564 ns Feb 9 13:52:21.562466 kernel: clocksource: Switched to clocksource tsc Feb 9 13:52:21.562471 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 13:52:21.562477 systemd[1]: Reached target sockets.target. Feb 9 13:52:21.562482 systemd[1]: Starting kmod-static-nodes.service... Feb 9 13:52:21.562488 systemd[1]: Finished network-cleanup.service. Feb 9 13:52:21.562494 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 13:52:21.562499 systemd[1]: Starting systemd-journald.service... Feb 9 13:52:21.562505 systemd[1]: Starting systemd-modules-load.service... 
Feb 9 13:52:21.562513 systemd-journald[268]: Journal started Feb 9 13:52:21.562538 systemd-journald[268]: Runtime Journal (/run/log/journal/043f322064bd452c811f00ff5ea1ee56) is 8.0M, max 636.8M, 628.8M free. Feb 9 13:52:21.564795 systemd-modules-load[269]: Inserted module 'overlay' Feb 9 13:52:21.569000 audit: BPF prog-id=6 op=LOAD Feb 9 13:52:21.588400 kernel: audit: type=1334 audit(1707486741.569:2): prog-id=6 op=LOAD Feb 9 13:52:21.588415 systemd[1]: Starting systemd-resolved.service... Feb 9 13:52:21.636374 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 13:52:21.636388 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 13:52:21.652393 kernel: Bridge firewalling registered Feb 9 13:52:21.667398 systemd[1]: Started systemd-journald.service. Feb 9 13:52:21.681526 systemd-modules-load[269]: Inserted module 'br_netfilter' Feb 9 13:52:21.730665 kernel: audit: type=1130 audit(1707486741.688:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:21.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:21.688334 systemd-resolved[271]: Positive Trust Anchors: Feb 9 13:52:21.806259 kernel: SCSI subsystem initialized Feb 9 13:52:21.806274 kernel: audit: type=1130 audit(1707486741.741:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:21.806282 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 9 13:52:21.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:21.688339 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 13:52:21.905836 kernel: device-mapper: uevent: version 1.0.3 Feb 9 13:52:21.905847 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 13:52:21.905878 kernel: audit: type=1130 audit(1707486741.862:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:21.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:21.688362 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 13:52:21.979531 kernel: audit: type=1130 audit(1707486741.913:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:21.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:52:21.689633 systemd[1]: Finished kmod-static-nodes.service. Feb 9 13:52:21.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:21.689889 systemd-resolved[271]: Defaulting to hostname 'linux'. Feb 9 13:52:22.087214 kernel: audit: type=1130 audit(1707486741.986:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:22.087230 kernel: audit: type=1130 audit(1707486742.040:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:22.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:21.742490 systemd[1]: Started systemd-resolved.service. Feb 9 13:52:21.863735 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 13:52:21.906344 systemd-modules-load[269]: Inserted module 'dm_multipath' Feb 9 13:52:21.914649 systemd[1]: Finished systemd-modules-load.service. Feb 9 13:52:21.987789 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 13:52:22.062212 systemd[1]: Reached target nss-lookup.target. Feb 9 13:52:22.096024 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 13:52:22.115953 systemd[1]: Starting systemd-sysctl.service... Feb 9 13:52:22.116246 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 13:52:22.119199 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 13:52:22.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:22.119966 systemd[1]: Finished systemd-sysctl.service. Feb 9 13:52:22.168543 kernel: audit: type=1130 audit(1707486742.117:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:22.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:22.180661 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 13:52:22.245435 kernel: audit: type=1130 audit(1707486742.179:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:22.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:22.237994 systemd[1]: Starting dracut-cmdline.service... 
Feb 9 13:52:22.259447 dracut-cmdline[295]: dracut-dracut-053 Feb 9 13:52:22.259447 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 13:52:22.259447 dracut-cmdline[295]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 13:52:22.390430 kernel: Loading iSCSI transport class v2.0-870. Feb 9 13:52:22.390450 kernel: iscsi: registered transport (tcp) Feb 9 13:52:22.390457 kernel: iscsi: registered transport (qla4xxx) Feb 9 13:52:22.390464 kernel: QLogic iSCSI HBA Driver Feb 9 13:52:22.396857 systemd[1]: Finished dracut-cmdline.service. Feb 9 13:52:22.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:22.406092 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 13:52:22.461413 kernel: raid6: avx2x4 gen() 47780 MB/s Feb 9 13:52:22.496408 kernel: raid6: avx2x4 xor() 21139 MB/s Feb 9 13:52:22.531409 kernel: raid6: avx2x2 gen() 53874 MB/s Feb 9 13:52:22.566409 kernel: raid6: avx2x2 xor() 32110 MB/s Feb 9 13:52:22.601410 kernel: raid6: avx2x1 gen() 44847 MB/s Feb 9 13:52:22.636379 kernel: raid6: avx2x1 xor() 27366 MB/s Feb 9 13:52:22.671413 kernel: raid6: sse2x4 gen() 20920 MB/s Feb 9 13:52:22.704391 kernel: raid6: sse2x4 xor() 11599 MB/s Feb 9 13:52:22.738415 kernel: raid6: sse2x2 gen() 21234 MB/s Feb 9 13:52:22.772410 kernel: raid6: sse2x2 xor() 13159 MB/s Feb 9 13:52:22.806378 kernel: raid6: sse2x1 gen() 17942 MB/s Feb 9 13:52:22.858068 kernel: raid6: sse2x1 xor() 8752 MB/s Feb 9 13:52:22.858083 kernel: raid6: using algorithm avx2x2 gen() 53874 MB/s Feb 9 13:52:22.858091 kernel: raid6: .... xor() 32110 MB/s, rmw enabled Feb 9 13:52:22.876174 kernel: raid6: using avx2x2 recovery algorithm Feb 9 13:52:22.922388 kernel: xor: automatically using best checksumming function avx Feb 9 13:52:23.000402 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 13:52:23.005725 systemd[1]: Finished dracut-pre-udev.service. Feb 9 13:52:23.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:23.013000 audit: BPF prog-id=7 op=LOAD Feb 9 13:52:23.013000 audit: BPF prog-id=8 op=LOAD Feb 9 13:52:23.015317 systemd[1]: Starting systemd-udevd.service... Feb 9 13:52:23.023067 systemd-udevd[475]: Using default interface naming scheme 'v252'. Feb 9 13:52:23.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:23.028602 systemd[1]: Started systemd-udevd.service. 
Feb 9 13:52:23.067471 dracut-pre-trigger[489]: rd.md=0: removing MD RAID activation Feb 9 13:52:23.044156 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 13:52:23.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:23.071761 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 13:52:23.084306 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 13:52:23.134565 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 13:52:23.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:23.161720 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 13:52:23.179916 kernel: libata version 3.00 loaded. Feb 9 13:52:23.179947 kernel: ACPI: bus type USB registered Feb 9 13:52:23.198102 kernel: usbcore: registered new interface driver usbfs Feb 9 13:52:23.215923 kernel: usbcore: registered new interface driver hub Feb 9 13:52:23.215951 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 13:52:23.215959 kernel: usbcore: registered new device driver usb Feb 9 13:52:23.250358 kernel: AES CTR mode by8 optimization enabled Feb 9 13:52:23.267349 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 13:52:23.300962 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Feb 9 13:52:23.321255 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 13:52:23.321355 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 9 13:52:23.321418 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 13:52:23.345380 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 13:52:23.345459 kernel: pps pps0: new PPS source ptp0 Feb 9 13:52:23.345518 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Feb 9 13:52:23.345576 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 13:52:23.364348 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 13:52:23.364444 kernel: scsi host0: ahci Feb 9 13:52:23.364510 kernel: scsi host1: ahci Feb 9 13:52:23.364568 kernel: scsi host2: ahci Feb 9 13:52:23.364648 kernel: scsi host3: ahci Feb 9 13:52:23.364714 kernel: scsi host4: ahci Feb 9 13:52:23.364765 kernel: scsi host5: ahci Feb 9 13:52:23.364841 kernel: scsi host6: ahci Feb 9 13:52:23.364905 kernel: scsi host7: ahci Feb 9 13:52:23.364959 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129 Feb 9 13:52:23.364967 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129 Feb 9 13:52:23.364973 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129 Feb 9 13:52:23.364980 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Feb 9 13:52:23.364987 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Feb 9 13:52:23.364993 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129 Feb 9 13:52:23.365000 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Feb 9 13:52:23.365006 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Feb 9 13:52:23.375156 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 9 13:52:23.429400 kernel: xhci_hcd 
0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 13:52:23.429470 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 13:52:23.442402 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 13:52:23.442471 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:66 Feb 9 13:52:23.454910 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 13:52:23.478602 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 9 13:52:23.478673 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 9 13:52:23.489755 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 13:52:23.510701 kernel: hub 1-0:1.0: USB hub found Feb 9 13:52:23.562409 kernel: pps pps1: new PPS source ptp1 Feb 9 13:52:23.562480 kernel: hub 1-0:1.0: 16 ports detected Feb 9 13:52:23.562540 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 9 13:52:23.602524 kernel: hub 2-0:1.0: USB hub found Feb 9 13:52:23.602604 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 13:52:23.602663 kernel: hub 2-0:1.0: 10 ports detected Feb 9 13:52:23.609403 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 13:52:23.615690 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:67 Feb 9 13:52:23.639073 kernel: usb: port power management may be unreliable Feb 9 13:52:23.654417 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 9 13:52:23.693148 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 13:52:23.693165 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 9 13:52:23.693237 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 13:52:23.720382 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 13:52:23.720452 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 13:52:23.833396 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 13:52:23.833422 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 13:52:23.935413 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 13:52:23.935485 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 9 13:52:23.965399 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Feb 9 13:52:23.965476 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 13:52:23.996169 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 13:52:23.996240 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 13:52:24.004347 kernel: hub 1-14:1.0: USB hub found Feb 9 13:52:24.004425 kernel: hub 1-14:1.0: 4 ports detected Feb 9 13:52:24.116385 kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 9 13:52:24.131417 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 13:52:24.149367 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 13:52:24.165416 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 13:52:24.195818 kernel: ata1.00: Features: NCQ-prio Feb 9 13:52:24.230303 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 13:52:24.230316 kernel: ata2.00: Features: NCQ-prio Feb 9 13:52:24.246378 kernel: ata1.00: configured for UDMA/133 Feb 9 13:52:24.246391 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 13:52:24.265347 kernel: ata2.00: configured for UDMA/133 Feb 9 13:52:24.280366 kernel: scsi 1:0:0:0: Direct-Access ATA 
Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 13:52:24.298413 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 13:52:24.298482 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 9 13:52:24.356349 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 9 13:52:24.356457 kernel: port_module: 9 callbacks suppressed Feb 9 13:52:24.356467 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 9 13:52:24.420351 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 9 13:52:24.420437 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:52:24.436123 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 13:52:24.451712 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 13:52:24.451799 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 13:52:24.488502 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 13:52:24.488582 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 13:52:24.488640 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 9 13:52:24.504355 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 13:52:24.519777 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 9 13:52:24.535628 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 13:52:24.551024 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 13:52:24.551100 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 13:52:24.553349 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 13:52:24.571348 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 13:52:24.571363 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:52:24.573346 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 9 13:52:24.573360 kernel: GPT:9289727 != 937703087 Feb 9 13:52:24.573367 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 13:52:24.573373 kernel: GPT:9289727 != 937703087 Feb 9 13:52:24.573379 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 13:52:24.573385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:52:24.573392 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:52:24.573398 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 13:52:24.610421 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 13:52:24.761801 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 13:52:24.776407 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 13:52:24.809720 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 9 13:52:24.842351 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth0 Feb 9 13:52:24.852826 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 13:52:24.951568 kernel: usbcore: registered new interface driver usbhid Feb 9 13:52:24.951581 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (541) Feb 9 13:52:24.951588 kernel: usbhid: USB HID core driver Feb 9 13:52:24.951598 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 13:52:24.951605 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth2 Feb 9 13:52:24.911436 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 13:52:24.929545 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Feb 9 13:52:25.031437 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 13:52:25.031527 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 13:52:24.970274 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 13:52:25.110474 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 13:52:25.110553 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:52:25.110561 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:52:24.992624 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 13:52:25.147412 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:52:25.147443 kernel: GPT:disk_guids don't match. Feb 9 13:52:25.066910 systemd[1]: Starting disk-uuid.service... Feb 9 13:52:25.196438 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 13:52:25.196449 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:52:25.196456 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:52:25.196498 disk-uuid[693]: Primary Header is updated. Feb 9 13:52:25.196498 disk-uuid[693]: Secondary Entries is updated. Feb 9 13:52:25.196498 disk-uuid[693]: Secondary Header is updated. Feb 9 13:52:25.233436 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:52:26.185194 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:52:26.203390 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:52:26.203406 disk-uuid[694]: The operation has completed successfully. Feb 9 13:52:26.240158 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 13:52:26.335575 kernel: audit: type=1130 audit(1707486746.247:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:52:26.335589 kernel: audit: type=1131 audit(1707486746.247:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.240201 systemd[1]: Finished disk-uuid.service. Feb 9 13:52:26.365378 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 13:52:26.251888 systemd[1]: Starting verity-setup.service... Feb 9 13:52:26.397354 systemd[1]: Found device dev-mapper-usr.device. Feb 9 13:52:26.406350 systemd[1]: Mounting sysusr-usr.mount... Feb 9 13:52:26.424648 systemd[1]: Finished verity-setup.service. Feb 9 13:52:26.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.479349 kernel: audit: type=1130 audit(1707486746.432:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.508284 systemd[1]: Mounted sysusr-usr.mount. Feb 9 13:52:26.523559 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 13:52:26.516613 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 13:52:26.517017 systemd[1]: Starting ignition-setup.service... 
Feb 9 13:52:26.615441 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 13:52:26.615455 kernel: BTRFS info (device sda6): using free space tree Feb 9 13:52:26.615463 kernel: BTRFS info (device sda6): has skinny extents Feb 9 13:52:26.615470 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 13:52:26.523931 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 13:52:26.672394 kernel: audit: type=1130 audit(1707486746.623:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.608313 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 13:52:26.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.624888 systemd[1]: Finished ignition-setup.service. Feb 9 13:52:26.762247 kernel: audit: type=1130 audit(1707486746.680:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.762261 kernel: audit: type=1334 audit(1707486746.738:24): prog-id=9 op=LOAD Feb 9 13:52:26.738000 audit: BPF prog-id=9 op=LOAD Feb 9 13:52:26.682028 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 13:52:26.740205 systemd[1]: Starting systemd-networkd.service... Feb 9 13:52:26.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:52:26.805158 ignition[871]: Ignition 2.14.0 Feb 9 13:52:26.840414 kernel: audit: type=1130 audit(1707486746.776:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.775514 systemd-networkd[881]: lo: Link UP Feb 9 13:52:26.805163 ignition[871]: Stage: fetch-offline Feb 9 13:52:26.775516 systemd-networkd[881]: lo: Gained carrier Feb 9 13:52:26.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.805190 ignition[871]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 13:52:26.999106 kernel: audit: type=1130 audit(1707486746.865:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.999123 kernel: audit: type=1130 audit(1707486746.924:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.999131 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 13:52:26.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.775801 systemd-networkd[881]: Enumeration completed Feb 9 13:52:27.030579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Feb 9 13:52:26.805203 ignition[871]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 13:52:26.775877 systemd[1]: Started systemd-networkd.service. 
Feb 9 13:52:26.813889 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 13:52:26.776386 systemd-networkd[881]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 13:52:27.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.813953 ignition[871]: parsed url from cmdline: "" Feb 9 13:52:27.071495 iscsid[907]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 13:52:27.071495 iscsid[907]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 13:52:27.071495 iscsid[907]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 13:52:27.071495 iscsid[907]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 13:52:27.071495 iscsid[907]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 13:52:27.071495 iscsid[907]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 13:52:27.071495 iscsid[907]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 13:52:27.239441 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 9 13:52:27.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.777517 systemd[1]: Reached target network.target. 
Feb 9 13:52:27.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:26.813956 ignition[871]: no config URL provided Feb 9 13:52:26.829138 systemd[1]: Starting iscsiuio.service... Feb 9 13:52:26.813959 ignition[871]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 13:52:26.835853 unknown[871]: fetched base config from "system" Feb 9 13:52:26.813988 ignition[871]: parsing config with SHA512: 73fb58585ba4f77416610bcbe38dac8b0ab4e32f94c3ff5b292d57dd59330f3479d66f943e537e7c45b96b4930c35a85b2d5095360a5e18bfe7ffa2bffc2827e Feb 9 13:52:26.835857 unknown[871]: fetched user config from "system" Feb 9 13:52:26.836253 ignition[871]: fetch-offline: fetch-offline passed Feb 9 13:52:26.848562 systemd[1]: Started iscsiuio.service. Feb 9 13:52:26.836256 ignition[871]: POST message to Packet Timeline Feb 9 13:52:26.866697 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 13:52:26.836261 ignition[871]: POST Status error: resource requires networking Feb 9 13:52:26.925658 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 13:52:26.836294 ignition[871]: Ignition finished successfully Feb 9 13:52:26.926165 systemd[1]: Starting ignition-kargs.service... Feb 9 13:52:27.003411 ignition[896]: Ignition 2.14.0 Feb 9 13:52:27.000463 systemd-networkd[881]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 13:52:27.003415 ignition[896]: Stage: kargs Feb 9 13:52:27.012933 systemd[1]: Starting iscsid.service... Feb 9 13:52:27.003501 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 13:52:27.037575 systemd[1]: Started iscsid.service. 
Feb 9 13:52:27.003509 ignition[896]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 13:52:27.060049 systemd[1]: Starting dracut-initqueue.service... Feb 9 13:52:27.005776 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 13:52:27.078603 systemd[1]: Finished dracut-initqueue.service. Feb 9 13:52:27.006546 ignition[896]: kargs: kargs passed Feb 9 13:52:27.123687 systemd[1]: Reached target remote-fs-pre.target. Feb 9 13:52:27.006549 ignition[896]: POST message to Packet Timeline Feb 9 13:52:27.152569 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 13:52:27.006559 ignition[896]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 13:52:27.175983 systemd[1]: Reached target remote-fs.target. Feb 9 13:52:27.008527 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54814->[::1]:53: read: connection refused Feb 9 13:52:27.186137 systemd[1]: Starting dracut-pre-mount.service... Feb 9 13:52:27.208920 ignition[896]: GET https://metadata.packet.net/metadata: attempt #2 Feb 9 13:52:27.223381 systemd-networkd[881]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 13:52:27.209287 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60008->[::1]:53: read: connection refused Feb 9 13:52:27.228696 systemd[1]: Finished dracut-pre-mount.service. Feb 9 13:52:27.251615 systemd-networkd[881]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 13:52:27.279937 systemd-networkd[881]: enp2s0f1np1: Link UP Feb 9 13:52:27.280099 systemd-networkd[881]: enp2s0f1np1: Gained carrier Feb 9 13:52:27.290617 systemd-networkd[881]: enp2s0f0np0: Link UP Feb 9 13:52:27.290787 systemd-networkd[881]: eno2: Link UP Feb 9 13:52:27.290944 systemd-networkd[881]: eno1: Link UP Feb 9 13:52:27.610141 ignition[896]: GET https://metadata.packet.net/metadata: attempt #3 Feb 9 13:52:27.611240 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36788->[::1]:53: read: connection refused Feb 9 13:52:28.025713 systemd-networkd[881]: enp2s0f0np0: Gained carrier Feb 9 13:52:28.034587 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Feb 9 13:52:28.058559 systemd-networkd[881]: enp2s0f0np0: DHCPv4 address 86.109.11.101/31, gateway 86.109.11.100 acquired from 145.40.83.140 Feb 9 13:52:28.411661 ignition[896]: GET https://metadata.packet.net/metadata: attempt #4 Feb 9 13:52:28.412785 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51223->[::1]:53: read: connection refused Feb 9 13:52:28.706857 systemd-networkd[881]: enp2s0f1np1: Gained IPv6LL Feb 9 13:52:29.602801 systemd-networkd[881]: enp2s0f0np0: Gained IPv6LL Feb 9 13:52:30.014679 ignition[896]: GET https://metadata.packet.net/metadata: attempt #5 Feb 9 13:52:30.015940 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39769->[::1]:53: read: connection refused Feb 9 13:52:33.216435 ignition[896]: GET https://metadata.packet.net/metadata: attempt #6 Feb 9 13:52:33.258000 ignition[896]: GET result: OK Feb 9 13:52:33.485308 ignition[896]: Ignition finished successfully Feb 9 13:52:33.486635 systemd[1]: Finished ignition-kargs.service. 
Feb 9 13:52:33.575251 kernel: kauditd_printk_skb: 3 callbacks suppressed
Feb 9 13:52:33.575270 kernel: audit: type=1130 audit(1707486753.499:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:33.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:33.507500 ignition[925]: Ignition 2.14.0
Feb 9 13:52:33.501302 systemd[1]: Starting ignition-disks.service...
Feb 9 13:52:33.507503 ignition[925]: Stage: disks
Feb 9 13:52:33.507576 ignition[925]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 13:52:33.507585 ignition[925]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 13:52:33.509039 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 13:52:33.510866 ignition[925]: disks: disks passed
Feb 9 13:52:33.510868 ignition[925]: POST message to Packet Timeline
Feb 9 13:52:33.510879 ignition[925]: GET https://metadata.packet.net/metadata: attempt #1
Feb 9 13:52:33.534773 ignition[925]: GET result: OK
Feb 9 13:52:33.783568 ignition[925]: Ignition finished successfully
Feb 9 13:52:33.786600 systemd[1]: Finished ignition-disks.service.
Feb 9 13:52:33.853355 kernel: audit: type=1130 audit(1707486753.797:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:33.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:33.798876 systemd[1]: Reached target initrd-root-device.target.
Feb 9 13:52:33.861587 systemd[1]: Reached target local-fs-pre.target.
Feb 9 13:52:33.875550 systemd[1]: Reached target local-fs.target.
Feb 9 13:52:33.875584 systemd[1]: Reached target sysinit.target.
Feb 9 13:52:33.899553 systemd[1]: Reached target basic.target.
Feb 9 13:52:33.913222 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 13:52:33.934950 systemd-fsck[939]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 9 13:52:33.946852 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 13:52:34.036939 kernel: audit: type=1130 audit(1707486753.954:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.036955 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 13:52:33.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:33.960909 systemd[1]: Mounting sysroot.mount...
Feb 9 13:52:34.043968 systemd[1]: Mounted sysroot.mount.
Feb 9 13:52:34.057593 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 13:52:34.079290 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 13:52:34.087177 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 13:52:34.103081 systemd[1]: Starting flatcar-static-network.service...
Feb 9 13:52:34.118591 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 13:52:34.118679 systemd[1]: Reached target ignition-diskful.target.
Feb 9 13:52:34.137113 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 13:52:34.160507 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 13:52:34.233468 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (950)
Feb 9 13:52:34.233485 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 13:52:34.173055 systemd[1]: Starting initrd-setup-root.service...
Feb 9 13:52:34.302558 kernel: BTRFS info (device sda6): using free space tree
Feb 9 13:52:34.302579 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 13:52:34.302587 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 9 13:52:34.302598 initrd-setup-root[957]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 13:52:34.363338 kernel: audit: type=1130 audit(1707486754.309:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.363457 coreos-metadata[947]: Feb 09 13:52:34.255 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 13:52:34.363457 coreos-metadata[947]: Feb 09 13:52:34.275 INFO Fetch successful
Feb 9 13:52:34.546742 kernel: audit: type=1130 audit(1707486754.370:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.546828 kernel: audit: type=1130 audit(1707486754.433:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.546836 kernel: audit: type=1131 audit(1707486754.433:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.546892 coreos-metadata[946]: Feb 09 13:52:34.249 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 13:52:34.546892 coreos-metadata[946]: Feb 09 13:52:34.271 INFO Fetch successful
Feb 9 13:52:34.546892 coreos-metadata[946]: Feb 09 13:52:34.289 INFO wrote hostname ci-3510.3.2-a-2834128369 to /sysroot/etc/hostname
Feb 9 13:52:34.239770 systemd[1]: Finished initrd-setup-root.service.
Feb 9 13:52:34.611514 initrd-setup-root[965]: cut: /sysroot/etc/group: No such file or directory
Feb 9 13:52:34.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.311667 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 13:52:34.686577 kernel: audit: type=1130 audit(1707486754.618:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.686596 initrd-setup-root[973]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 13:52:34.371646 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 9 13:52:34.707589 initrd-setup-root[981]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 13:52:34.371684 systemd[1]: Finished flatcar-static-network.service.
Feb 9 13:52:34.726605 ignition[1024]: INFO : Ignition 2.14.0
Feb 9 13:52:34.726605 ignition[1024]: INFO : Stage: mount
Feb 9 13:52:34.726605 ignition[1024]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 13:52:34.726605 ignition[1024]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 13:52:34.726605 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 13:52:34.726605 ignition[1024]: INFO : mount: mount passed
Feb 9 13:52:34.726605 ignition[1024]: INFO : POST message to Packet Timeline
Feb 9 13:52:34.726605 ignition[1024]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 13:52:34.726605 ignition[1024]: INFO : GET result: OK
Feb 9 13:52:34.434593 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 13:52:34.555912 systemd[1]: Starting ignition-mount.service...
Feb 9 13:52:34.583919 systemd[1]: Starting sysroot-boot.service...
Feb 9 13:52:34.605165 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 13:52:34.605205 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 13:52:34.605801 systemd[1]: Finished sysroot-boot.service.
Feb 9 13:52:34.895660 ignition[1024]: INFO : Ignition finished successfully
Feb 9 13:52:34.898192 systemd[1]: Finished ignition-mount.service.
Feb 9 13:52:34.979375 kernel: audit: type=1130 audit(1707486754.912:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:34.915265 systemd[1]: Starting ignition-files.service...
Feb 9 13:52:34.988201 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 13:52:35.039447 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1039)
Feb 9 13:52:35.039459 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 13:52:35.075355 kernel: BTRFS info (device sda6): using free space tree
Feb 9 13:52:35.075393 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 13:52:35.124394 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 9 13:52:35.125767 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 13:52:35.142506 ignition[1058]: INFO : Ignition 2.14.0
Feb 9 13:52:35.142506 ignition[1058]: INFO : Stage: files
Feb 9 13:52:35.142506 ignition[1058]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 13:52:35.142506 ignition[1058]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 13:52:35.142506 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 13:52:35.142506 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 13:52:35.142506 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 13:52:35.142506 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 13:52:35.144918 unknown[1058]: wrote ssh authorized keys file for user: core
Feb 9 13:52:35.246644 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 13:52:35.246644 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 13:52:35.246644 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 13:52:35.246644 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 13:52:35.246644 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 13:52:35.314464 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 13:52:35.314464 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 13:52:35.314464 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 13:52:35.314464 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 13:52:35.314464 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 13:52:35.314464 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 13:52:35.797661 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 13:52:35.878022 ignition[1058]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 13:52:35.878022 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 13:52:35.920556 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 13:52:35.920556 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 13:52:36.321496 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 13:52:36.371275 ignition[1058]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 13:52:36.395633 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 13:52:36.395633 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 13:52:36.395633 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 9 13:52:36.445550 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 13:52:36.674275 ignition[1058]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 9 13:52:36.674275 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 13:52:36.674275 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 13:52:36.731496 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 13:52:36.731496 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 13:52:39.533542 ignition[1058]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 13:52:39.557676 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 13:52:39.557676 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 13:52:39.557676 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 13:52:39.606596 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 13:52:40.856763 ignition[1058]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 13:52:40.881655 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 13:52:40.881655 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 13:52:40.881655 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 13:52:40.881655 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 13:52:40.881655 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 9 13:52:41.270622 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 9 13:52:41.322669 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 9 13:52:41.338693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 13:52:41.558665 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1081)
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem769151543"
Feb 9 13:52:41.558764 ignition[1058]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem769151543": device or resource busy
Feb 9 13:52:41.558764 ignition[1058]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem769151543", trying btrfs: device or resource busy
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem769151543"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem769151543"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem769151543"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem769151543"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: op(16): [started] processing unit "packet-phone-home.service"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: op(16): [finished] processing unit "packet-phone-home.service"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: op(17): [started] processing unit "containerd.service"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: op(17): [finished] processing unit "containerd.service"
Feb 9 13:52:41.558764 ignition[1058]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service"
Feb 9 13:52:42.262546 kernel: audit: type=1130 audit(1707486761.642:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.262563 kernel: audit: type=1130 audit(1707486761.758:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.262572 kernel: audit: type=1130 audit(1707486761.826:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.262582 kernel: audit: type=1131 audit(1707486761.826:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.262590 kernel: audit: type=1130 audit(1707486761.998:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.262596 kernel: audit: type=1131 audit(1707486761.998:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.262603 kernel: audit: type=1130 audit(1707486762.182:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1b): [started] processing unit "prepare-critools.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1d): [started] processing unit "prepare-helm.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(1f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(20): [started] setting preset to enabled for "packet-phone-home.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(20): [finished] setting preset to enabled for "packet-phone-home.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 13:52:42.262730 ignition[1058]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 13:52:42.844585 kernel: audit: type=1131 audit(1707486762.342:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.844607 kernel: audit: type=1131 audit(1707486762.668:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.844615 kernel: audit: type=1131 audit(1707486762.761:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.624913 systemd[1]: Finished ignition-files.service.
Feb 9 13:52:42.858543 ignition[1058]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 13:52:42.858543 ignition[1058]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 13:52:42.858543 ignition[1058]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 13:52:42.858543 ignition[1058]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 13:52:42.858543 ignition[1058]: INFO : files: files passed
Feb 9 13:52:42.858543 ignition[1058]: INFO : POST message to Packet Timeline
Feb 9 13:52:42.858543 ignition[1058]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 13:52:42.858543 ignition[1058]: INFO : GET result: OK
Feb 9 13:52:42.858543 ignition[1058]: INFO : Ignition finished successfully
Feb 9 13:52:42.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.649875 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 13:52:43.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:43.032788 iscsid[907]: iscsid shutting down.
Feb 9 13:52:43.046680 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 13:52:43.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.710616 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 13:52:43.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.710927 systemd[1]: Starting ignition-quench.service...
Feb 9 13:52:43.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.733739 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 13:52:43.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.759777 systemd[1]: ignition-quench.service: Deactivated successfully.
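Each artifact Ignition downloaded above (cni-plugins, crictl, kubectl, kubelet, kubeadm) was checked against an expected SHA-512 before the write was marked `[finished]` (the `file matches expected sum of:` lines). A minimal sketch of that verify-after-download step using Python's `hashlib`; the function name and chunked-read strategy are illustrative, not Ignition's actual implementation:

```python
import hashlib


def sha512_matches(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the file through SHA-512 and compare against the expected hex digest."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        # Read in chunks so large binaries (kubelet is >100 MB) don't sit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()
```

Verifying after download but before marking the file finished means a truncated or tampered artifact fails the boot-time provisioning step instead of producing a broken node later.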
Feb 9 13:52:41.759857 systemd[1]: Finished ignition-quench.service.
Feb 9 13:52:41.827610 systemd[1]: Reached target ignition-complete.target.
Feb 9 13:52:43.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.948938 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 13:52:43.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:43.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:43.185903 ignition[1108]: INFO : Ignition 2.14.0
Feb 9 13:52:43.185903 ignition[1108]: INFO : Stage: umount
Feb 9 13:52:43.185903 ignition[1108]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 13:52:43.185903 ignition[1108]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 13:52:43.185903 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 13:52:43.185903 ignition[1108]: INFO : umount: umount passed
Feb 9 13:52:43.185903 ignition[1108]: INFO : POST message to Packet Timeline
Feb 9 13:52:43.185903 ignition[1108]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 13:52:43.185903 ignition[1108]: INFO : GET result: OK
Feb 9 13:52:43.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:43.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:43.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.973370 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 13:52:43.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:43.344791 ignition[1108]: INFO : Ignition finished successfully
Feb 9 13:52:43.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:43.359000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 13:52:41.973410 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 13:52:43.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:41.999653 systemd[1]: Reached target initrd-fs.target.
Feb 9 13:52:43.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:42.122555 systemd[1]: Reached target initrd.target.
Feb 9 13:52:42.122621 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 13:52:43.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.123090 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 13:52:43.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.149784 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 13:52:43.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.184281 systemd[1]: Starting initrd-cleanup.service... Feb 9 13:52:42.252301 systemd[1]: Stopped target nss-lookup.target. Feb 9 13:52:43.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.270579 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 13:52:42.289710 systemd[1]: Stopped target timers.target. Feb 9 13:52:42.321713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 13:52:42.321901 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 13:52:43.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.344227 systemd[1]: Stopped target initrd.target. Feb 9 13:52:43.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.420636 systemd[1]: Stopped target basic.target. 
Feb 9 13:52:43.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.437704 systemd[1]: Stopped target ignition-complete.target. Feb 9 13:52:42.469766 systemd[1]: Stopped target ignition-diskful.target. Feb 9 13:52:43.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.490899 systemd[1]: Stopped target initrd-root-device.target. Feb 9 13:52:43.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:43.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.512085 systemd[1]: Stopped target remote-fs.target. Feb 9 13:52:42.536928 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 13:52:42.561934 systemd[1]: Stopped target sysinit.target. Feb 9 13:52:42.582964 systemd[1]: Stopped target local-fs.target. Feb 9 13:52:42.606071 systemd[1]: Stopped target local-fs-pre.target. Feb 9 13:52:42.627923 systemd[1]: Stopped target swap.target. Feb 9 13:52:42.647834 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 13:52:42.648201 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 13:52:42.670296 systemd[1]: Stopped target cryptsetup.target. Feb 9 13:52:42.748619 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 13:52:42.748701 systemd[1]: Stopped dracut-initqueue.service. Feb 9 13:52:42.762743 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Feb 9 13:52:42.762817 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 13:52:42.831662 systemd[1]: Stopped target paths.target. Feb 9 13:52:42.851603 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 13:52:42.855562 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 13:52:42.858632 systemd[1]: Stopped target slices.target. Feb 9 13:52:43.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:42.878700 systemd[1]: Stopped target sockets.target. Feb 9 13:52:42.905694 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 13:52:42.905873 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 13:52:43.790000 audit: BPF prog-id=5 op=UNLOAD Feb 9 13:52:43.790000 audit: BPF prog-id=4 op=UNLOAD Feb 9 13:52:43.790000 audit: BPF prog-id=3 op=UNLOAD Feb 9 13:52:43.793000 audit: BPF prog-id=8 op=UNLOAD Feb 9 13:52:43.793000 audit: BPF prog-id=7 op=UNLOAD Feb 9 13:52:42.932046 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 13:52:42.932410 systemd[1]: Stopped ignition-files.service. Feb 9 13:52:42.957036 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 13:52:42.957414 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 13:52:42.973996 systemd[1]: Stopping ignition-mount.service... Feb 9 13:52:42.994590 systemd[1]: Stopping iscsid.service... Feb 9 13:52:43.001551 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 13:52:43.001648 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 13:52:43.024696 systemd[1]: Stopping sysroot-boot.service... Feb 9 13:52:43.039475 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 13:52:43.039581 systemd[1]: Stopped systemd-udev-trigger.service. 
Feb 9 13:52:43.054995 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 13:52:43.055294 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 13:52:43.082955 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 13:52:43.083349 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 13:52:43.083399 systemd[1]: Stopped iscsid.service. Feb 9 13:52:43.102962 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 13:52:43.103022 systemd[1]: Stopped sysroot-boot.service. Feb 9 13:52:43.117933 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 13:52:43.118014 systemd[1]: Closed iscsid.socket. Feb 9 13:52:43.132670 systemd[1]: Stopping iscsiuio.service... Feb 9 13:52:43.148103 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 13:52:43.148313 systemd[1]: Stopped iscsiuio.service. Feb 9 13:52:43.162220 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 13:52:43.162434 systemd[1]: Finished initrd-cleanup.service. Feb 9 13:52:43.179726 systemd[1]: Stopped target network.target. Feb 9 13:52:43.193691 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 13:52:43.193780 systemd[1]: Closed iscsiuio.socket. Feb 9 13:52:43.207935 systemd[1]: Stopping systemd-networkd.service... Feb 9 13:52:43.216493 systemd-networkd[881]: enp2s0f1np1: DHCPv6 lease lost Feb 9 13:52:43.222906 systemd[1]: Stopping systemd-resolved.service... Feb 9 13:52:43.229586 systemd-networkd[881]: enp2s0f0np0: DHCPv6 lease lost Feb 9 13:52:43.848000 audit: BPF prog-id=9 op=UNLOAD Feb 9 13:52:43.234226 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 13:52:43.850382 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Feb 9 13:52:43.234458 systemd[1]: Stopped systemd-resolved.service. Feb 9 13:52:43.260532 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 13:52:43.260577 systemd[1]: Stopped systemd-networkd.service. 
Feb 9 13:52:43.291547 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 13:52:43.291616 systemd[1]: Stopped ignition-mount.service. Feb 9 13:52:43.298594 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 13:52:43.298611 systemd[1]: Closed systemd-networkd.socket. Feb 9 13:52:43.315659 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 13:52:43.315707 systemd[1]: Stopped ignition-disks.service. Feb 9 13:52:43.336621 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 13:52:43.336698 systemd[1]: Stopped ignition-kargs.service. Feb 9 13:52:43.352747 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 13:52:43.352872 systemd[1]: Stopped ignition-setup.service. Feb 9 13:52:43.369685 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 13:52:43.369815 systemd[1]: Stopped initrd-setup-root.service. Feb 9 13:52:43.387389 systemd[1]: Stopping network-cleanup.service... Feb 9 13:52:43.399554 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 13:52:43.399719 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 13:52:43.414728 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 13:52:43.414847 systemd[1]: Stopped systemd-sysctl.service. Feb 9 13:52:43.433936 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 13:52:43.434067 systemd[1]: Stopped systemd-modules-load.service. Feb 9 13:52:43.450995 systemd[1]: Stopping systemd-udevd.service... Feb 9 13:52:43.468050 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 13:52:43.469449 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 13:52:43.469729 systemd[1]: Stopped systemd-udevd.service. Feb 9 13:52:43.482036 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 13:52:43.482162 systemd[1]: Closed systemd-udevd-control.socket. 
Feb 9 13:52:43.494656 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 13:52:43.494746 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 13:52:43.510692 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 13:52:43.510804 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 13:52:43.533683 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 13:52:43.533725 systemd[1]: Stopped dracut-cmdline.service. Feb 9 13:52:43.548484 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 13:52:43.548599 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 13:52:43.564442 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 13:52:43.578419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 13:52:43.578447 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 13:52:43.592817 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 13:52:43.592875 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 13:52:43.741216 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 13:52:43.741475 systemd[1]: Stopped network-cleanup.service. Feb 9 13:52:43.751012 systemd[1]: Reached target initrd-switch-root.target. Feb 9 13:52:43.769067 systemd[1]: Starting initrd-switch-root.service... Feb 9 13:52:43.782720 systemd[1]: Switching root. Feb 9 13:52:43.851302 systemd-journald[268]: Journal stopped Feb 9 13:52:47.621424 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 13:52:47.621437 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 13:52:47.621446 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 13:52:47.621451 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 13:52:47.621457 kernel: SELinux: policy capability open_perms=1 Feb 9 13:52:47.621462 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 13:52:47.621468 kernel: SELinux: policy capability always_check_network=0 Feb 9 13:52:47.621473 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 13:52:47.621479 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 13:52:47.621485 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 13:52:47.621490 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 13:52:47.621496 systemd[1]: Successfully loaded SELinux policy in 317.919ms. Feb 9 13:52:47.621502 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.844ms. Feb 9 13:52:47.621509 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 13:52:47.621517 systemd[1]: Detected architecture x86-64. Feb 9 13:52:47.621523 systemd[1]: Detected first boot. Feb 9 13:52:47.621529 systemd[1]: Hostname set to . Feb 9 13:52:47.621536 systemd[1]: Initializing machine ID from random generator. Feb 9 13:52:47.621541 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 13:52:47.621547 systemd[1]: Populated /etc with preset unit settings. Feb 9 13:52:47.621553 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 13:52:47.621560 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 13:52:47.621567 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 13:52:47.621573 systemd[1]: Queued start job for default target multi-user.target. Feb 9 13:52:47.621579 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 13:52:47.621585 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 13:52:47.621592 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 13:52:47.621599 systemd[1]: Created slice system-getty.slice. Feb 9 13:52:47.621605 systemd[1]: Created slice system-modprobe.slice. Feb 9 13:52:47.621611 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 13:52:47.621617 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 13:52:47.621623 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 13:52:47.621629 systemd[1]: Created slice user.slice. Feb 9 13:52:47.621635 systemd[1]: Started systemd-ask-password-console.path. Feb 9 13:52:47.621641 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 13:52:47.621647 systemd[1]: Set up automount boot.automount. Feb 9 13:52:47.621654 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 13:52:47.621660 systemd[1]: Reached target integritysetup.target. Feb 9 13:52:47.621666 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 13:52:47.621672 systemd[1]: Reached target remote-fs.target. Feb 9 13:52:47.621680 systemd[1]: Reached target slices.target. Feb 9 13:52:47.621687 systemd[1]: Reached target swap.target. Feb 9 13:52:47.621693 systemd[1]: Reached target torcx.target. Feb 9 13:52:47.621699 systemd[1]: Reached target veritysetup.target. 
Feb 9 13:52:47.621706 systemd[1]: Listening on systemd-coredump.socket. Feb 9 13:52:47.621713 systemd[1]: Listening on systemd-initctl.socket. Feb 9 13:52:47.621719 kernel: kauditd_printk_skb: 49 callbacks suppressed Feb 9 13:52:47.621725 kernel: audit: type=1400 audit(1707486766.858:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 13:52:47.621731 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 13:52:47.621738 kernel: audit: type=1335 audit(1707486766.858:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 13:52:47.621744 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 13:52:47.621751 systemd[1]: Listening on systemd-journald.socket. Feb 9 13:52:47.621758 systemd[1]: Listening on systemd-networkd.socket. Feb 9 13:52:47.621764 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 13:52:47.621770 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 13:52:47.621777 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 13:52:47.621784 systemd[1]: Mounting dev-hugepages.mount... Feb 9 13:52:47.621791 systemd[1]: Mounting dev-mqueue.mount... Feb 9 13:52:47.621797 systemd[1]: Mounting media.mount... Feb 9 13:52:47.621804 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 13:52:47.621810 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 13:52:47.621816 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 13:52:47.621823 systemd[1]: Mounting tmp.mount... Feb 9 13:52:47.621829 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 13:52:47.621835 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Feb 9 13:52:47.621843 systemd[1]: Starting kmod-static-nodes.service... Feb 9 13:52:47.621849 systemd[1]: Starting modprobe@configfs.service... Feb 9 13:52:47.621855 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 13:52:47.621862 systemd[1]: Starting modprobe@drm.service... Feb 9 13:52:47.621868 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 13:52:47.621875 systemd[1]: Starting modprobe@fuse.service... Feb 9 13:52:47.621881 kernel: fuse: init (API version 7.34) Feb 9 13:52:47.621887 systemd[1]: Starting modprobe@loop.service... Feb 9 13:52:47.621893 kernel: loop: module loaded Feb 9 13:52:47.621900 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 13:52:47.621907 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 13:52:47.621913 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 13:52:47.621920 systemd[1]: Starting systemd-journald.service... Feb 9 13:52:47.621926 systemd[1]: Starting systemd-modules-load.service... Feb 9 13:52:47.621933 kernel: audit: type=1305 audit(1707486767.618:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 13:52:47.621941 systemd-journald[1301]: Journal started Feb 9 13:52:47.621966 systemd-journald[1301]: Runtime Journal (/run/log/journal/fa456914c0854a4ea5af979cc72d820e) is 8.0M, max 636.8M, 628.8M free. 
Feb 9 13:52:46.858000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 13:52:46.858000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 13:52:47.618000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 13:52:47.618000 audit[1301]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd50063850 a2=4000 a3=7ffd500638ec items=0 ppid=1 pid=1301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:52:47.618000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 13:52:47.669415 kernel: audit: type=1300 audit(1707486767.618:94): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd50063850 a2=4000 a3=7ffd500638ec items=0 ppid=1 pid=1301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:52:47.669463 kernel: audit: type=1327 audit(1707486767.618:94): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 13:52:47.783532 systemd[1]: Starting systemd-network-generator.service... Feb 9 13:52:47.810398 systemd[1]: Starting systemd-remount-fs.service... Feb 9 13:52:47.836381 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 13:52:47.879395 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 13:52:47.898386 systemd[1]: Started systemd-journald.service. 
Feb 9 13:52:47.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:47.908052 systemd[1]: Mounted dev-hugepages.mount. Feb 9 13:52:47.955527 kernel: audit: type=1130 audit(1707486767.906:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:47.961573 systemd[1]: Mounted dev-mqueue.mount. Feb 9 13:52:47.968583 systemd[1]: Mounted media.mount. Feb 9 13:52:47.975580 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 13:52:47.984572 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 13:52:47.993644 systemd[1]: Mounted tmp.mount. Feb 9 13:52:48.000715 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 13:52:48.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.009674 systemd[1]: Finished kmod-static-nodes.service. Feb 9 13:52:48.057498 kernel: audit: type=1130 audit(1707486768.008:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.065644 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 13:52:48.065722 systemd[1]: Finished modprobe@configfs.service. 
Feb 9 13:52:48.114382 kernel: audit: type=1130 audit(1707486768.064:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.122776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 13:52:48.122869 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 13:52:48.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.173388 kernel: audit: type=1130 audit(1707486768.121:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.173422 kernel: audit: type=1131 audit(1707486768.121:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.233660 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 9 13:52:48.233733 systemd[1]: Finished modprobe@drm.service. Feb 9 13:52:48.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.242696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 13:52:48.242767 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 13:52:48.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.251663 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 13:52:48.251734 systemd[1]: Finished modprobe@fuse.service. Feb 9 13:52:48.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.260654 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 13:52:48.260730 systemd[1]: Finished modprobe@loop.service. 
Feb 9 13:52:48.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.269718 systemd[1]: Finished systemd-modules-load.service. Feb 9 13:52:48.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.278689 systemd[1]: Finished systemd-network-generator.service. Feb 9 13:52:48.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.286703 systemd[1]: Finished systemd-remount-fs.service. Feb 9 13:52:48.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.295731 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 13:52:48.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:52:48.304846 systemd[1]: Reached target network-pre.target. Feb 9 13:52:48.315007 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 13:52:48.324699 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 9 13:52:48.331564 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 13:52:48.332591 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 13:52:48.340034 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 13:52:48.343454 systemd-journald[1301]: Time spent on flushing to /var/log/journal/fa456914c0854a4ea5af979cc72d820e is 14.696ms for 1592 entries.
Feb 9 13:52:48.343454 systemd-journald[1301]: System Journal (/var/log/journal/fa456914c0854a4ea5af979cc72d820e) is 8.0M, max 195.6M, 187.6M free.
Feb 9 13:52:48.390225 systemd-journald[1301]: Received client request to flush runtime journal.
Feb 9 13:52:48.356471 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 13:52:48.356955 systemd[1]: Starting systemd-random-seed.service...
Feb 9 13:52:48.374486 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 13:52:48.375008 systemd[1]: Starting systemd-sysctl.service...
Feb 9 13:52:48.381991 systemd[1]: Starting systemd-sysusers.service...
Feb 9 13:52:48.389050 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 13:52:48.397635 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 13:52:48.405513 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 13:52:48.413614 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 13:52:48.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:48.421627 systemd[1]: Finished systemd-random-seed.service.
Feb 9 13:52:48.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:48.429598 systemd[1]: Finished systemd-sysctl.service.
Feb 9 13:52:48.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:48.437571 systemd[1]: Finished systemd-sysusers.service.
Feb 9 13:52:48.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:48.446443 systemd[1]: Reached target first-boot-complete.target.
Feb 9 13:52:48.455110 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 13:52:48.463741 udevadm[1327]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 13:52:48.475827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 13:52:48.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:48.651939 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 13:52:48.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:48.661287 systemd[1]: Starting systemd-udevd.service...
Feb 9 13:52:48.673233 systemd-udevd[1335]: Using default interface naming scheme 'v252'.
Feb 9 13:52:48.693864 systemd[1]: Started systemd-udevd.service.
Feb 9 13:52:48.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:48.705784 systemd[1]: Found device dev-ttyS1.device.
Feb 9 13:52:48.739555 systemd[1]: Starting systemd-networkd.service...
Feb 9 13:52:48.750706 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Feb 9 13:52:48.750763 kernel: ACPI: button: Sleep Button [SLPB]
Feb 9 13:52:48.750779 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1412)
Feb 9 13:52:48.777146 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 9 13:52:48.736000 audit[1405]: AVC avc: denied { confidentiality } for pid=1405 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 13:52:48.736000 audit[1405]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560baeff4cf0 a1=4d8bc a2=7fb3e94d0bc5 a3=5 items=42 ppid=1335 pid=1405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:52:48.736000 audit: CWD cwd="/"
Feb 9 13:52:48.736000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=1 name=(null) inode=22286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=2 name=(null) inode=22286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=3 name=(null) inode=22287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=4 name=(null) inode=22286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=5 name=(null) inode=22288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=6 name=(null) inode=22286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=7 name=(null) inode=22289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=8 name=(null) inode=22289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=9 name=(null) inode=22290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=10 name=(null) inode=22289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=11 name=(null) inode=22291 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=12 name=(null) inode=22289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=13 name=(null) inode=22292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=14 name=(null) inode=22289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=15 name=(null) inode=22293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=16 name=(null) inode=22289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=17 name=(null) inode=22294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=18 name=(null) inode=22286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=19 name=(null) inode=22295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=20 name=(null) inode=22295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=21 name=(null) inode=22296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=22 name=(null) inode=22295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=23 name=(null) inode=22297 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=24 name=(null) inode=22295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=25 name=(null) inode=22298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=26 name=(null) inode=22295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=27 name=(null) inode=22299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=28 name=(null) inode=22295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=29 name=(null) inode=22300 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=30 name=(null) inode=22286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=31 name=(null) inode=22301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=32 name=(null) inode=22301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=33 name=(null) inode=22302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=34 name=(null) inode=22301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=35 name=(null) inode=22303 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=36 name=(null) inode=22301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=37 name=(null) inode=22304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=38 name=(null) inode=22301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=39 name=(null) inode=22305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=40 name=(null) inode=22301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PATH item=41 name=(null) inode=22306 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:52:48.736000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 13:52:48.825358 kernel: IPMI message handler: version 39.2
Feb 9 13:52:48.852356 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 13:52:48.852413 kernel: ACPI: button: Power Button [PWRF]
Feb 9 13:52:48.853258 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 9 13:52:48.854916 systemd[1]: Starting systemd-userdbd.service...
Feb 9 13:52:48.896360 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Feb 9 13:52:48.896535 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Feb 9 13:52:48.940353 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Feb 9 13:52:48.948331 systemd[1]: Started systemd-userdbd.service.
Feb 9 13:52:48.953350 kernel: ipmi device interface
Feb 9 13:52:48.953378 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Feb 9 13:52:48.953484 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Feb 9 13:52:48.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:49.061571 kernel: ipmi_si: IPMI System Interface driver
Feb 9 13:52:49.061603 kernel: iTCO_vendor_support: vendor-support=0
Feb 9 13:52:49.061614 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Feb 9 13:52:49.106958 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Feb 9 13:52:49.107000 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Feb 9 13:52:49.150032 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Feb 9 13:52:49.150163 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Feb 9 13:52:49.200351 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS
Feb 9 13:52:49.200467 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Feb 9 13:52:49.244780 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Feb 9 13:52:49.270017 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Feb 9 13:52:49.325563 kernel: intel_rapl_common: Found RAPL domain package
Feb 9 13:52:49.325607 kernel: intel_rapl_common: Found RAPL domain core
Feb 9 13:52:49.325627 kernel: intel_rapl_common: Found RAPL domain uncore
Feb 9 13:52:49.345001 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Feb 9 13:52:49.345096 kernel: intel_rapl_common: Found RAPL domain dram
Feb 9 13:52:49.404415 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20)
Feb 9 13:52:49.449315 systemd-networkd[1415]: bond0: netdev ready
Feb 9 13:52:49.451406 systemd-networkd[1415]: lo: Link UP
Feb 9 13:52:49.451409 systemd-networkd[1415]: lo: Gained carrier
Feb 9 13:52:49.451887 systemd-networkd[1415]: Enumeration completed
Feb 9 13:52:49.451977 systemd[1]: Started systemd-networkd.service.
Feb 9 13:52:49.452162 systemd-networkd[1415]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Feb 9 13:52:49.457600 systemd-networkd[1415]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a1:e9.network.
Feb 9 13:52:49.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:49.497379 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Feb 9 13:52:49.518375 kernel: ipmi_ssif: IPMI SSIF Interface driver
Feb 9 13:52:49.519381 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 13:52:49.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:49.528138 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 13:52:49.543496 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 13:52:49.571789 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 13:52:49.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:49.579523 systemd[1]: Reached target cryptsetup.target.
Feb 9 13:52:49.588038 systemd[1]: Starting lvm2-activation.service...
Feb 9 13:52:49.590107 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 13:52:49.627797 systemd[1]: Finished lvm2-activation.service.
Feb 9 13:52:49.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:49.635534 systemd[1]: Reached target local-fs-pre.target.
Feb 9 13:52:49.643432 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 13:52:49.643447 systemd[1]: Reached target local-fs.target.
Feb 9 13:52:49.651430 systemd[1]: Reached target machines.target.
Feb 9 13:52:49.660083 systemd[1]: Starting ldconfig.service...
Feb 9 13:52:49.666767 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 13:52:49.666796 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 13:52:49.667364 systemd[1]: Starting systemd-boot-update.service...
Feb 9 13:52:49.674821 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 13:52:49.684955 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 13:52:49.685034 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 13:52:49.685077 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 13:52:49.685655 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 13:52:49.685890 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1447 (bootctl)
Feb 9 13:52:49.686639 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 13:52:49.706002 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 13:52:49.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:49.713122 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 13:52:49.721857 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 13:52:49.731383 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 13:52:49.897438 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Feb 9 13:52:49.923393 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link
Feb 9 13:52:49.925326 systemd-networkd[1415]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a1:e8.network.
Feb 9 13:52:49.993378 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 9 13:52:50.093406 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Feb 9 13:52:50.119376 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link
Feb 9 13:52:50.119441 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 9 13:52:50.119453 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Feb 9 13:52:50.202574 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Feb 9 13:52:50.202599 kernel: bond0: active interface up!
Feb 9 13:52:50.224075 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex
Feb 9 13:52:50.224299 systemd-networkd[1415]: bond0: Link UP
Feb 9 13:52:50.224502 systemd-networkd[1415]: enp2s0f1np1: Link UP
Feb 9 13:52:50.224620 systemd-networkd[1415]: enp2s0f1np1: Gained carrier
Feb 9 13:52:50.225577 systemd-networkd[1415]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:7e:a1:e8.network.
Feb 9 13:52:50.265383 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 9 13:52:50.289200 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 13:52:50.289605 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 13:52:50.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:50.309740 systemd-fsck[1457]: fsck.fat 4.2 (2021-01-31)
Feb 9 13:52:50.309740 systemd-fsck[1457]: /dev/sda1: 789 files, 115332/258078 clusters
Feb 9 13:52:50.310456 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 13:52:50.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:50.321724 systemd-networkd[1415]: enp2s0f0np0: Link UP
Feb 9 13:52:50.321925 systemd-networkd[1415]: bond0: Gained carrier
Feb 9 13:52:50.322009 systemd-networkd[1415]: enp2s0f0np0: Gained carrier
Feb 9 13:52:50.323385 systemd[1]: Mounting boot.mount...
Feb 9 13:52:50.332630 systemd-networkd[1415]: enp2s0f1np1: Link DOWN
Feb 9 13:52:50.332633 systemd-networkd[1415]: enp2s0f1np1: Lost carrier
Feb 9 13:52:50.334395 systemd[1]: Mounted boot.mount.
Feb 9 13:52:50.353407 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Feb 9 13:52:50.353493 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave
Feb 9 13:52:50.388539 systemd[1]: Finished systemd-boot-update.service.
Feb 9 13:52:50.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:50.419280 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 13:52:50.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:52:50.428196 systemd[1]: Starting audit-rules.service...
Feb 9 13:52:50.434988 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 13:52:50.442000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 13:52:50.442000 audit[1482]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5be684c0 a2=420 a3=0 items=0 ppid=1465 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:52:50.442000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 13:52:50.444291 augenrules[1482]: No rules
Feb 9 13:52:50.445021 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 13:52:50.454182 systemd[1]: Starting systemd-resolved.service...
Feb 9 13:52:50.461173 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 13:52:50.467975 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 13:52:50.474738 systemd[1]: Finished audit-rules.service.
Feb 9 13:52:50.481573 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 13:52:50.489566 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 13:52:50.498913 ldconfig[1446]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 13:52:50.500599 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 13:52:50.501167 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 13:52:50.514632 systemd[1]: Finished ldconfig.service.
Feb 9 13:52:50.522349 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Feb 9 13:52:50.536142 systemd[1]: Starting systemd-update-done.service...
Feb 9 13:52:50.545348 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1
Feb 9 13:52:50.545525 systemd-networkd[1415]: enp2s0f1np1: Link UP
Feb 9 13:52:50.545528 systemd-networkd[1415]: enp2s0f1np1: Gained carrier
Feb 9 13:52:50.551671 systemd[1]: Finished systemd-update-done.service.
Feb 9 13:52:50.555150 systemd-resolved[1489]: Positive Trust Anchors:
Feb 9 13:52:50.555156 systemd-resolved[1489]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 13:52:50.555175 systemd-resolved[1489]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 13:52:50.559303 systemd-resolved[1489]: Using system hostname 'ci-3510.3.2-a-2834128369'.
Feb 9 13:52:50.560537 systemd[1]: Started systemd-timesyncd.service.
Feb 9 13:52:50.576537 systemd[1]: Started systemd-resolved.service.
Feb 9 13:52:50.583378 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms
Feb 9 13:52:50.599467 systemd[1]: Reached target network.target.
Feb 9 13:52:50.604387 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Feb 9 13:52:50.612419 systemd[1]: Reached target nss-lookup.target.
Feb 9 13:52:50.620431 systemd[1]: Reached target sysinit.target.
Feb 9 13:52:50.628462 systemd[1]: Started motdgen.path.
Feb 9 13:52:50.635443 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 13:52:50.645429 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 13:52:50.653430 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 13:52:50.653452 systemd[1]: Reached target paths.target.
Feb 9 13:52:50.660429 systemd[1]: Reached target time-set.target.
Feb 9 13:52:50.668495 systemd[1]: Started logrotate.timer.
Feb 9 13:52:50.675470 systemd[1]: Started mdadm.timer.
Feb 9 13:52:50.682426 systemd[1]: Reached target timers.target.
Feb 9 13:52:50.689535 systemd[1]: Listening on dbus.socket.
Feb 9 13:52:50.697014 systemd[1]: Starting docker.socket...
Feb 9 13:52:50.704214 systemd[1]: Listening on sshd.socket.
Feb 9 13:52:50.711492 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 13:52:50.711693 systemd[1]: Listening on docker.socket.
Feb 9 13:52:50.718477 systemd[1]: Reached target sockets.target.
Feb 9 13:52:50.726433 systemd[1]: Reached target basic.target.
Feb 9 13:52:50.733493 systemd[1]: System is tainted: cgroupsv1
Feb 9 13:52:50.733523 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 13:52:50.733543 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 13:52:50.734067 systemd[1]: Starting containerd.service...
Feb 9 13:52:50.740839 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 9 13:52:50.749938 systemd[1]: Starting coreos-metadata.service...
Feb 9 13:52:50.756937 systemd[1]: Starting dbus.service...
Feb 9 13:52:50.763177 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 13:52:50.767515 jq[1509]: false
Feb 9 13:52:50.770175 systemd[1]: Starting extend-filesystems.service...
Feb 9 13:52:50.770590 coreos-metadata[1502]: Feb 09 13:52:50.770 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 13:52:50.774985 dbus-daemon[1508]: [system] SELinux support is enabled
Feb 9 13:52:50.777391 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 13:52:50.778006 systemd[1]: Starting motdgen.service...
Feb 9 13:52:50.778532 extend-filesystems[1511]: Found sda
Feb 9 13:52:50.800447 extend-filesystems[1511]: Found sda1
Feb 9 13:52:50.800447 extend-filesystems[1511]: Found sda2
Feb 9 13:52:50.800447 extend-filesystems[1511]: Found sda3
Feb 9 13:52:50.800447 extend-filesystems[1511]: Found usr
Feb 9 13:52:50.800447 extend-filesystems[1511]: Found sda4
Feb 9 13:52:50.800447 extend-filesystems[1511]: Found sda6
Feb 9 13:52:50.800447 extend-filesystems[1511]: Found sda7
Feb 9 13:52:50.800447 extend-filesystems[1511]: Found sda9
Feb 9 13:52:50.800447 extend-filesystems[1511]: Checking size of /dev/sda9
Feb 9 13:52:50.800447 extend-filesystems[1511]: Resized partition /dev/sda9
Feb 9 13:52:50.933403 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Feb 9 13:52:50.933440 coreos-metadata[1505]: Feb 09 13:52:50.780 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 13:52:50.786027 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 13:52:50.933646 extend-filesystems[1527]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 13:52:50.820084 systemd[1]: Starting prepare-critools.service...
Feb 9 13:52:50.859941 systemd[1]: Starting prepare-helm.service...
Feb 9 13:52:50.875888 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 13:52:50.891876 systemd[1]: Starting sshd-keygen.service...
Feb 9 13:52:50.908793 systemd[1]: Starting systemd-logind.service...
Feb 9 13:52:50.924387 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 13:52:50.925154 systemd[1]: Starting tcsd.service...
Feb 9 13:52:50.932626 systemd-logind[1545]: Watching system buttons on /dev/input/event3 (Power Button)
Feb 9 13:52:50.932635 systemd-logind[1545]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 9 13:52:50.932645 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Feb 9 13:52:50.932742 systemd-logind[1545]: New seat seat0.
Feb 9 13:52:50.939234 systemd[1]: Starting update-engine.service...
Feb 9 13:52:50.954187 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 13:52:50.955709 jq[1548]: true
Feb 9 13:52:50.962867 systemd[1]: Started dbus.service.
Feb 9 13:52:50.972105 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 13:52:50.972271 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 13:52:50.972514 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 13:52:50.972668 systemd[1]: Finished motdgen.service.
Feb 9 13:52:50.980317 update_engine[1547]: I0209 13:52:50.979758  1547 main.cc:92] Flatcar Update Engine starting
Feb 9 13:52:50.981502 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 13:52:50.981618 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 13:52:50.983530 update_engine[1547]: I0209 13:52:50.983517 1547 update_check_scheduler.cc:74] Next update check in 10m17s Feb 9 13:52:50.987029 tar[1552]: ./ Feb 9 13:52:50.987029 tar[1552]: ./macvlan Feb 9 13:52:50.992176 jq[1558]: true Feb 9 13:52:50.992314 tar[1553]: crictl Feb 9 13:52:50.992582 dbus-daemon[1508]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 13:52:50.994036 tar[1554]: linux-amd64/helm Feb 9 13:52:50.996448 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 13:52:50.996589 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 9 13:52:50.998897 systemd[1]: Started systemd-logind.service. Feb 9 13:52:51.005010 env[1559]: time="2024-02-09T13:52:51.004984113Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 13:52:51.011327 tar[1552]: ./static Feb 9 13:52:51.012636 systemd[1]: Started update-engine.service. Feb 9 13:52:51.013559 env[1559]: time="2024-02-09T13:52:51.013541540Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 13:52:51.013970 env[1559]: time="2024-02-09T13:52:51.013950015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 13:52:51.014665 env[1559]: time="2024-02-09T13:52:51.014651002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 13:52:51.014698 env[1559]: time="2024-02-09T13:52:51.014664912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 13:52:51.016316 env[1559]: time="2024-02-09T13:52:51.016303868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 13:52:51.016358 env[1559]: time="2024-02-09T13:52:51.016316301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 13:52:51.016358 env[1559]: time="2024-02-09T13:52:51.016324231Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 13:52:51.016358 env[1559]: time="2024-02-09T13:52:51.016331062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 13:52:51.016409 env[1559]: time="2024-02-09T13:52:51.016376899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 13:52:51.016509 env[1559]: time="2024-02-09T13:52:51.016501165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 13:52:51.016590 env[1559]: time="2024-02-09T13:52:51.016579845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 13:52:51.016638 env[1559]: time="2024-02-09T13:52:51.016590523Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 13:52:51.018581 env[1559]: time="2024-02-09T13:52:51.018569255Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 13:52:51.018608 env[1559]: time="2024-02-09T13:52:51.018580871Z" level=info msg="metadata content store policy set" policy=shared Feb 9 13:52:51.022431 systemd[1]: Started locksmithd.service. 
Feb 9 13:52:51.023016 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Feb 9 13:52:51.028226 env[1559]: time="2024-02-09T13:52:51.028209948Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 13:52:51.028271 env[1559]: time="2024-02-09T13:52:51.028231060Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 13:52:51.028271 env[1559]: time="2024-02-09T13:52:51.028239181Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 13:52:51.028328 env[1559]: time="2024-02-09T13:52:51.028278653Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 13:52:51.028328 env[1559]: time="2024-02-09T13:52:51.028298347Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 13:52:51.028328 env[1559]: time="2024-02-09T13:52:51.028309067Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 13:52:51.028328 env[1559]: time="2024-02-09T13:52:51.028317213Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 13:52:51.028328 env[1559]: time="2024-02-09T13:52:51.028325692Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 13:52:51.028446 env[1559]: time="2024-02-09T13:52:51.028333311Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 13:52:51.028446 env[1559]: time="2024-02-09T13:52:51.028341410Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 9 13:52:51.028446 env[1559]: time="2024-02-09T13:52:51.028380343Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 13:52:51.028446 env[1559]: time="2024-02-09T13:52:51.028393767Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 13:52:51.028549 env[1559]: time="2024-02-09T13:52:51.028464879Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 13:52:51.028549 env[1559]: time="2024-02-09T13:52:51.028518136Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 13:52:51.028974 env[1559]: time="2024-02-09T13:52:51.028946964Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 13:52:51.029044 env[1559]: time="2024-02-09T13:52:51.029033987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029077 env[1559]: time="2024-02-09T13:52:51.029046449Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 13:52:51.029106 env[1559]: time="2024-02-09T13:52:51.029076428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029106 env[1559]: time="2024-02-09T13:52:51.029085102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029106 env[1559]: time="2024-02-09T13:52:51.029092567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029106 env[1559]: time="2024-02-09T13:52:51.029099010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 9 13:52:51.029106 env[1559]: time="2024-02-09T13:52:51.029105506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029227 env[1559]: time="2024-02-09T13:52:51.029112701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029227 env[1559]: time="2024-02-09T13:52:51.029119135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029227 env[1559]: time="2024-02-09T13:52:51.029125785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029227 env[1559]: time="2024-02-09T13:52:51.029134068Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 13:52:51.029227 env[1559]: time="2024-02-09T13:52:51.029202561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029227 env[1559]: time="2024-02-09T13:52:51.029212220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029227 env[1559]: time="2024-02-09T13:52:51.029219531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029227 env[1559]: time="2024-02-09T13:52:51.029225738Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 13:52:51.029426 env[1559]: time="2024-02-09T13:52:51.029236920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 13:52:51.029426 env[1559]: time="2024-02-09T13:52:51.029244643Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 9 13:52:51.029426 env[1559]: time="2024-02-09T13:52:51.029254368Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 13:52:51.029426 env[1559]: time="2024-02-09T13:52:51.029275028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 13:52:51.029528 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 13:52:51.029587 env[1559]: time="2024-02-09T13:52:51.029394439Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 
SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 13:52:51.029587 env[1559]: time="2024-02-09T13:52:51.029450764Z" level=info msg="Connect containerd service" Feb 9 13:52:51.029587 env[1559]: time="2024-02-09T13:52:51.029469307Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029743276Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029842125Z" level=info msg="Start subscribing containerd event" Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029863112Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029872777Z" level=info msg="Start recovering state" Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029895461Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029912530Z" level=info msg="Start event monitor" Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029921293Z" level=info msg="containerd successfully booted in 0.025952s" Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029926827Z" level=info msg="Start snapshots syncer" Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029937610Z" level=info msg="Start cni network conf syncer for default" Feb 9 13:52:51.031098 env[1559]: time="2024-02-09T13:52:51.029944710Z" level=info msg="Start streaming server" Feb 9 13:52:51.029659 systemd[1]: Reached target system-config.target. Feb 9 13:52:51.036871 tar[1552]: ./vlan Feb 9 13:52:51.037500 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 13:52:51.037614 systemd[1]: Reached target user-config.target. Feb 9 13:52:51.048087 systemd[1]: Started containerd.service. Feb 9 13:52:51.054752 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 13:52:51.058465 tar[1552]: ./portmap Feb 9 13:52:51.078781 tar[1552]: ./host-local Feb 9 13:52:51.081295 locksmithd[1596]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 13:52:51.097056 tar[1552]: ./vrf Feb 9 13:52:51.116778 tar[1552]: ./bridge Feb 9 13:52:51.140387 tar[1552]: ./tuning Feb 9 13:52:51.159084 tar[1552]: ./firewall Feb 9 13:52:51.183474 tar[1552]: ./host-device Feb 9 13:52:51.204644 tar[1552]: ./sbr Feb 9 13:52:51.224138 tar[1552]: ./loopback Feb 9 13:52:51.242433 tar[1552]: ./dhcp Feb 9 13:52:51.258644 tar[1554]: linux-amd64/LICENSE Feb 9 13:52:51.258738 tar[1554]: linux-amd64/README.md Feb 9 13:52:51.261569 systemd[1]: Finished prepare-helm.service. Feb 9 13:52:51.269838 systemd[1]: Finished prepare-critools.service. 
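[Editor's note] The long "Start cri plugin with config {...}" entry above is containerd dumping its effective CRI configuration. A sketch of the `config.toml` fragment that would produce the key values visible in that dump (snapshotter `overlayfs`, runc via `io.containerd.runc.v2` with `SystemdCgroup:false`, CNI dirs `/opt/cni/bin` and `/etc/cni/net.d`, sandbox image `pause:3.6`) — values mirror the log, this is not the host's full config:

```toml
# /etc/containerd/config.toml — partial sketch matching the logged CRI config
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir  = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
```

The "failed to load cni during init" error above is expected at this stage: `/etc/cni/net.d` is empty until prepare-cni-plugins.service and a network provider populate it.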
Feb 9 13:52:51.295129 tar[1552]: ./ptp Feb 9 13:52:51.299350 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 9 13:52:51.329124 extend-filesystems[1527]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 9 13:52:51.329124 extend-filesystems[1527]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 13:52:51.329124 extend-filesystems[1527]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 9 13:52:51.368400 extend-filesystems[1511]: Resized filesystem in /dev/sda9 Feb 9 13:52:51.368400 extend-filesystems[1511]: Found sdb Feb 9 13:52:51.383398 tar[1552]: ./ipvlan Feb 9 13:52:51.383398 tar[1552]: ./bandwidth Feb 9 13:52:51.329642 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 13:52:51.329818 systemd[1]: Finished extend-filesystems.service. Feb 9 13:52:51.391697 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 13:52:52.002406 systemd-networkd[1415]: bond0: Gained IPv6LL Feb 9 13:52:52.162323 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 13:52:52.173876 systemd[1]: Finished sshd-keygen.service. Feb 9 13:52:52.181329 systemd[1]: Starting issuegen.service... Feb 9 13:52:52.188642 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 13:52:52.188742 systemd[1]: Finished issuegen.service. Feb 9 13:52:52.196239 systemd[1]: Starting systemd-user-sessions.service... Feb 9 13:52:52.204700 systemd[1]: Finished systemd-user-sessions.service. Feb 9 13:52:52.213129 systemd[1]: Started getty@tty1.service. Feb 9 13:52:52.220093 systemd[1]: Started serial-getty@ttyS1.service. Feb 9 13:52:52.228557 systemd[1]: Reached target getty.target. 
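[Editor's note] The resize figures above can be sanity-checked: the kernel reports the root filesystem growing from 553472 to 116605649 blocks at 4 KiB per block, i.e. from roughly 2.1 GiB (the initial image size) to roughly 445 GiB (the full /dev/sda9 partition), resized online by resize2fs while mounted on /:

```python
# Sanity-check the ext4 on-line resize figures reported in the log above.
BLOCK_SIZE = 4096  # EXT4-fs (sda9) logs "(4k) blocks"

old_blocks = 553_472       # "resizing filesystem from 553472 ..."
new_blocks = 116_605_649   # "... to 116605649 blocks"

old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30
print(f"resize: {old_gib:.2f} GiB -> {new_gib:.2f} GiB")
```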
Feb 9 13:52:52.861505 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 13:52:56.675619 coreos-metadata[1502]: Feb 09 13:52:56.675 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 13:52:56.676514 coreos-metadata[1505]: Feb 09 13:52:56.675 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 13:52:57.259764 login[1636]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 13:52:57.267704 login[1635]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 13:52:57.271884 systemd-logind[1545]: New session 1 of user core. Feb 9 13:52:57.272554 systemd[1]: Created slice user-500.slice. Feb 9 13:52:57.273269 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 13:52:57.275037 systemd-logind[1545]: New session 2 of user core. Feb 9 13:52:57.280337 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 13:52:57.281156 systemd[1]: Starting user@500.service... Feb 9 13:52:57.283563 (systemd)[1642]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:52:57.383502 systemd[1642]: Queued start job for default target default.target. Feb 9 13:52:57.383607 systemd[1642]: Reached target paths.target. Feb 9 13:52:57.383618 systemd[1642]: Reached target sockets.target. Feb 9 13:52:57.383626 systemd[1642]: Reached target timers.target. Feb 9 13:52:57.383633 systemd[1642]: Reached target basic.target. Feb 9 13:52:57.383652 systemd[1642]: Reached target default.target. Feb 9 13:52:57.383665 systemd[1642]: Startup finished in 96ms. Feb 9 13:52:57.383729 systemd[1]: Started user@500.service. Feb 9 13:52:57.384301 systemd[1]: Started session-1.scope. 
Feb 9 13:52:57.384648 systemd[1]: Started session-2.scope. Feb 9 13:52:57.675927 coreos-metadata[1502]: Feb 09 13:52:57.675 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 13:52:57.676772 coreos-metadata[1505]: Feb 09 13:52:57.675 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 13:52:58.312467 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Feb 9 13:52:58.312622 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Feb 9 13:52:58.695993 coreos-metadata[1502]: Feb 09 13:52:58.695 INFO Fetch successful Feb 9 13:52:58.697174 coreos-metadata[1505]: Feb 09 13:52:58.695 INFO Fetch successful Feb 9 13:52:58.722304 systemd[1]: Finished coreos-metadata.service. Feb 9 13:52:58.722733 unknown[1502]: wrote ssh authorized keys file for user: core Feb 9 13:52:58.723489 systemd[1]: Started packet-phone-home.service. Feb 9 13:52:58.728719 curl[1669]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 13:52:58.728879 curl[1669]: Dload Upload Total Spent Left Speed Feb 9 13:52:58.733907 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys" Feb 9 13:52:58.734123 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 13:52:58.734288 systemd[1]: Reached target multi-user.target. Feb 9 13:52:58.735019 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 13:52:58.738694 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 13:52:58.738799 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 13:52:58.738942 systemd[1]: Startup finished in 25.197s (kernel) + 14.719s (userspace) = 39.916s. Feb 9 13:52:58.337081 systemd-resolved[1489]: Clock change detected. Flushing caches. Feb 9 13:52:58.355752 systemd-journald[1301]: Time jumped backwards, rotating. Feb 9 13:52:58.337102 systemd-timesyncd[1491]: Contacted time server 207.246.65.226:123 (0.flatcar.pool.ntp.org). 
Feb 9 13:52:58.337126 systemd-timesyncd[1491]: Initial clock synchronization to Fri 2024-02-09 13:52:58.337030 UTC. Feb 9 13:52:58.469764 systemd[1]: Created slice system-sshd.slice. Feb 9 13:52:58.470433 systemd[1]: Started sshd@0-86.109.11.101:22-147.75.109.163:56144.service. Feb 9 13:52:58.509232 curl[1669]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 9 13:52:58.510067 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 13:52:58.512945 sshd[1677]: Accepted publickey for core from 147.75.109.163 port 56144 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 13:52:58.514172 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:52:58.518260 systemd-logind[1545]: New session 3 of user core. Feb 9 13:52:58.519187 systemd[1]: Started session-3.scope. Feb 9 13:52:58.575666 systemd[1]: Started sshd@1-86.109.11.101:22-147.75.109.163:56160.service. Feb 9 13:52:58.605643 sshd[1683]: Accepted publickey for core from 147.75.109.163 port 56160 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 13:52:58.606317 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:52:58.608698 systemd-logind[1545]: New session 4 of user core. Feb 9 13:52:58.609125 systemd[1]: Started session-4.scope. Feb 9 13:52:58.660243 sshd[1683]: pam_unix(sshd:session): session closed for user core Feb 9 13:52:58.661627 systemd[1]: Started sshd@2-86.109.11.101:22-147.75.109.163:56170.service. Feb 9 13:52:58.661929 systemd[1]: sshd@1-86.109.11.101:22-147.75.109.163:56160.service: Deactivated successfully. Feb 9 13:52:58.662432 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 13:52:58.662456 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. Feb 9 13:52:58.662898 systemd-logind[1545]: Removed session 4. 
Feb 9 13:52:58.692292 sshd[1689]: Accepted publickey for core from 147.75.109.163 port 56170 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 13:52:58.693192 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:52:58.696657 systemd-logind[1545]: New session 5 of user core. Feb 9 13:52:58.697351 systemd[1]: Started session-5.scope. Feb 9 13:52:58.751089 sshd[1689]: pam_unix(sshd:session): session closed for user core Feb 9 13:52:58.752463 systemd[1]: Started sshd@3-86.109.11.101:22-147.75.109.163:56178.service. Feb 9 13:52:58.752697 systemd[1]: sshd@2-86.109.11.101:22-147.75.109.163:56170.service: Deactivated successfully. Feb 9 13:52:58.753143 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. Feb 9 13:52:58.753192 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 13:52:58.753614 systemd-logind[1545]: Removed session 5. Feb 9 13:52:58.783369 sshd[1695]: Accepted publickey for core from 147.75.109.163 port 56178 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 13:52:58.784416 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:52:58.787729 systemd-logind[1545]: New session 6 of user core. Feb 9 13:52:58.788439 systemd[1]: Started session-6.scope. Feb 9 13:52:58.856427 sshd[1695]: pam_unix(sshd:session): session closed for user core Feb 9 13:52:58.862247 systemd[1]: Started sshd@4-86.109.11.101:22-147.75.109.163:56186.service. Feb 9 13:52:58.863731 systemd[1]: sshd@3-86.109.11.101:22-147.75.109.163:56178.service: Deactivated successfully. Feb 9 13:52:58.866046 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. Feb 9 13:52:58.866235 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 13:52:58.868709 systemd-logind[1545]: Removed session 6. 
Feb 9 13:52:58.929514 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 56186 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 13:52:58.932353 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:52:58.941840 systemd-logind[1545]: New session 7 of user core. Feb 9 13:52:58.943825 systemd[1]: Started session-7.scope. Feb 9 13:52:59.043952 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 13:52:59.044558 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 13:53:03.124495 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 13:53:03.128311 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 13:53:03.128485 systemd[1]: Reached target network-online.target. Feb 9 13:53:03.129193 systemd[1]: Starting docker.service... Feb 9 13:53:03.148070 env[1731]: time="2024-02-09T13:53:03.148007628Z" level=info msg="Starting up" Feb 9 13:53:03.148764 env[1731]: time="2024-02-09T13:53:03.148722258Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 13:53:03.148764 env[1731]: time="2024-02-09T13:53:03.148733715Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 13:53:03.148764 env[1731]: time="2024-02-09T13:53:03.148751680Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 13:53:03.148764 env[1731]: time="2024-02-09T13:53:03.148759903Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 13:53:03.149644 env[1731]: time="2024-02-09T13:53:03.149598193Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 13:53:03.149644 env[1731]: time="2024-02-09T13:53:03.149607292Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 13:53:03.149644 env[1731]: time="2024-02-09T13:53:03.149616117Z" 
level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 13:53:03.149644 env[1731]: time="2024-02-09T13:53:03.149621662Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 13:53:03.595189 env[1731]: time="2024-02-09T13:53:03.595141959Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 13:53:03.595189 env[1731]: time="2024-02-09T13:53:03.595155484Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 13:53:03.595292 env[1731]: time="2024-02-09T13:53:03.595223797Z" level=info msg="Loading containers: start." Feb 9 13:53:03.702826 kernel: Initializing XFRM netlink socket Feb 9 13:53:03.764766 env[1731]: time="2024-02-09T13:53:03.764749334Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 13:53:03.801055 systemd-networkd[1415]: docker0: Link UP Feb 9 13:53:03.805675 env[1731]: time="2024-02-09T13:53:03.805659160Z" level=info msg="Loading containers: done." Feb 9 13:53:03.811445 env[1731]: time="2024-02-09T13:53:03.811425228Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 13:53:03.811564 env[1731]: time="2024-02-09T13:53:03.811551514Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 13:53:03.811634 env[1731]: time="2024-02-09T13:53:03.811624356Z" level=info msg="Daemon has completed initialization" Feb 9 13:53:03.819183 systemd[1]: Started docker.service. Feb 9 13:53:03.823329 env[1731]: time="2024-02-09T13:53:03.823270847Z" level=info msg="API listen on /run/docker.sock" Feb 9 13:53:03.839802 systemd[1]: Reloading. 
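[Editor's note] The dockerd entry above notes that the default bridge (docker0) takes 172.17.0.0/16 and that `--bip` can set a preferred address. If that range collides with local infrastructure, the equivalent daemon configuration is a `bip` key in daemon.json (the address below is illustrative, not from this host):

```json
{
  "bip": "10.200.0.1/24"
}
```

Placed at /etc/docker/daemon.json and applied on daemon restart; `bip` takes the bridge's own address in CIDR form, not a bare network prefix.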
Feb 9 13:53:03.865656 /usr/lib/systemd/system-generators/torcx-generator[1884]: time="2024-02-09T13:53:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 13:53:03.865681 /usr/lib/systemd/system-generators/torcx-generator[1884]: time="2024-02-09T13:53:03Z" level=info msg="torcx already run" Feb 9 13:53:03.926421 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 13:53:03.926431 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 13:53:03.938831 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 13:53:03.990135 systemd[1]: Started kubelet.service. Feb 9 13:53:04.013587 kubelet[1948]: E0209 13:53:04.013531 1948 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 13:53:04.014777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 13:53:04.014898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 13:53:04.707335 env[1559]: time="2024-02-09T13:53:04.707194696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 13:53:05.371658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185729299.mount: Deactivated successfully. 
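[Editor's note] The kubelet exit above ("the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set") is a flag-validation failure, not a runtime crash: this kubelet version requires an explicit CRI endpoint. Since containerd is already serving on /run/containerd/containerd.sock (logged earlier), a hypothetical systemd drop-in supplying the flag would look like:

```ini
# Hypothetical drop-in: /etc/systemd/system/kubelet.service.d/10-runtime.conf
# Points the kubelet at the containerd socket already running on this host.
# KUBELET_EXTRA_ARGS is a kubeadm-style convention; the actual variable
# name depends on how kubelet.service builds its command line.
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

On this boot the unit simply fails and systemd records the exit-code result, as shown in the following entries.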
Feb 9 13:53:07.377173 env[1559]: time="2024-02-09T13:53:07.377148208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:07.377925 env[1559]: time="2024-02-09T13:53:07.377872345Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:07.379006 env[1559]: time="2024-02-09T13:53:07.378962900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:07.379912 env[1559]: time="2024-02-09T13:53:07.379868421Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:07.380312 env[1559]: time="2024-02-09T13:53:07.380259225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 13:53:07.385839 env[1559]: time="2024-02-09T13:53:07.385774929Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 13:53:09.531586 env[1559]: time="2024-02-09T13:53:09.531561425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:09.532252 env[1559]: time="2024-02-09T13:53:09.532240075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 13:53:09.533251 env[1559]: time="2024-02-09T13:53:09.533206790Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:09.534745 env[1559]: time="2024-02-09T13:53:09.534704472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:09.535102 env[1559]: time="2024-02-09T13:53:09.535046279Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 13:53:09.545373 env[1559]: time="2024-02-09T13:53:09.545291203Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 13:53:10.731590 env[1559]: time="2024-02-09T13:53:10.731535873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:10.732149 env[1559]: time="2024-02-09T13:53:10.732094312Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:10.733241 env[1559]: time="2024-02-09T13:53:10.733194742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:10.734271 env[1559]: time="2024-02-09T13:53:10.734232462Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:10.734701 env[1559]: time="2024-02-09T13:53:10.734660421Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 13:53:10.741347 env[1559]: time="2024-02-09T13:53:10.741305877Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 13:53:11.629919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494621574.mount: Deactivated successfully. Feb 9 13:53:11.918435 env[1559]: time="2024-02-09T13:53:11.918383144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:11.919050 env[1559]: time="2024-02-09T13:53:11.918986516Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:11.919784 env[1559]: time="2024-02-09T13:53:11.919741378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:11.920755 env[1559]: time="2024-02-09T13:53:11.920732379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:11.921589 env[1559]: time="2024-02-09T13:53:11.921540462Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference 
\"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 13:53:11.928465 env[1559]: time="2024-02-09T13:53:11.928449148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 13:53:12.490697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2940298692.mount: Deactivated successfully. Feb 9 13:53:12.492021 env[1559]: time="2024-02-09T13:53:12.491955284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:12.492631 env[1559]: time="2024-02-09T13:53:12.492598295Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:12.493322 env[1559]: time="2024-02-09T13:53:12.493286029Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:12.494020 env[1559]: time="2024-02-09T13:53:12.493985601Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:12.494339 env[1559]: time="2024-02-09T13:53:12.494303829Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 13:53:12.499887 env[1559]: time="2024-02-09T13:53:12.499823892Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 13:53:13.241422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926114755.mount: Deactivated successfully. Feb 9 13:53:14.173975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 9 13:53:14.174512 systemd[1]: Stopped kubelet.service. Feb 9 13:53:14.178032 systemd[1]: Started kubelet.service. Feb 9 13:53:14.201557 kubelet[2041]: E0209 13:53:14.201481 2041 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 13:53:14.204629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 13:53:14.204721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 13:53:16.095421 env[1559]: time="2024-02-09T13:53:16.095364664Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:16.095931 env[1559]: time="2024-02-09T13:53:16.095896882Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:16.096865 env[1559]: time="2024-02-09T13:53:16.096809864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:16.097586 env[1559]: time="2024-02-09T13:53:16.097577080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:16.098017 env[1559]: time="2024-02-09T13:53:16.097976508Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 13:53:16.103109 env[1559]: time="2024-02-09T13:53:16.103093506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 13:53:16.742985 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3162494447.mount: Deactivated successfully. Feb 9 13:53:17.180038 env[1559]: time="2024-02-09T13:53:17.179979495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:17.180542 env[1559]: time="2024-02-09T13:53:17.180508766Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:17.181338 env[1559]: time="2024-02-09T13:53:17.181291331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:17.182430 env[1559]: time="2024-02-09T13:53:17.182390922Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:17.182614 env[1559]: time="2024-02-09T13:53:17.182571386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 13:53:19.289895 systemd[1]: Stopped kubelet.service. Feb 9 13:53:19.297779 systemd[1]: Reloading. 
Feb 9 13:53:19.328324 /usr/lib/systemd/system-generators/torcx-generator[2194]: time="2024-02-09T13:53:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 13:53:19.328338 /usr/lib/systemd/system-generators/torcx-generator[2194]: time="2024-02-09T13:53:19Z" level=info msg="torcx already run" Feb 9 13:53:19.394435 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 13:53:19.394447 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 13:53:19.409336 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 13:53:19.464555 systemd[1]: Started kubelet.service. Feb 9 13:53:19.487180 kubelet[2260]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 13:53:19.487180 kubelet[2260]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 13:53:19.487180 kubelet[2260]: I0209 13:53:19.487162 2260 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 13:53:19.487898 kubelet[2260]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 13:53:19.487898 kubelet[2260]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 13:53:19.744354 kubelet[2260]: I0209 13:53:19.744337 2260 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 13:53:19.744354 kubelet[2260]: I0209 13:53:19.744354 2260 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 13:53:19.744751 kubelet[2260]: I0209 13:53:19.744722 2260 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 13:53:19.746262 kubelet[2260]: I0209 13:53:19.746225 2260 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 13:53:19.746645 kubelet[2260]: E0209 13:53:19.746617 2260 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://86.109.11.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.771535 kubelet[2260]: I0209 13:53:19.771497 2260 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 13:53:19.771683 kubelet[2260]: I0209 13:53:19.771652 2260 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 13:53:19.771724 kubelet[2260]: I0209 13:53:19.771710 2260 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 13:53:19.771724 kubelet[2260]: I0209 13:53:19.771719 2260 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 13:53:19.771792 kubelet[2260]: I0209 13:53:19.771726 2260 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 13:53:19.771792 kubelet[2260]: I0209 13:53:19.771767 2260 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 13:53:19.773538 kubelet[2260]: I0209 13:53:19.773531 2260 kubelet.go:398] "Attempting to sync node with API server" Feb 9 13:53:19.773575 kubelet[2260]: I0209 13:53:19.773541 2260 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 13:53:19.773575 kubelet[2260]: I0209 13:53:19.773560 2260 kubelet.go:297] "Adding apiserver pod source" Feb 9 13:53:19.773575 kubelet[2260]: I0209 13:53:19.773569 2260 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 13:53:19.773846 kubelet[2260]: I0209 13:53:19.773839 2260 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 13:53:19.773869 kubelet[2260]: W0209 13:53:19.773847 2260 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://86.109.11.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2834128369&limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.773869 kubelet[2260]: W0209 13:53:19.773850 2260 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://86.109.11.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.773926 kubelet[2260]: E0209 13:53:19.773876 2260 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://86.109.11.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2834128369&limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.773926 kubelet[2260]: E0209 13:53:19.773880 2260 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://86.109.11.101:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.774064 kubelet[2260]: W0209 13:53:19.774031 2260 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 13:53:19.774331 kubelet[2260]: I0209 13:53:19.774301 2260 server.go:1186] "Started kubelet" Feb 9 13:53:19.774507 kubelet[2260]: I0209 13:53:19.774495 2260 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 13:53:19.774762 kubelet[2260]: E0209 13:53:19.774666 2260 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2834128369.17b236329f3707a8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2834128369", UID:"ci-3510.3.2-a-2834128369", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2834128369"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 53, 19, 774287784, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 53, 19, 774287784, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://86.109.11.101:6443/api/v1/namespaces/default/events": dial tcp 86.109.11.101:6443: connect: connection refused'(may retry after sleeping) Feb 9 13:53:19.774926 kubelet[2260]: E0209 
13:53:19.774906 2260 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 13:53:19.774974 kubelet[2260]: E0209 13:53:19.774938 2260 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 13:53:19.785079 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 13:53:19.785178 kubelet[2260]: I0209 13:53:19.785154 2260 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 13:53:19.785234 kubelet[2260]: I0209 13:53:19.785179 2260 server.go:451] "Adding debug handlers to kubelet server" Feb 9 13:53:19.785283 kubelet[2260]: I0209 13:53:19.785232 2260 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 13:53:19.785283 kubelet[2260]: E0209 13:53:19.785272 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:19.785283 kubelet[2260]: I0209 13:53:19.785277 2260 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 13:53:19.785491 kubelet[2260]: W0209 13:53:19.785467 2260 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://86.109.11.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.785491 kubelet[2260]: E0209 13:53:19.785478 2260 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://86.109.11.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2834128369?timeout=10s": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.785564 kubelet[2260]: E0209 13:53:19.785495 2260 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://86.109.11.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.803957 kubelet[2260]: I0209 13:53:19.803925 2260 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 13:53:19.814627 kubelet[2260]: I0209 13:53:19.814604 2260 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 13:53:19.814627 kubelet[2260]: I0209 13:53:19.814614 2260 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 13:53:19.814627 kubelet[2260]: I0209 13:53:19.814624 2260 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 13:53:19.814730 kubelet[2260]: E0209 13:53:19.814656 2260 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 13:53:19.814953 kubelet[2260]: W0209 13:53:19.814910 2260 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://86.109.11.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.814953 kubelet[2260]: E0209 13:53:19.814944 2260 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://86.109.11.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:19.883425 kubelet[2260]: I0209 13:53:19.883330 2260 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 13:53:19.883425 kubelet[2260]: I0209 13:53:19.883383 2260 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 13:53:19.883425 kubelet[2260]: I0209 13:53:19.883429 2260 
state_mem.go:36] "Initialized new in-memory state store" Feb 9 13:53:19.885255 kubelet[2260]: I0209 13:53:19.885183 2260 policy_none.go:49] "None policy: Start" Feb 9 13:53:19.886391 kubelet[2260]: I0209 13:53:19.886345 2260 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 13:53:19.886391 kubelet[2260]: I0209 13:53:19.886395 2260 state_mem.go:35] "Initializing new in-memory state store" Feb 9 13:53:19.888629 kubelet[2260]: I0209 13:53:19.888581 2260 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:19.889438 kubelet[2260]: E0209 13:53:19.889380 2260 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://86.109.11.101:6443/api/v1/nodes\": dial tcp 86.109.11.101:6443: connect: connection refused" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:19.896041 kubelet[2260]: I0209 13:53:19.895980 2260 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 13:53:19.896551 kubelet[2260]: I0209 13:53:19.896510 2260 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 13:53:19.897593 kubelet[2260]: E0209 13:53:19.897534 2260 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:19.915727 kubelet[2260]: I0209 13:53:19.915640 2260 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:19.918836 kubelet[2260]: I0209 13:53:19.918748 2260 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:19.922202 kubelet[2260]: I0209 13:53:19.922125 2260 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:19.923032 kubelet[2260]: I0209 13:53:19.922925 2260 status_manager.go:698] "Failed to get status for pod" podUID=a5436da04a51fb5621932d38624d2674 pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" err="Get 
\"https://86.109.11.101:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-2834128369\": dial tcp 86.109.11.101:6443: connect: connection refused" Feb 9 13:53:19.926099 kubelet[2260]: I0209 13:53:19.926017 2260 status_manager.go:698] "Failed to get status for pod" podUID=aaf8d4f6573d2b6069bc077c6805621b pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" err="Get \"https://86.109.11.101:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-2834128369\": dial tcp 86.109.11.101:6443: connect: connection refused" Feb 9 13:53:19.929914 kubelet[2260]: I0209 13:53:19.929865 2260 status_manager.go:698] "Failed to get status for pod" podUID=9841cc47645ac480ac6a98edf5a59783 pod="kube-system/kube-scheduler-ci-3510.3.2-a-2834128369" err="Get \"https://86.109.11.101:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-2834128369\": dial tcp 86.109.11.101:6443: connect: connection refused" Feb 9 13:53:19.986424 kubelet[2260]: E0209 13:53:19.986293 2260 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://86.109.11.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2834128369?timeout=10s": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:20.086021 kubelet[2260]: I0209 13:53:20.085840 2260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:20.086260 kubelet[2260]: I0209 13:53:20.086057 2260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-kubeconfig\") pod 
\"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:20.086260 kubelet[2260]: I0209 13:53:20.086235 2260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:20.086562 kubelet[2260]: I0209 13:53:20.086323 2260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5436da04a51fb5621932d38624d2674-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2834128369\" (UID: \"a5436da04a51fb5621932d38624d2674\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" Feb 9 13:53:20.086562 kubelet[2260]: I0209 13:53:20.086387 2260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5436da04a51fb5621932d38624d2674-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-2834128369\" (UID: \"a5436da04a51fb5621932d38624d2674\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" Feb 9 13:53:20.086909 kubelet[2260]: I0209 13:53:20.086565 2260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:20.086909 kubelet[2260]: I0209 13:53:20.086693 2260 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5436da04a51fb5621932d38624d2674-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2834128369\" (UID: \"a5436da04a51fb5621932d38624d2674\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" Feb 9 13:53:20.086909 kubelet[2260]: I0209 13:53:20.086827 2260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:20.087232 kubelet[2260]: I0209 13:53:20.086942 2260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9841cc47645ac480ac6a98edf5a59783-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-2834128369\" (UID: \"9841cc47645ac480ac6a98edf5a59783\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-2834128369" Feb 9 13:53:20.093568 kubelet[2260]: I0209 13:53:20.093527 2260 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:20.094259 kubelet[2260]: E0209 13:53:20.094222 2260 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://86.109.11.101:6443/api/v1/nodes\": dial tcp 86.109.11.101:6443: connect: connection refused" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:20.230991 env[1559]: time="2024-02-09T13:53:20.230854237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-2834128369,Uid:a5436da04a51fb5621932d38624d2674,Namespace:kube-system,Attempt:0,}" Feb 9 13:53:20.231722 env[1559]: time="2024-02-09T13:53:20.231628321Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-2834128369,Uid:aaf8d4f6573d2b6069bc077c6805621b,Namespace:kube-system,Attempt:0,}" Feb 9 13:53:20.235746 env[1559]: time="2024-02-09T13:53:20.235670941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-2834128369,Uid:9841cc47645ac480ac6a98edf5a59783,Namespace:kube-system,Attempt:0,}" Feb 9 13:53:20.387319 kubelet[2260]: E0209 13:53:20.387109 2260 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://86.109.11.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2834128369?timeout=10s": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:20.501403 kubelet[2260]: I0209 13:53:20.501312 2260 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:20.502206 kubelet[2260]: E0209 13:53:20.502014 2260 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://86.109.11.101:6443/api/v1/nodes\": dial tcp 86.109.11.101:6443: connect: connection refused" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:20.708243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006017871.mount: Deactivated successfully. 
Feb 9 13:53:20.708969 env[1559]: time="2024-02-09T13:53:20.708915652Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.710225 env[1559]: time="2024-02-09T13:53:20.710189866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.710737 env[1559]: time="2024-02-09T13:53:20.710702704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.711296 env[1559]: time="2024-02-09T13:53:20.711243606Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.712313 env[1559]: time="2024-02-09T13:53:20.712267212Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.712722 env[1559]: time="2024-02-09T13:53:20.712689947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.714154 env[1559]: time="2024-02-09T13:53:20.714113890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.714501 env[1559]: time="2024-02-09T13:53:20.714462253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
13:53:20.716188 env[1559]: time="2024-02-09T13:53:20.716154274Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.717980 env[1559]: time="2024-02-09T13:53:20.717958787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.718769 env[1559]: time="2024-02-09T13:53:20.718729204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.719159 env[1559]: time="2024-02-09T13:53:20.719119151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:20.723751 env[1559]: time="2024-02-09T13:53:20.723714394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:53:20.723751 env[1559]: time="2024-02-09T13:53:20.723739033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:53:20.723751 env[1559]: time="2024-02-09T13:53:20.723728776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:53:20.723751 env[1559]: time="2024-02-09T13:53:20.723744595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:53:20.723751 env[1559]: time="2024-02-09T13:53:20.723746191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:53:20.723928 env[1559]: time="2024-02-09T13:53:20.723753897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:53:20.723928 env[1559]: time="2024-02-09T13:53:20.723821765Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68bc3286fa05edc13b4be0f6dae6297ea97d9868a4bab9929ce221773c32ca1a pid=2351 runtime=io.containerd.runc.v2 Feb 9 13:53:20.723928 env[1559]: time="2024-02-09T13:53:20.723847021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bfdca2bfdbe7e1490100df005ecdebb64622780b6dba098280452e9fc678c4b pid=2359 runtime=io.containerd.runc.v2 Feb 9 13:53:20.725776 env[1559]: time="2024-02-09T13:53:20.725744893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:53:20.725776 env[1559]: time="2024-02-09T13:53:20.725767204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:53:20.725776 env[1559]: time="2024-02-09T13:53:20.725775491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:53:20.725900 env[1559]: time="2024-02-09T13:53:20.725852028Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cbbf924eeee0aa9388cdcad88287de26838e610d1ac8b932af4e056d7887dee pid=2383 runtime=io.containerd.runc.v2 Feb 9 13:53:20.746160 kubelet[2260]: W0209 13:53:20.746116 2260 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://86.109.11.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2834128369&limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:20.746238 kubelet[2260]: E0209 13:53:20.746168 2260 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://86.109.11.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2834128369&limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 13:53:20.765427 env[1559]: time="2024-02-09T13:53:20.765402427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-2834128369,Uid:9841cc47645ac480ac6a98edf5a59783,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cbbf924eeee0aa9388cdcad88287de26838e610d1ac8b932af4e056d7887dee\"" Feb 9 13:53:20.766958 env[1559]: time="2024-02-09T13:53:20.766946687Z" level=info msg="CreateContainer within sandbox \"7cbbf924eeee0aa9388cdcad88287de26838e610d1ac8b932af4e056d7887dee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 13:53:20.771557 env[1559]: time="2024-02-09T13:53:20.771516342Z" level=info msg="CreateContainer within sandbox \"7cbbf924eeee0aa9388cdcad88287de26838e610d1ac8b932af4e056d7887dee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d4392d84adca8b4cb94c0b1f6a6d05d438debb9670238056f67a3a82d937eba\"" Feb 9 13:53:20.771744 
env[1559]: time="2024-02-09T13:53:20.771709439Z" level=info msg="StartContainer for \"8d4392d84adca8b4cb94c0b1f6a6d05d438debb9670238056f67a3a82d937eba\"" Feb 9 13:53:20.777668 env[1559]: time="2024-02-09T13:53:20.777632440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-2834128369,Uid:aaf8d4f6573d2b6069bc077c6805621b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bfdca2bfdbe7e1490100df005ecdebb64622780b6dba098280452e9fc678c4b\"" Feb 9 13:53:20.777767 env[1559]: time="2024-02-09T13:53:20.777645954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-2834128369,Uid:a5436da04a51fb5621932d38624d2674,Namespace:kube-system,Attempt:0,} returns sandbox id \"68bc3286fa05edc13b4be0f6dae6297ea97d9868a4bab9929ce221773c32ca1a\"" Feb 9 13:53:20.778882 env[1559]: time="2024-02-09T13:53:20.778862894Z" level=info msg="CreateContainer within sandbox \"68bc3286fa05edc13b4be0f6dae6297ea97d9868a4bab9929ce221773c32ca1a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 13:53:20.778923 env[1559]: time="2024-02-09T13:53:20.778885695Z" level=info msg="CreateContainer within sandbox \"0bfdca2bfdbe7e1490100df005ecdebb64622780b6dba098280452e9fc678c4b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 13:53:20.783313 env[1559]: time="2024-02-09T13:53:20.783291156Z" level=info msg="CreateContainer within sandbox \"68bc3286fa05edc13b4be0f6dae6297ea97d9868a4bab9929ce221773c32ca1a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2fece5d6b91fdda8c0e37228891a22bae23984e7551ac2b1769863128090c8cd\"" Feb 9 13:53:20.783533 env[1559]: time="2024-02-09T13:53:20.783518474Z" level=info msg="StartContainer for \"2fece5d6b91fdda8c0e37228891a22bae23984e7551ac2b1769863128090c8cd\"" Feb 9 13:53:20.784094 env[1559]: time="2024-02-09T13:53:20.784077642Z" level=info msg="CreateContainer within sandbox 
\"0bfdca2bfdbe7e1490100df005ecdebb64622780b6dba098280452e9fc678c4b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b0d7b4f2897e3e308088d2d14cbe81ec07dca68b7ae8d43625da1b73c38dd269\"" Feb 9 13:53:20.784242 env[1559]: time="2024-02-09T13:53:20.784229935Z" level=info msg="StartContainer for \"b0d7b4f2897e3e308088d2d14cbe81ec07dca68b7ae8d43625da1b73c38dd269\"" Feb 9 13:53:20.816021 env[1559]: time="2024-02-09T13:53:20.815995085Z" level=info msg="StartContainer for \"2fece5d6b91fdda8c0e37228891a22bae23984e7551ac2b1769863128090c8cd\" returns successfully" Feb 9 13:53:20.816353 env[1559]: time="2024-02-09T13:53:20.816337368Z" level=info msg="StartContainer for \"8d4392d84adca8b4cb94c0b1f6a6d05d438debb9670238056f67a3a82d937eba\" returns successfully" Feb 9 13:53:20.818913 kubelet[2260]: I0209 13:53:20.818899 2260 status_manager.go:698] "Failed to get status for pod" podUID=a5436da04a51fb5621932d38624d2674 pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" err="Get \"https://86.109.11.101:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-2834128369\": dial tcp 86.109.11.101:6443: connect: connection refused" Feb 9 13:53:20.829570 env[1559]: time="2024-02-09T13:53:20.829541385Z" level=info msg="StartContainer for \"b0d7b4f2897e3e308088d2d14cbe81ec07dca68b7ae8d43625da1b73c38dd269\" returns successfully" Feb 9 13:53:21.303339 kubelet[2260]: I0209 13:53:21.303296 2260 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:21.650587 kubelet[2260]: I0209 13:53:21.650538 2260 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:21.654999 kubelet[2260]: E0209 13:53:21.654982 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:21.755533 kubelet[2260]: E0209 13:53:21.755460 2260 kubelet_node_status.go:458] "Error getting the current node 
from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:21.855702 kubelet[2260]: E0209 13:53:21.855676 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:21.956455 kubelet[2260]: E0209 13:53:21.956384 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:22.057641 kubelet[2260]: E0209 13:53:22.057573 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:22.157906 kubelet[2260]: E0209 13:53:22.157843 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:22.259196 kubelet[2260]: E0209 13:53:22.259027 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:22.359403 kubelet[2260]: E0209 13:53:22.359332 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:22.460212 kubelet[2260]: E0209 13:53:22.460136 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:22.560410 kubelet[2260]: E0209 13:53:22.560340 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:22.660551 kubelet[2260]: E0209 13:53:22.660485 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:22.761429 kubelet[2260]: E0209 13:53:22.761366 2260 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:23.776159 kubelet[2260]: I0209 13:53:23.776096 2260 
apiserver.go:52] "Watching apiserver" Feb 9 13:53:23.786037 kubelet[2260]: I0209 13:53:23.785988 2260 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 13:53:23.809774 kubelet[2260]: I0209 13:53:23.809681 2260 reconciler.go:41] "Reconciler: start to sync state" Feb 9 13:53:24.568430 systemd[1]: Reloading. Feb 9 13:53:24.608677 /usr/lib/systemd/system-generators/torcx-generator[2624]: time="2024-02-09T13:53:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 13:53:24.608700 /usr/lib/systemd/system-generators/torcx-generator[2624]: time="2024-02-09T13:53:24Z" level=info msg="torcx already run" Feb 9 13:53:24.662700 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 13:53:24.662707 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 13:53:24.674071 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 13:53:24.731616 systemd[1]: Stopping kubelet.service... Feb 9 13:53:24.731683 kubelet[2260]: I0209 13:53:24.731632 2260 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 13:53:24.750006 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 13:53:24.750191 systemd[1]: Stopped kubelet.service. Feb 9 13:53:24.751140 systemd[1]: Started kubelet.service. 
Feb 9 13:53:24.773326 kubelet[2690]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 13:53:24.773326 kubelet[2690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 13:53:24.773601 kubelet[2690]: I0209 13:53:24.773350 2690 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 13:53:24.774168 kubelet[2690]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 13:53:24.774168 kubelet[2690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 13:53:24.775903 kubelet[2690]: I0209 13:53:24.775864 2690 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 13:53:24.775903 kubelet[2690]: I0209 13:53:24.775874 2690 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 13:53:24.776051 kubelet[2690]: I0209 13:53:24.775997 2690 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 13:53:24.776646 kubelet[2690]: I0209 13:53:24.776639 2690 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 9 13:53:24.776976 kubelet[2690]: I0209 13:53:24.776968 2690 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 13:53:24.811368 kubelet[2690]: I0209 13:53:24.811282 2690 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 13:53:24.812356 kubelet[2690]: I0209 13:53:24.812276 2690 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 13:53:24.812522 kubelet[2690]: I0209 13:53:24.812432 2690 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 13:53:24.812522 kubelet[2690]: I0209 13:53:24.812478 2690 topology_manager.go:134] "Creating topology 
manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 13:53:24.812522 kubelet[2690]: I0209 13:53:24.812508 2690 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 13:53:24.812981 kubelet[2690]: I0209 13:53:24.812581 2690 state_mem.go:36] "Initialized new in-memory state store" Feb 9 13:53:24.819406 kubelet[2690]: I0209 13:53:24.819291 2690 kubelet.go:398] "Attempting to sync node with API server" Feb 9 13:53:24.819406 kubelet[2690]: I0209 13:53:24.819343 2690 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 13:53:24.819681 kubelet[2690]: I0209 13:53:24.819408 2690 kubelet.go:297] "Adding apiserver pod source" Feb 9 13:53:24.819681 kubelet[2690]: I0209 13:53:24.819444 2690 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 13:53:24.820821 kubelet[2690]: I0209 13:53:24.820746 2690 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 13:53:24.822008 kubelet[2690]: I0209 13:53:24.821968 2690 server.go:1186] "Started kubelet" Feb 9 13:53:24.822224 kubelet[2690]: I0209 13:53:24.822071 2690 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 13:53:24.823020 kubelet[2690]: E0209 13:53:24.822967 2690 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 13:53:24.823020 kubelet[2690]: E0209 13:53:24.823029 2690 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 13:53:24.824969 kubelet[2690]: I0209 13:53:24.824922 2690 server.go:451] "Adding debug handlers to kubelet server" Feb 9 13:53:24.826117 kubelet[2690]: I0209 13:53:24.826074 2690 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 13:53:24.826343 kubelet[2690]: I0209 13:53:24.826292 2690 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 13:53:24.827319 kubelet[2690]: I0209 13:53:24.827236 2690 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 13:53:24.827686 kubelet[2690]: E0209 13:53:24.826410 2690 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2834128369\" not found" Feb 9 13:53:24.862254 kubelet[2690]: I0209 13:53:24.862236 2690 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 13:53:24.871170 kubelet[2690]: I0209 13:53:24.871129 2690 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 13:53:24.871170 kubelet[2690]: I0209 13:53:24.871146 2690 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 13:53:24.871170 kubelet[2690]: I0209 13:53:24.871159 2690 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 13:53:24.871297 kubelet[2690]: E0209 13:53:24.871208 2690 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 13:53:24.882841 kubelet[2690]: I0209 13:53:24.882824 2690 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 13:53:24.882841 kubelet[2690]: I0209 13:53:24.882834 2690 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 13:53:24.882841 kubelet[2690]: I0209 13:53:24.882842 2690 state_mem.go:36] "Initialized new in-memory state store" Feb 9 13:53:24.882934 kubelet[2690]: I0209 13:53:24.882924 2690 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 13:53:24.882934 kubelet[2690]: I0209 13:53:24.882932 2690 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 13:53:24.882967 kubelet[2690]: I0209 13:53:24.882936 2690 policy_none.go:49] "None policy: Start" Feb 9 13:53:24.883210 kubelet[2690]: I0209 13:53:24.883175 2690 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 13:53:24.883210 kubelet[2690]: I0209 13:53:24.883187 2690 state_mem.go:35] "Initializing new in-memory state store" Feb 9 13:53:24.883262 kubelet[2690]: I0209 13:53:24.883256 2690 state_mem.go:75] "Updated machine memory state" Feb 9 13:53:24.883919 kubelet[2690]: I0209 13:53:24.883885 2690 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 13:53:24.884001 kubelet[2690]: I0209 13:53:24.883994 2690 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 13:53:24.930794 kubelet[2690]: I0209 13:53:24.930751 2690 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.2-a-2834128369" Feb 9 13:53:24.936968 kubelet[2690]: I0209 13:53:24.936931 2690 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:24.937027 kubelet[2690]: I0209 13:53:24.936977 2690 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-2834128369" Feb 9 13:53:24.957226 sudo[2754]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 13:53:24.957474 sudo[2754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 13:53:24.971966 kubelet[2690]: I0209 13:53:24.971925 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:24.972122 kubelet[2690]: I0209 13:53:24.972083 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:24.972272 kubelet[2690]: I0209 13:53:24.972180 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:25.127737 kubelet[2690]: I0209 13:53:25.127637 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5436da04a51fb5621932d38624d2674-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2834128369\" (UID: \"a5436da04a51fb5621932d38624d2674\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.127737 kubelet[2690]: I0209 13:53:25.127666 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5436da04a51fb5621932d38624d2674-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-2834128369\" (UID: \"a5436da04a51fb5621932d38624d2674\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.127737 kubelet[2690]: I0209 13:53:25.127721 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.127873 kubelet[2690]: I0209 13:53:25.127742 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9841cc47645ac480ac6a98edf5a59783-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-2834128369\" (UID: \"9841cc47645ac480ac6a98edf5a59783\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.127873 kubelet[2690]: I0209 13:53:25.127755 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5436da04a51fb5621932d38624d2674-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2834128369\" (UID: \"a5436da04a51fb5621932d38624d2674\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.127873 kubelet[2690]: I0209 13:53:25.127768 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.127873 kubelet[2690]: I0209 13:53:25.127780 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.127873 kubelet[2690]: I0209 13:53:25.127801 2690 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.127953 kubelet[2690]: I0209 13:53:25.127819 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaf8d4f6573d2b6069bc077c6805621b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-2834128369\" (UID: \"aaf8d4f6573d2b6069bc077c6805621b\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.224430 kubelet[2690]: E0209 13:53:25.224368 2690 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-2834128369\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:25.317520 sudo[2754]: pam_unix(sudo:session): session closed for user root Feb 9 13:53:25.820891 kubelet[2690]: I0209 13:53:25.820771 2690 apiserver.go:52] "Watching apiserver" Feb 9 13:53:26.028678 kubelet[2690]: I0209 13:53:26.028613 2690 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 13:53:26.033827 kubelet[2690]: I0209 13:53:26.033774 2690 reconciler.go:41] "Reconciler: start to sync state" Feb 9 13:53:26.223554 sudo[1709]: pam_unix(sudo:session): session closed for user root Feb 9 13:53:26.225270 sshd[1703]: pam_unix(sshd:session): session closed for user core Feb 9 13:53:26.229144 systemd[1]: sshd@4-86.109.11.101:22-147.75.109.163:56186.service: Deactivated successfully. Feb 9 13:53:26.231285 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. 
Feb 9 13:53:26.231315 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 13:53:26.233216 systemd-logind[1545]: Removed session 7. Feb 9 13:53:26.428672 kubelet[2690]: E0209 13:53:26.428572 2690 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-2834128369\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" Feb 9 13:53:26.628926 kubelet[2690]: E0209 13:53:26.628726 2690 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-2834128369\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" Feb 9 13:53:26.828263 kubelet[2690]: E0209 13:53:26.828182 2690 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-2834128369\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-2834128369" Feb 9 13:53:27.030219 kubelet[2690]: I0209 13:53:27.030207 2690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-2834128369" podStartSLOduration=3.030130639 pod.CreationTimestamp="2024-02-09 13:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:53:27.030120276 +0000 UTC m=+2.277258271" watchObservedRunningTime="2024-02-09 13:53:27.030130639 +0000 UTC m=+2.277268624" Feb 9 13:53:27.828946 kubelet[2690]: I0209 13:53:27.828930 2690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2834128369" podStartSLOduration=4.828876123 pod.CreationTimestamp="2024-02-09 13:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:53:27.427831518 +0000 UTC m=+2.674969511" watchObservedRunningTime="2024-02-09 13:53:27.828876123 +0000 UTC m=+3.076014104" Feb 9 13:53:28.231147 
kubelet[2690]: I0209 13:53:28.231082 2690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-2834128369" podStartSLOduration=3.230988877 pod.CreationTimestamp="2024-02-09 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:53:28.230891457 +0000 UTC m=+3.478029578" watchObservedRunningTime="2024-02-09 13:53:28.230988877 +0000 UTC m=+3.478126995" Feb 9 13:53:35.423778 update_engine[1547]: I0209 13:53:35.423658 1547 update_attempter.cc:509] Updating boot flags... Feb 9 13:53:38.484012 kubelet[2690]: I0209 13:53:38.483979 2690 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 13:53:38.484389 kubelet[2690]: I0209 13:53:38.484344 2690 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 13:53:38.484422 env[1559]: time="2024-02-09T13:53:38.484248550Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 13:53:39.266208 kubelet[2690]: I0209 13:53:39.266189 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:39.268292 kubelet[2690]: I0209 13:53:39.268272 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:39.316248 kubelet[2690]: I0209 13:53:39.316193 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fshrn\" (UniqueName: \"kubernetes.io/projected/ff840108-464e-4bb0-a0c9-82ea84617d89-kube-api-access-fshrn\") pod \"kube-proxy-rh5w8\" (UID: \"ff840108-464e-4bb0-a0c9-82ea84617d89\") " pod="kube-system/kube-proxy-rh5w8" Feb 9 13:53:39.316248 kubelet[2690]: I0209 13:53:39.316228 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cilium-run\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316248 kubelet[2690]: I0209 13:53:39.316242 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6209d075-666f-40ed-aab4-d0989090d806-clustermesh-secrets\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316248 kubelet[2690]: I0209 13:53:39.316257 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-host-proc-sys-net\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316421 kubelet[2690]: I0209 13:53:39.316271 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/6209d075-666f-40ed-aab4-d0989090d806-hubble-tls\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316421 kubelet[2690]: I0209 13:53:39.316285 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgnkf\" (UniqueName: \"kubernetes.io/projected/6209d075-666f-40ed-aab4-d0989090d806-kube-api-access-mgnkf\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316421 kubelet[2690]: I0209 13:53:39.316326 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff840108-464e-4bb0-a0c9-82ea84617d89-xtables-lock\") pod \"kube-proxy-rh5w8\" (UID: \"ff840108-464e-4bb0-a0c9-82ea84617d89\") " pod="kube-system/kube-proxy-rh5w8" Feb 9 13:53:39.316421 kubelet[2690]: I0209 13:53:39.316362 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cni-path\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316421 kubelet[2690]: I0209 13:53:39.316396 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-host-proc-sys-kernel\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316528 kubelet[2690]: I0209 13:53:39.316428 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-etc-cni-netd\") pod \"cilium-fndc7\" (UID: 
\"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316528 kubelet[2690]: I0209 13:53:39.316471 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cilium-cgroup\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316528 kubelet[2690]: I0209 13:53:39.316492 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff840108-464e-4bb0-a0c9-82ea84617d89-lib-modules\") pod \"kube-proxy-rh5w8\" (UID: \"ff840108-464e-4bb0-a0c9-82ea84617d89\") " pod="kube-system/kube-proxy-rh5w8" Feb 9 13:53:39.316528 kubelet[2690]: I0209 13:53:39.316509 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-lib-modules\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316528 kubelet[2690]: I0209 13:53:39.316524 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6209d075-666f-40ed-aab4-d0989090d806-cilium-config-path\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316628 kubelet[2690]: I0209 13:53:39.316537 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-xtables-lock\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316628 kubelet[2690]: I0209 
13:53:39.316564 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff840108-464e-4bb0-a0c9-82ea84617d89-kube-proxy\") pod \"kube-proxy-rh5w8\" (UID: \"ff840108-464e-4bb0-a0c9-82ea84617d89\") " pod="kube-system/kube-proxy-rh5w8" Feb 9 13:53:39.316628 kubelet[2690]: I0209 13:53:39.316579 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-hostproc\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.316628 kubelet[2690]: I0209 13:53:39.316593 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-bpf-maps\") pod \"cilium-fndc7\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " pod="kube-system/cilium-fndc7" Feb 9 13:53:39.481482 kubelet[2690]: I0209 13:53:39.481456 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:39.518623 kubelet[2690]: I0209 13:53:39.518534 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7092c37-d7c1-458d-8dfb-a389a62463af-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-swwhm\" (UID: \"b7092c37-d7c1-458d-8dfb-a389a62463af\") " pod="kube-system/cilium-operator-f59cbd8c6-swwhm" Feb 9 13:53:39.518623 kubelet[2690]: I0209 13:53:39.518567 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdd4\" (UniqueName: \"kubernetes.io/projected/b7092c37-d7c1-458d-8dfb-a389a62463af-kube-api-access-fzdd4\") pod \"cilium-operator-f59cbd8c6-swwhm\" (UID: \"b7092c37-d7c1-458d-8dfb-a389a62463af\") " 
pod="kube-system/cilium-operator-f59cbd8c6-swwhm" Feb 9 13:53:39.570115 env[1559]: time="2024-02-09T13:53:39.570016212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rh5w8,Uid:ff840108-464e-4bb0-a0c9-82ea84617d89,Namespace:kube-system,Attempt:0,}" Feb 9 13:53:39.572167 env[1559]: time="2024-02-09T13:53:39.572081889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fndc7,Uid:6209d075-666f-40ed-aab4-d0989090d806,Namespace:kube-system,Attempt:0,}" Feb 9 13:53:39.595555 env[1559]: time="2024-02-09T13:53:39.595415606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:53:39.595555 env[1559]: time="2024-02-09T13:53:39.595515199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:53:39.595988 env[1559]: time="2024-02-09T13:53:39.595559149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:53:39.596164 env[1559]: time="2024-02-09T13:53:39.595960636Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/26309f57459717bed77afe384d7b62042af29ee25d5e9d9b1138ac50aadb8ddc pid=2894 runtime=io.containerd.runc.v2 Feb 9 13:53:39.596448 env[1559]: time="2024-02-09T13:53:39.596327225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:53:39.596588 env[1559]: time="2024-02-09T13:53:39.596426566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:53:39.596588 env[1559]: time="2024-02-09T13:53:39.596481651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:53:39.597072 env[1559]: time="2024-02-09T13:53:39.596968020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1 pid=2895 runtime=io.containerd.runc.v2 Feb 9 13:53:39.677798 env[1559]: time="2024-02-09T13:53:39.677728085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fndc7,Uid:6209d075-666f-40ed-aab4-d0989090d806,Namespace:kube-system,Attempt:0,} returns sandbox id \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\"" Feb 9 13:53:39.678996 env[1559]: time="2024-02-09T13:53:39.678947899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rh5w8,Uid:ff840108-464e-4bb0-a0c9-82ea84617d89,Namespace:kube-system,Attempt:0,} returns sandbox id \"26309f57459717bed77afe384d7b62042af29ee25d5e9d9b1138ac50aadb8ddc\"" Feb 9 13:53:39.679589 env[1559]: time="2024-02-09T13:53:39.679537316Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 13:53:39.681652 env[1559]: time="2024-02-09T13:53:39.681618460Z" level=info msg="CreateContainer within sandbox \"26309f57459717bed77afe384d7b62042af29ee25d5e9d9b1138ac50aadb8ddc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 13:53:39.693866 env[1559]: time="2024-02-09T13:53:39.693761433Z" level=info msg="CreateContainer within sandbox \"26309f57459717bed77afe384d7b62042af29ee25d5e9d9b1138ac50aadb8ddc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"06c2799c7fd6f75533b21af71c90488a62e4508d771e63997fc2d67d54fe661b\"" Feb 9 13:53:39.694416 env[1559]: time="2024-02-09T13:53:39.694342369Z" level=info msg="StartContainer for \"06c2799c7fd6f75533b21af71c90488a62e4508d771e63997fc2d67d54fe661b\"" Feb 9 13:53:39.750353 env[1559]: time="2024-02-09T13:53:39.750279362Z" 
level=info msg="StartContainer for \"06c2799c7fd6f75533b21af71c90488a62e4508d771e63997fc2d67d54fe661b\" returns successfully" Feb 9 13:53:40.085289 env[1559]: time="2024-02-09T13:53:40.085209891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-swwhm,Uid:b7092c37-d7c1-458d-8dfb-a389a62463af,Namespace:kube-system,Attempt:0,}" Feb 9 13:53:40.213503 env[1559]: time="2024-02-09T13:53:40.213349944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:53:40.213503 env[1559]: time="2024-02-09T13:53:40.213449075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:53:40.213503 env[1559]: time="2024-02-09T13:53:40.213484068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:53:40.213950 env[1559]: time="2024-02-09T13:53:40.213824457Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f pid=3104 runtime=io.containerd.runc.v2 Feb 9 13:53:40.316235 env[1559]: time="2024-02-09T13:53:40.316195761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-swwhm,Uid:b7092c37-d7c1-458d-8dfb-a389a62463af,Namespace:kube-system,Attempt:0,} returns sandbox id \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\"" Feb 9 13:53:43.783283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539124894.mount: Deactivated successfully. 
Feb 9 13:53:45.481432 env[1559]: time="2024-02-09T13:53:45.481409925Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:45.481904 env[1559]: time="2024-02-09T13:53:45.481892296Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:45.482895 env[1559]: time="2024-02-09T13:53:45.482877883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:45.483231 env[1559]: time="2024-02-09T13:53:45.483219601Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 13:53:45.483708 env[1559]: time="2024-02-09T13:53:45.483693075Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 13:53:45.484411 env[1559]: time="2024-02-09T13:53:45.484378494Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 13:53:45.489566 env[1559]: time="2024-02-09T13:53:45.489545293Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\"" Feb 9 13:53:45.489814 
env[1559]: time="2024-02-09T13:53:45.489799918Z" level=info msg="StartContainer for \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\"" Feb 9 13:53:45.490578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101258792.mount: Deactivated successfully. Feb 9 13:53:45.534719 env[1559]: time="2024-02-09T13:53:45.534692156Z" level=info msg="StartContainer for \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\" returns successfully" Feb 9 13:53:45.954068 kubelet[2690]: I0209 13:53:45.954019 2690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rh5w8" podStartSLOduration=6.953948587 pod.CreationTimestamp="2024-02-09 13:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:53:40.478908585 +0000 UTC m=+15.726046679" watchObservedRunningTime="2024-02-09 13:53:45.953948587 +0000 UTC m=+21.201086603" Feb 9 13:53:46.493380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833-rootfs.mount: Deactivated successfully. Feb 9 13:53:47.754208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930634930.mount: Deactivated successfully. 
Feb 9 13:53:47.951802 env[1559]: time="2024-02-09T13:53:47.951652328Z" level=info msg="shim disconnected" id=9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833 Feb 9 13:53:47.951802 env[1559]: time="2024-02-09T13:53:47.951746519Z" level=warning msg="cleaning up after shim disconnected" id=9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833 namespace=k8s.io Feb 9 13:53:47.951802 env[1559]: time="2024-02-09T13:53:47.951776576Z" level=info msg="cleaning up dead shim" Feb 9 13:53:47.980026 env[1559]: time="2024-02-09T13:53:47.979913476Z" level=warning msg="cleanup warnings time=\"2024-02-09T13:53:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3203 runtime=io.containerd.runc.v2\n" Feb 9 13:53:48.420103 env[1559]: time="2024-02-09T13:53:48.420025652Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:48.420910 env[1559]: time="2024-02-09T13:53:48.420858841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:48.422337 env[1559]: time="2024-02-09T13:53:48.422289701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:53:48.422959 env[1559]: time="2024-02-09T13:53:48.422902662Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 13:53:48.424302 env[1559]: 
time="2024-02-09T13:53:48.424248752Z" level=info msg="CreateContainer within sandbox \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 13:53:48.429384 env[1559]: time="2024-02-09T13:53:48.429325714Z" level=info msg="CreateContainer within sandbox \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\"" Feb 9 13:53:48.429681 env[1559]: time="2024-02-09T13:53:48.429659292Z" level=info msg="StartContainer for \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\"" Feb 9 13:53:48.474029 env[1559]: time="2024-02-09T13:53:48.473997012Z" level=info msg="StartContainer for \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\" returns successfully" Feb 9 13:53:48.940602 env[1559]: time="2024-02-09T13:53:48.940529192Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 13:53:48.947813 kubelet[2690]: I0209 13:53:48.947772 2690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-swwhm" podStartSLOduration=-9.223372026907076e+09 pod.CreationTimestamp="2024-02-09 13:53:39 +0000 UTC" firstStartedPulling="2024-02-09 13:53:40.316982497 +0000 UTC m=+15.564120488" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:53:48.947271839 +0000 UTC m=+24.194409904" watchObservedRunningTime="2024-02-09 13:53:48.947700631 +0000 UTC m=+24.194838649" Feb 9 13:53:48.948702 env[1559]: time="2024-02-09T13:53:48.948661943Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\"" Feb 9 13:53:48.949157 env[1559]: time="2024-02-09T13:53:48.949118998Z" level=info msg="StartContainer for \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\"" Feb 9 13:53:48.998803 env[1559]: time="2024-02-09T13:53:48.998770432Z" level=info msg="StartContainer for \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\" returns successfully" Feb 9 13:53:49.004174 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 13:53:49.004326 systemd[1]: Stopped systemd-sysctl.service. Feb 9 13:53:49.004418 systemd[1]: Stopping systemd-sysctl.service... Feb 9 13:53:49.005269 systemd[1]: Starting systemd-sysctl.service... Feb 9 13:53:49.009533 systemd[1]: Finished systemd-sysctl.service. Feb 9 13:53:49.177121 env[1559]: time="2024-02-09T13:53:49.177023634Z" level=info msg="shim disconnected" id=46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619 Feb 9 13:53:49.177498 env[1559]: time="2024-02-09T13:53:49.177125301Z" level=warning msg="cleaning up after shim disconnected" id=46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619 namespace=k8s.io Feb 9 13:53:49.177498 env[1559]: time="2024-02-09T13:53:49.177155497Z" level=info msg="cleaning up dead shim" Feb 9 13:53:49.193548 env[1559]: time="2024-02-09T13:53:49.193358925Z" level=warning msg="cleanup warnings time=\"2024-02-09T13:53:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3318 runtime=io.containerd.runc.v2\n" Feb 9 13:53:49.749393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619-rootfs.mount: Deactivated successfully. 
Feb 9 13:53:49.945001 env[1559]: time="2024-02-09T13:53:49.944920750Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 13:53:49.954245 env[1559]: time="2024-02-09T13:53:49.954176241Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\"" Feb 9 13:53:49.954710 env[1559]: time="2024-02-09T13:53:49.954654357Z" level=info msg="StartContainer for \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\"" Feb 9 13:53:49.956735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514245918.mount: Deactivated successfully. Feb 9 13:53:50.012402 env[1559]: time="2024-02-09T13:53:50.012226795Z" level=info msg="StartContainer for \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\" returns successfully" Feb 9 13:53:50.069435 env[1559]: time="2024-02-09T13:53:50.069335060Z" level=info msg="shim disconnected" id=1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9 Feb 9 13:53:50.069435 env[1559]: time="2024-02-09T13:53:50.069431179Z" level=warning msg="cleaning up after shim disconnected" id=1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9 namespace=k8s.io Feb 9 13:53:50.069922 env[1559]: time="2024-02-09T13:53:50.069461775Z" level=info msg="cleaning up dead shim" Feb 9 13:53:50.096947 env[1559]: time="2024-02-09T13:53:50.096873293Z" level=warning msg="cleanup warnings time=\"2024-02-09T13:53:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3375 runtime=io.containerd.runc.v2\n" Feb 9 13:53:50.748285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9-rootfs.mount: Deactivated 
successfully. Feb 9 13:53:50.954807 env[1559]: time="2024-02-09T13:53:50.954657436Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 13:53:50.962626 env[1559]: time="2024-02-09T13:53:50.962605939Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\"" Feb 9 13:53:50.962912 env[1559]: time="2024-02-09T13:53:50.962898729Z" level=info msg="StartContainer for \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\"" Feb 9 13:53:51.003421 env[1559]: time="2024-02-09T13:53:51.003253810Z" level=info msg="StartContainer for \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\" returns successfully" Feb 9 13:53:51.048326 env[1559]: time="2024-02-09T13:53:51.048235072Z" level=info msg="shim disconnected" id=83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208 Feb 9 13:53:51.049181 env[1559]: time="2024-02-09T13:53:51.048330783Z" level=warning msg="cleaning up after shim disconnected" id=83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208 namespace=k8s.io Feb 9 13:53:51.049181 env[1559]: time="2024-02-09T13:53:51.048359447Z" level=info msg="cleaning up dead shim" Feb 9 13:53:51.075849 env[1559]: time="2024-02-09T13:53:51.075713709Z" level=warning msg="cleanup warnings time=\"2024-02-09T13:53:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3429 runtime=io.containerd.runc.v2\n" Feb 9 13:53:51.752063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208-rootfs.mount: Deactivated successfully. 
Feb 9 13:53:51.964805 env[1559]: time="2024-02-09T13:53:51.964659667Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 13:53:51.981229 env[1559]: time="2024-02-09T13:53:51.981110113Z" level=info msg="CreateContainer within sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\"" Feb 9 13:53:51.982139 env[1559]: time="2024-02-09T13:53:51.982043590Z" level=info msg="StartContainer for \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\"" Feb 9 13:53:51.990405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998535081.mount: Deactivated successfully. Feb 9 13:53:52.035746 env[1559]: time="2024-02-09T13:53:52.035698141Z" level=info msg="StartContainer for \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\" returns successfully" Feb 9 13:53:52.086869 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 13:53:52.115226 kubelet[2690]: I0209 13:53:52.115213 2690 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 13:53:52.125777 kubelet[2690]: I0209 13:53:52.125759 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:52.126675 kubelet[2690]: I0209 13:53:52.126664 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:53:52.207311 kubelet[2690]: I0209 13:53:52.207293 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxzhp\" (UniqueName: \"kubernetes.io/projected/8c01b76d-c8df-4a79-9189-20a83e07312f-kube-api-access-zxzhp\") pod \"coredns-787d4945fb-bwhsd\" (UID: \"8c01b76d-c8df-4a79-9189-20a83e07312f\") " pod="kube-system/coredns-787d4945fb-bwhsd" Feb 9 13:53:52.207391 kubelet[2690]: I0209 13:53:52.207318 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7nv\" (UniqueName: \"kubernetes.io/projected/281eaed4-aaa7-42a5-bc86-21981e4a1f3e-kube-api-access-bh7nv\") pod \"coredns-787d4945fb-wfpqg\" (UID: \"281eaed4-aaa7-42a5-bc86-21981e4a1f3e\") " pod="kube-system/coredns-787d4945fb-wfpqg" Feb 9 13:53:52.207391 kubelet[2690]: I0209 13:53:52.207334 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c01b76d-c8df-4a79-9189-20a83e07312f-config-volume\") pod \"coredns-787d4945fb-bwhsd\" (UID: \"8c01b76d-c8df-4a79-9189-20a83e07312f\") " pod="kube-system/coredns-787d4945fb-bwhsd" Feb 9 13:53:52.207391 kubelet[2690]: I0209 13:53:52.207347 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281eaed4-aaa7-42a5-bc86-21981e4a1f3e-config-volume\") pod \"coredns-787d4945fb-wfpqg\" (UID: \"281eaed4-aaa7-42a5-bc86-21981e4a1f3e\") " pod="kube-system/coredns-787d4945fb-wfpqg" Feb 9 
13:53:52.220794 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 9 13:53:52.428754 env[1559]: time="2024-02-09T13:53:52.428644603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bwhsd,Uid:8c01b76d-c8df-4a79-9189-20a83e07312f,Namespace:kube-system,Attempt:0,}" Feb 9 13:53:52.429615 env[1559]: time="2024-02-09T13:53:52.429154362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wfpqg,Uid:281eaed4-aaa7-42a5-bc86-21981e4a1f3e,Namespace:kube-system,Attempt:0,}" Feb 9 13:53:53.001707 kubelet[2690]: I0209 13:53:53.001643 2690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fndc7" podStartSLOduration=-9.223372022853218e+09 pod.CreationTimestamp="2024-02-09 13:53:39 +0000 UTC" firstStartedPulling="2024-02-09 13:53:39.678993096 +0000 UTC m=+14.926131088" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:53:53.001444616 +0000 UTC m=+28.248582728" watchObservedRunningTime="2024-02-09 13:53:53.001556933 +0000 UTC m=+28.248694960" Feb 9 13:53:53.803457 systemd-networkd[1415]: cilium_host: Link UP Feb 9 13:53:53.803559 systemd-networkd[1415]: cilium_net: Link UP Feb 9 13:53:53.803561 systemd-networkd[1415]: cilium_net: Gained carrier Feb 9 13:53:53.803688 systemd-networkd[1415]: cilium_host: Gained carrier Feb 9 13:53:53.811760 systemd-networkd[1415]: cilium_host: Gained IPv6LL Feb 9 13:53:53.811890 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 13:53:53.854383 systemd-networkd[1415]: cilium_vxlan: Link UP Feb 9 13:53:53.854385 systemd-networkd[1415]: cilium_vxlan: Gained carrier Feb 9 13:53:53.999793 kernel: NET: Registered PF_ALG protocol family Feb 9 13:53:54.289926 systemd-networkd[1415]: cilium_net: Gained IPv6LL Feb 9 13:53:54.468746 systemd-networkd[1415]: lxc_health: Link UP Feb 9 13:53:54.489717 systemd-networkd[1415]: 
lxc_health: Gained carrier
Feb 9 13:53:54.489854 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 13:53:54.976981 systemd-networkd[1415]: lxccf220a22be20: Link UP
Feb 9 13:53:55.004217 systemd-networkd[1415]: lxc2e275b785abc: Link UP
Feb 9 13:53:55.008916 kernel: eth0: renamed from tmp17b52
Feb 9 13:53:55.030793 kernel: eth0: renamed from tmpc4242
Feb 9 13:53:55.058279 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 13:53:55.058340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccf220a22be20: link becomes ready
Feb 9 13:53:55.058389 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 13:53:55.072449 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2e275b785abc: link becomes ready
Feb 9 13:53:55.072603 systemd-networkd[1415]: lxccf220a22be20: Gained carrier
Feb 9 13:53:55.072729 systemd-networkd[1415]: lxc2e275b785abc: Gained carrier
Feb 9 13:53:55.913910 systemd-networkd[1415]: cilium_vxlan: Gained IPv6LL
Feb 9 13:53:56.233932 systemd-networkd[1415]: lxc_health: Gained IPv6LL
Feb 9 13:53:56.297973 systemd-networkd[1415]: lxccf220a22be20: Gained IPv6LL
Feb 9 13:53:56.937949 systemd-networkd[1415]: lxc2e275b785abc: Gained IPv6LL
Feb 9 13:53:57.384507 env[1559]: time="2024-02-09T13:53:57.384445591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 13:53:57.384507 env[1559]: time="2024-02-09T13:53:57.384468180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 13:53:57.384507 env[1559]: time="2024-02-09T13:53:57.384475153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 13:53:57.384735 env[1559]: time="2024-02-09T13:53:57.384535762Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17b525ec86b531c88f4043076e50185e270d052a3780b31ac93e2a510962f600 pid=4114 runtime=io.containerd.runc.v2
Feb 9 13:53:57.385052 env[1559]: time="2024-02-09T13:53:57.385023592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 13:53:57.385052 env[1559]: time="2024-02-09T13:53:57.385042027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 13:53:57.385117 env[1559]: time="2024-02-09T13:53:57.385049101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 13:53:57.385185 env[1559]: time="2024-02-09T13:53:57.385161429Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c42426c56ec3073995b0371fd83d187983797fdd008011524c83bd279374e110 pid=4121 runtime=io.containerd.runc.v2
Feb 9 13:53:57.424077 env[1559]: time="2024-02-09T13:53:57.424044866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bwhsd,Uid:8c01b76d-c8df-4a79-9189-20a83e07312f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c42426c56ec3073995b0371fd83d187983797fdd008011524c83bd279374e110\""
Feb 9 13:53:57.424428 env[1559]: time="2024-02-09T13:53:57.424414268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wfpqg,Uid:281eaed4-aaa7-42a5-bc86-21981e4a1f3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"17b525ec86b531c88f4043076e50185e270d052a3780b31ac93e2a510962f600\""
Feb 9 13:53:57.425230 env[1559]: time="2024-02-09T13:53:57.425214583Z" level=info msg="CreateContainer within sandbox \"c42426c56ec3073995b0371fd83d187983797fdd008011524c83bd279374e110\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 13:53:57.425365 env[1559]: time="2024-02-09T13:53:57.425352510Z" level=info msg="CreateContainer within sandbox \"17b525ec86b531c88f4043076e50185e270d052a3780b31ac93e2a510962f600\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 13:53:57.444843 env[1559]: time="2024-02-09T13:53:57.444757017Z" level=info msg="CreateContainer within sandbox \"c42426c56ec3073995b0371fd83d187983797fdd008011524c83bd279374e110\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dab54992be1378a2061b9d9f5d652a317f4b11810c98ec9975289839a37a8afc\""
Feb 9 13:53:57.445540 env[1559]: time="2024-02-09T13:53:57.445462441Z" level=info msg="StartContainer for \"dab54992be1378a2061b9d9f5d652a317f4b11810c98ec9975289839a37a8afc\""
Feb 9 13:53:57.445857 env[1559]: time="2024-02-09T13:53:57.445759976Z" level=info msg="CreateContainer within sandbox \"17b525ec86b531c88f4043076e50185e270d052a3780b31ac93e2a510962f600\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"682e24c5a55f23a37a64ff2932fafea597de2149cd06f4b9ee5cfc7142861b2d\""
Feb 9 13:53:57.446514 env[1559]: time="2024-02-09T13:53:57.446449734Z" level=info msg="StartContainer for \"682e24c5a55f23a37a64ff2932fafea597de2149cd06f4b9ee5cfc7142861b2d\""
Feb 9 13:53:57.533762 env[1559]: time="2024-02-09T13:53:57.533723145Z" level=info msg="StartContainer for \"682e24c5a55f23a37a64ff2932fafea597de2149cd06f4b9ee5cfc7142861b2d\" returns successfully"
Feb 9 13:53:57.533885 env[1559]: time="2024-02-09T13:53:57.533852034Z" level=info msg="StartContainer for \"dab54992be1378a2061b9d9f5d652a317f4b11810c98ec9975289839a37a8afc\" returns successfully"
Feb 9 13:53:57.996247 kubelet[2690]: I0209 13:53:57.996149 2690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-wfpqg" podStartSLOduration=18.996072099 pod.CreationTimestamp="2024-02-09 13:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:53:57.995874752 +0000 UTC m=+33.243012795" watchObservedRunningTime="2024-02-09 13:53:57.996072099 +0000 UTC m=+33.243210139"
Feb 9 13:53:58.011701 kubelet[2690]: I0209 13:53:58.011683 2690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-bwhsd" podStartSLOduration=19.011660039 pod.CreationTimestamp="2024-02-09 13:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:53:58.011437692 +0000 UTC m=+33.258575678" watchObservedRunningTime="2024-02-09 13:53:58.011660039 +0000 UTC m=+33.258798011"
Feb 9 13:54:07.319459 kubelet[2690]: I0209 13:54:07.319347 2690 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 9 13:55:40.241965 systemd[1]: Started sshd@5-86.109.11.101:22-61.177.172.136:54057.service.
Feb 9 13:55:41.997047 sshd[4346]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root
Feb 9 13:55:43.923209 sshd[4346]: Failed password for root from 61.177.172.136 port 54057 ssh2
Feb 9 13:55:46.511289 sshd[4346]: Failed password for root from 61.177.172.136 port 54057 ssh2
Feb 9 13:55:49.533331 sshd[4346]: Failed password for root from 61.177.172.136 port 54057 ssh2
Feb 9 13:55:50.277363 sshd[4346]: Received disconnect from 61.177.172.136 port 54057:11: [preauth]
Feb 9 13:55:50.277363 sshd[4346]: Disconnected from authenticating user root 61.177.172.136 port 54057 [preauth]
Feb 9 13:55:50.277917 sshd[4346]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root
Feb 9 13:55:50.279913 systemd[1]: sshd@5-86.109.11.101:22-61.177.172.136:54057.service: Deactivated successfully.
Feb 9 13:55:51.418580 systemd[1]: Started sshd@6-86.109.11.101:22-61.177.172.136:64996.service.
Feb 9 13:55:54.211401 sshd[4350]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root
Feb 9 13:55:55.922109 sshd[4350]: Failed password for root from 61.177.172.136 port 64996 ssh2
Feb 9 13:55:56.696653 sshd[4350]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 13:55:58.682823 sshd[4350]: Failed password for root from 61.177.172.136 port 64996 ssh2
Feb 9 13:56:00.909051 sshd[4350]: Failed password for root from 61.177.172.136 port 64996 ssh2
Feb 9 13:56:01.672202 sshd[4350]: Received disconnect from 61.177.172.136 port 64996:11: [preauth]
Feb 9 13:56:01.672202 sshd[4350]: Disconnected from authenticating user root 61.177.172.136 port 64996 [preauth]
Feb 9 13:56:01.672700 sshd[4350]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root
Feb 9 13:56:01.674693 systemd[1]: sshd@6-86.109.11.101:22-61.177.172.136:64996.service: Deactivated successfully.
Feb 9 13:56:01.853432 systemd[1]: Started sshd@7-86.109.11.101:22-61.177.172.136:26570.service.
Feb 9 13:56:03.248814 sshd[4357]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root
Feb 9 13:56:05.195142 sshd[4357]: Failed password for root from 61.177.172.136 port 26570 ssh2
Feb 9 13:56:07.640701 sshd[4357]: Failed password for root from 61.177.172.136 port 26570 ssh2
Feb 9 13:56:10.212920 sshd[4357]: Failed password for root from 61.177.172.136 port 26570 ssh2
Feb 9 13:56:11.132634 sshd[4357]: Received disconnect from 61.177.172.136 port 26570:11: [preauth]
Feb 9 13:56:11.132634 sshd[4357]: Disconnected from authenticating user root 61.177.172.136 port 26570 [preauth]
Feb 9 13:56:11.133193 sshd[4357]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.136 user=root
Feb 9 13:56:11.135170 systemd[1]: sshd@7-86.109.11.101:22-61.177.172.136:26570.service: Deactivated successfully.
Feb 9 13:56:33.809299 systemd[1]: Started sshd@8-86.109.11.101:22-85.209.11.27:37356.service.
Feb 9 13:56:35.463438 sshd[4365]: Invalid user admin from 85.209.11.27 port 37356
Feb 9 13:56:35.845949 sshd[4365]: pam_faillock(sshd:auth): User unknown
Feb 9 13:56:35.846963 sshd[4365]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 13:56:35.847050 sshd[4365]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=85.209.11.27
Feb 9 13:56:35.848050 sshd[4365]: pam_faillock(sshd:auth): User unknown
Feb 9 13:56:37.854043 sshd[4365]: Failed password for invalid user admin from 85.209.11.27 port 37356 ssh2
Feb 9 13:56:38.759672 sshd[4365]: Connection closed by invalid user admin 85.209.11.27 port 37356 [preauth]
Feb 9 13:56:38.762139 systemd[1]: sshd@8-86.109.11.101:22-85.209.11.27:37356.service: Deactivated successfully.
Feb 9 13:58:01.654296 systemd[1]: Started sshd@9-86.109.11.101:22-218.92.0.56:19785.service.
Feb 9 13:58:03.121309 sshd[4382]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 13:58:05.208187 sshd[4382]: Failed password for root from 218.92.0.56 port 19785 ssh2
Feb 9 13:58:07.309823 sshd[4382]: Failed password for root from 218.92.0.56 port 19785 ssh2
Feb 9 13:58:10.938183 sshd[4382]: Failed password for root from 218.92.0.56 port 19785 ssh2
Feb 9 13:58:13.320183 sshd[4382]: Received disconnect from 218.92.0.56 port 19785:11: [preauth]
Feb 9 13:58:13.320183 sshd[4382]: Disconnected from authenticating user root 218.92.0.56 port 19785 [preauth]
Feb 9 13:58:13.320728 sshd[4382]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 13:58:13.322681 systemd[1]: sshd@9-86.109.11.101:22-218.92.0.56:19785.service: Deactivated successfully.
Feb 9 13:58:13.490134 systemd[1]: Started sshd@10-86.109.11.101:22-218.92.0.56:17256.service.
Feb 9 13:58:14.534159 sshd[4388]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 13:58:16.465497 sshd[4388]: Failed password for root from 218.92.0.56 port 17256 ssh2
Feb 9 13:58:17.023922 sshd[4388]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 13:58:19.035086 sshd[4388]: Failed password for root from 218.92.0.56 port 17256 ssh2
Feb 9 13:58:21.797019 sshd[4388]: Failed password for root from 218.92.0.56 port 17256 ssh2
Feb 9 13:58:21.998191 sshd[4388]: Received disconnect from 218.92.0.56 port 17256:11: [preauth]
Feb 9 13:58:21.998191 sshd[4388]: Disconnected from authenticating user root 218.92.0.56 port 17256 [preauth]
Feb 9 13:58:21.998709 sshd[4388]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 13:58:22.000678 systemd[1]: sshd@10-86.109.11.101:22-218.92.0.56:17256.service: Deactivated successfully.
Feb 9 13:58:22.141487 systemd[1]: Started sshd@11-86.109.11.101:22-218.92.0.56:23246.service.
Feb 9 13:58:23.519204 sshd[4392]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 13:58:25.354676 sshd[4392]: Failed password for root from 218.92.0.56 port 23246 ssh2
Feb 9 13:58:27.775198 sshd[4392]: Failed password for root from 218.92.0.56 port 23246 ssh2
Feb 9 13:58:30.669772 sshd[4392]: Failed password for root from 218.92.0.56 port 23246 ssh2
Feb 9 13:58:30.962458 sshd[4392]: Received disconnect from 218.92.0.56 port 23246:11: [preauth]
Feb 9 13:58:30.962458 sshd[4392]: Disconnected from authenticating user root 218.92.0.56 port 23246 [preauth]
Feb 9 13:58:30.963051 sshd[4392]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root
Feb 9 13:58:30.965101 systemd[1]: sshd@11-86.109.11.101:22-218.92.0.56:23246.service: Deactivated successfully.
Feb 9 14:01:35.268752 systemd[1]: Started sshd@12-86.109.11.101:22-61.177.172.160:40664.service.
Feb 9 14:01:36.173824 sshd[4419]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 14:01:38.170061 sshd[4419]: Failed password for root from 61.177.172.160 port 40664 ssh2
Feb 9 14:01:40.246963 sshd[4419]: Failed password for root from 61.177.172.160 port 40664 ssh2
Feb 9 14:01:42.815127 sshd[4419]: Failed password for root from 61.177.172.160 port 40664 ssh2
Feb 9 14:01:43.935131 sshd[4419]: Received disconnect from 61.177.172.160 port 40664:11: [preauth]
Feb 9 14:01:43.935131 sshd[4419]: Disconnected from authenticating user root 61.177.172.160 port 40664 [preauth]
Feb 9 14:01:43.935660 sshd[4419]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 14:01:43.937615 systemd[1]: sshd@12-86.109.11.101:22-61.177.172.160:40664.service: Deactivated successfully.
Feb 9 14:01:44.078144 systemd[1]: Started sshd@13-86.109.11.101:22-61.177.172.160:43707.service.
Feb 9 14:01:44.233821 systemd[1]: Started sshd@14-86.109.11.101:22-180.101.88.197:29703.service.
Feb 9 14:01:44.990836 sshd[4426]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 14:01:45.720842 sshd[4428]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 14:01:45.720914 sshd[4428]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 14:01:46.751519 sshd[4426]: Failed password for root from 61.177.172.160 port 43707 ssh2
Feb 9 14:01:47.285637 sshd[4428]: Failed password for root from 180.101.88.197 port 29703 ssh2
Feb 9 14:01:49.633017 sshd[4426]: Failed password for root from 61.177.172.160 port 43707 ssh2
Feb 9 14:01:49.856506 sshd[4428]: Failed password for root from 180.101.88.197 port 29703 ssh2
Feb 9 14:01:52.046499 sshd[4426]: Failed password for root from 61.177.172.160 port 43707 ssh2
Feb 9 14:01:52.401219 sshd[4426]: Received disconnect from 61.177.172.160 port 43707:11: [preauth]
Feb 9 14:01:52.401219 sshd[4426]: Disconnected from authenticating user root 61.177.172.160 port 43707 [preauth]
Feb 9 14:01:52.401646 sshd[4426]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 14:01:52.403650 systemd[1]: sshd@13-86.109.11.101:22-61.177.172.160:43707.service: Deactivated successfully.
Feb 9 14:01:52.557711 systemd[1]: Started sshd@15-86.109.11.101:22-61.177.172.160:43112.service.
Feb 9 14:01:52.954435 sshd[4428]: Failed password for root from 180.101.88.197 port 29703 ssh2
Feb 9 14:01:53.531017 sshd[4434]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 14:01:53.573767 sshd[4428]: Received disconnect from 180.101.88.197 port 29703:11: [preauth]
Feb 9 14:01:53.573767 sshd[4428]: Disconnected from authenticating user root 180.101.88.197 port 29703 [preauth]
Feb 9 14:01:53.574348 sshd[4428]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 14:01:53.576365 systemd[1]: sshd@14-86.109.11.101:22-180.101.88.197:29703.service: Deactivated successfully.
Feb 9 14:01:53.728138 systemd[1]: Started sshd@16-86.109.11.101:22-180.101.88.197:39842.service.
Feb 9 14:01:54.860171 sshd[4434]: Failed password for root from 61.177.172.160 port 43112 ssh2
Feb 9 14:01:54.961146 sshd[4439]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 14:01:56.761927 sshd[4439]: Failed password for root from 180.101.88.197 port 39842 ssh2
Feb 9 14:01:57.752014 sshd[4434]: Failed password for root from 61.177.172.160 port 43112 ssh2
Feb 9 14:01:58.992484 sshd[4439]: Failed password for root from 180.101.88.197 port 39842 ssh2
Feb 9 14:02:00.732999 sshd[4434]: Failed password for root from 61.177.172.160 port 43112 ssh2
Feb 9 14:02:01.422478 sshd[4439]: Failed password for root from 180.101.88.197 port 39842 ssh2
Feb 9 14:02:01.907981 sshd[4434]: Received disconnect from 61.177.172.160 port 43112:11: [preauth]
Feb 9 14:02:01.907981 sshd[4434]: Disconnected from authenticating user root 61.177.172.160 port 43112 [preauth]
Feb 9 14:02:01.908475 sshd[4434]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.160 user=root
Feb 9 14:02:01.910993 systemd[1]: sshd@15-86.109.11.101:22-61.177.172.160:43112.service: Deactivated successfully.
Feb 9 14:02:02.421409 sshd[4439]: Received disconnect from 180.101.88.197 port 39842:11: [preauth]
Feb 9 14:02:02.421409 sshd[4439]: Disconnected from authenticating user root 180.101.88.197 port 39842 [preauth]
Feb 9 14:02:02.422024 sshd[4439]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 14:02:02.423966 systemd[1]: sshd@16-86.109.11.101:22-180.101.88.197:39842.service: Deactivated successfully.
Feb 9 14:02:03.577996 systemd[1]: Started sshd@17-86.109.11.101:22-180.101.88.197:43001.service.
Feb 9 14:02:05.177904 systemd[1]: Started sshd@18-86.109.11.101:22-101.42.135.203:49228.service.
Feb 9 14:02:06.053248 sshd[4445]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 14:02:06.733138 sshd[4447]: Invalid user jorda from 101.42.135.203 port 49228
Feb 9 14:02:06.739178 sshd[4447]: pam_faillock(sshd:auth): User unknown
Feb 9 14:02:06.740406 sshd[4447]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:02:06.740493 sshd[4447]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=101.42.135.203
Feb 9 14:02:06.741584 sshd[4447]: pam_faillock(sshd:auth): User unknown
Feb 9 14:02:07.425665 systemd[1]: Started sshd@19-86.109.11.101:22-170.64.194.223:43176.service.
Feb 9 14:02:07.834074 sshd[4445]: Failed password for root from 180.101.88.197 port 43001 ssh2
Feb 9 14:02:08.244704 sshd[4449]: Invalid user zuzab from 170.64.194.223 port 43176
Feb 9 14:02:08.250873 sshd[4449]: pam_faillock(sshd:auth): User unknown
Feb 9 14:02:08.252062 sshd[4449]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:02:08.252152 sshd[4449]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.194.223
Feb 9 14:02:08.253169 sshd[4449]: pam_faillock(sshd:auth): User unknown
Feb 9 14:02:08.522165 sshd[4447]: Failed password for invalid user jorda from 101.42.135.203 port 49228 ssh2
Feb 9 14:02:09.954566 sshd[4447]: Received disconnect from 101.42.135.203 port 49228:11: Bye Bye [preauth]
Feb 9 14:02:09.954566 sshd[4447]: Disconnected from invalid user jorda 101.42.135.203 port 49228 [preauth]
Feb 9 14:02:09.957058 systemd[1]: sshd@18-86.109.11.101:22-101.42.135.203:49228.service: Deactivated successfully.
Feb 9 14:02:10.309725 sshd[4449]: Failed password for invalid user zuzab from 170.64.194.223 port 43176 ssh2
Feb 9 14:02:10.596186 sshd[4445]: Failed password for root from 180.101.88.197 port 43001 ssh2
Feb 9 14:02:12.368164 sshd[4449]: Received disconnect from 170.64.194.223 port 43176:11: Bye Bye [preauth]
Feb 9 14:02:12.368164 sshd[4449]: Disconnected from invalid user zuzab 170.64.194.223 port 43176 [preauth]
Feb 9 14:02:12.370570 systemd[1]: sshd@19-86.109.11.101:22-170.64.194.223:43176.service: Deactivated successfully.
Feb 9 14:02:12.496011 sshd[4445]: Failed password for root from 180.101.88.197 port 43001 ssh2
Feb 9 14:02:13.518667 sshd[4445]: Received disconnect from 180.101.88.197 port 43001:11: [preauth]
Feb 9 14:02:13.518667 sshd[4445]: Disconnected from authenticating user root 180.101.88.197 port 43001 [preauth]
Feb 9 14:02:13.519245 sshd[4445]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root
Feb 9 14:02:13.521210 systemd[1]: sshd@17-86.109.11.101:22-180.101.88.197:43001.service: Deactivated successfully.
Feb 9 14:03:07.494313 update_engine[1547]: I0209 14:03:07.494187 1547 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 9 14:03:07.494313 update_engine[1547]: I0209 14:03:07.494266 1547 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 9 14:03:07.498962 update_engine[1547]: I0209 14:03:07.498080 1547 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 9 14:03:07.499083 update_engine[1547]: I0209 14:03:07.498984 1547 omaha_request_params.cc:62] Current group set to lts
Feb 9 14:03:07.499346 update_engine[1547]: I0209 14:03:07.499272 1547 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 9 14:03:07.499346 update_engine[1547]: I0209 14:03:07.499290 1547 update_attempter.cc:643] Scheduling an action processor start.
Feb 9 14:03:07.499346 update_engine[1547]: I0209 14:03:07.499321 1547 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 14:03:07.499721 update_engine[1547]: I0209 14:03:07.499384 1547 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 9 14:03:07.499721 update_engine[1547]: I0209 14:03:07.499531 1547 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 14:03:07.499721 update_engine[1547]: I0209 14:03:07.499548 1547 omaha_request_action.cc:271] Request:
Feb 9 14:03:07.499721 update_engine[1547]:
Feb 9 14:03:07.499721 update_engine[1547]:
Feb 9 14:03:07.499721 update_engine[1547]:
Feb 9 14:03:07.499721 update_engine[1547]:
Feb 9 14:03:07.499721 update_engine[1547]:
Feb 9 14:03:07.499721 update_engine[1547]:
Feb 9 14:03:07.499721 update_engine[1547]:
Feb 9 14:03:07.499721 update_engine[1547]:
Feb 9 14:03:07.499721 update_engine[1547]: I0209 14:03:07.499558 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 14:03:07.500755 locksmithd[1596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 9 14:03:07.502835 update_engine[1547]: I0209 14:03:07.502769 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 14:03:07.503145 update_engine[1547]: E0209 14:03:07.503067 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 14:03:07.503346 update_engine[1547]: I0209 14:03:07.503237 1547 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 9 14:03:17.424710 update_engine[1547]: I0209 14:03:17.424494 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 14:03:17.425669 update_engine[1547]: I0209 14:03:17.425081 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 14:03:17.425669 update_engine[1547]: E0209 14:03:17.425297 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 14:03:17.425669 update_engine[1547]: I0209 14:03:17.425467 1547 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 9 14:03:27.424677 update_engine[1547]: I0209 14:03:27.424593 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 14:03:27.425682 update_engine[1547]: I0209 14:03:27.425088 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 14:03:27.425682 update_engine[1547]: E0209 14:03:27.425287 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 14:03:27.425682 update_engine[1547]: I0209 14:03:27.425454 1547 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 9 14:03:37.424938 update_engine[1547]: I0209 14:03:37.424813 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 14:03:37.425943 update_engine[1547]: I0209 14:03:37.425292 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 14:03:37.425943 update_engine[1547]: E0209 14:03:37.425496 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 14:03:37.425943 update_engine[1547]: I0209 14:03:37.425654 1547 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 14:03:37.425943 update_engine[1547]: I0209 14:03:37.425670 1547 omaha_request_action.cc:621] Omaha request response:
Feb 9 14:03:37.425943 update_engine[1547]: E0209 14:03:37.425834 1547 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 9 14:03:37.425943 update_engine[1547]: I0209 14:03:37.425864 1547 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 9 14:03:37.425943 update_engine[1547]: I0209 14:03:37.425874 1547 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 14:03:37.425943 update_engine[1547]: I0209 14:03:37.425884 1547 update_attempter.cc:306] Processing Done.
Feb 9 14:03:37.425943 update_engine[1547]: E0209 14:03:37.425909 1547 update_attempter.cc:619] Update failed.
Feb 9 14:03:37.425943 update_engine[1547]: I0209 14:03:37.425918 1547 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 9 14:03:37.425943 update_engine[1547]: I0209 14:03:37.425927 1547 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 9 14:03:37.425943 update_engine[1547]: I0209 14:03:37.425936 1547 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426088 1547 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426139 1547 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426148 1547 omaha_request_action.cc:271] Request:
Feb 9 14:03:37.427063 update_engine[1547]:
Feb 9 14:03:37.427063 update_engine[1547]:
Feb 9 14:03:37.427063 update_engine[1547]:
Feb 9 14:03:37.427063 update_engine[1547]:
Feb 9 14:03:37.427063 update_engine[1547]:
Feb 9 14:03:37.427063 update_engine[1547]:
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426158 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426476 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 14:03:37.427063 update_engine[1547]: E0209 14:03:37.426635 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426766 1547 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426797 1547 omaha_request_action.cc:621] Omaha request response:
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426810 1547 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426816 1547 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426826 1547 update_attempter.cc:306] Processing Done.
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426832 1547 update_attempter.cc:310] Error event sent.
Feb 9 14:03:37.427063 update_engine[1547]: I0209 14:03:37.426859 1547 update_check_scheduler.cc:74] Next update check in 45m45s
Feb 9 14:03:37.428628 locksmithd[1596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 9 14:03:37.428628 locksmithd[1596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 9 14:04:21.514050 systemd[1]: Started sshd@20-86.109.11.101:22-165.154.183.15:21080.service.
Feb 9 14:04:21.693588 sshd[4476]: Invalid user qlab from 165.154.183.15 port 21080
Feb 9 14:04:21.699830 sshd[4476]: pam_faillock(sshd:auth): User unknown
Feb 9 14:04:21.700904 sshd[4476]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:04:21.700994 sshd[4476]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.183.15
Feb 9 14:04:21.701915 sshd[4476]: pam_faillock(sshd:auth): User unknown
Feb 9 14:04:23.682944 sshd[4476]: Failed password for invalid user qlab from 165.154.183.15 port 21080 ssh2
Feb 9 14:04:24.845338 sshd[4476]: Received disconnect from 165.154.183.15 port 21080:11: Bye Bye [preauth]
Feb 9 14:04:24.845338 sshd[4476]: Disconnected from invalid user qlab 165.154.183.15 port 21080 [preauth]
Feb 9 14:04:24.848045 systemd[1]: sshd@20-86.109.11.101:22-165.154.183.15:21080.service: Deactivated successfully.
Feb 9 14:06:00.498021 systemd[1]: Started sshd@21-86.109.11.101:22-165.227.228.212:50998.service.
Feb 9 14:06:01.324071 sshd[4491]: Invalid user stemp from 165.227.228.212 port 50998
Feb 9 14:06:01.330249 sshd[4491]: pam_faillock(sshd:auth): User unknown
Feb 9 14:06:01.331245 sshd[4491]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:06:01.331327 sshd[4491]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.228.212
Feb 9 14:06:01.332280 sshd[4491]: pam_faillock(sshd:auth): User unknown
Feb 9 14:06:03.042236 sshd[4491]: Failed password for invalid user stemp from 165.227.228.212 port 50998 ssh2
Feb 9 14:06:03.743295 sshd[4491]: Received disconnect from 165.227.228.212 port 50998:11: Bye Bye [preauth]
Feb 9 14:06:03.743295 sshd[4491]: Disconnected from invalid user stemp 165.227.228.212 port 50998 [preauth]
Feb 9 14:06:03.745759 systemd[1]: sshd@21-86.109.11.101:22-165.227.228.212:50998.service: Deactivated successfully.
Feb 9 14:06:57.175919 systemd[1]: Started sshd@22-86.109.11.101:22-165.227.228.212:41240.service.
Feb 9 14:06:57.970028 sshd[4501]: Invalid user jooksan from 165.227.228.212 port 41240
Feb 9 14:06:57.976133 sshd[4501]: pam_faillock(sshd:auth): User unknown
Feb 9 14:06:57.977123 sshd[4501]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:06:57.977214 sshd[4501]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.228.212
Feb 9 14:06:57.978157 sshd[4501]: pam_faillock(sshd:auth): User unknown
Feb 9 14:07:00.044204 sshd[4501]: Failed password for invalid user jooksan from 165.227.228.212 port 41240 ssh2
Feb 9 14:07:01.047485 sshd[4501]: Received disconnect from 165.227.228.212 port 41240:11: Bye Bye [preauth]
Feb 9 14:07:01.047485 sshd[4501]: Disconnected from invalid user jooksan 165.227.228.212 port 41240 [preauth]
Feb 9 14:07:01.049977 systemd[1]: sshd@22-86.109.11.101:22-165.227.228.212:41240.service: Deactivated successfully.
Feb 9 14:07:01.853702 systemd[1]: Started sshd@23-86.109.11.101:22-170.64.194.223:50626.service.
Feb 9 14:07:02.678238 sshd[4506]: Invalid user afhasti from 170.64.194.223 port 50626
Feb 9 14:07:02.684245 sshd[4506]: pam_faillock(sshd:auth): User unknown
Feb 9 14:07:02.685398 sshd[4506]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:07:02.685485 sshd[4506]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.194.223
Feb 9 14:07:02.686490 sshd[4506]: pam_faillock(sshd:auth): User unknown
Feb 9 14:07:02.978190 systemd[1]: Started sshd@24-86.109.11.101:22-147.75.109.163:55002.service.
Feb 9 14:07:03.008969 sshd[4508]: Accepted publickey for core from 147.75.109.163 port 55002 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:03.009847 sshd[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:03.013293 systemd-logind[1545]: New session 8 of user core.
Feb 9 14:07:03.014298 systemd[1]: Started session-8.scope.
Feb 9 14:07:03.148670 sshd[4508]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:03.150268 systemd[1]: sshd@24-86.109.11.101:22-147.75.109.163:55002.service: Deactivated successfully.
Feb 9 14:07:03.150996 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit.
Feb 9 14:07:03.151032 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 14:07:03.151637 systemd-logind[1545]: Removed session 8.
Feb 9 14:07:04.436829 sshd[4506]: Failed password for invalid user afhasti from 170.64.194.223 port 50626 ssh2
Feb 9 14:07:04.628177 sshd[4506]: Received disconnect from 170.64.194.223 port 50626:11: Bye Bye [preauth]
Feb 9 14:07:04.628177 sshd[4506]: Disconnected from invalid user afhasti 170.64.194.223 port 50626 [preauth]
Feb 9 14:07:04.629544 systemd[1]: sshd@23-86.109.11.101:22-170.64.194.223:50626.service: Deactivated successfully.
Feb 9 14:07:08.157748 systemd[1]: Started sshd@25-86.109.11.101:22-147.75.109.163:55078.service.
Feb 9 14:07:08.191423 sshd[4537]: Accepted publickey for core from 147.75.109.163 port 55078 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:08.192311 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:08.195522 systemd-logind[1545]: New session 9 of user core.
Feb 9 14:07:08.196192 systemd[1]: Started session-9.scope.
Feb 9 14:07:08.280996 sshd[4537]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:08.282326 systemd[1]: sshd@25-86.109.11.101:22-147.75.109.163:55078.service: Deactivated successfully.
Feb 9 14:07:08.282944 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 14:07:08.282992 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit.
Feb 9 14:07:08.283559 systemd-logind[1545]: Removed session 9.
Feb 9 14:07:13.287589 systemd[1]: Started sshd@26-86.109.11.101:22-147.75.109.163:55088.service.
Feb 9 14:07:13.317938 sshd[4566]: Accepted publickey for core from 147.75.109.163 port 55088 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:13.318888 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:13.322163 systemd-logind[1545]: New session 10 of user core.
Feb 9 14:07:13.323004 systemd[1]: Started session-10.scope.
Feb 9 14:07:13.410159 sshd[4566]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:13.411736 systemd[1]: sshd@26-86.109.11.101:22-147.75.109.163:55088.service: Deactivated successfully.
Feb 9 14:07:13.412488 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit.
Feb 9 14:07:13.412527 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 14:07:13.413225 systemd-logind[1545]: Removed session 10.
Feb 9 14:07:18.417271 systemd[1]: Started sshd@27-86.109.11.101:22-147.75.109.163:33026.service.
Feb 9 14:07:18.418229 systemd[1]: Starting systemd-tmpfiles-clean.service...
Feb 9 14:07:18.423870 systemd-tmpfiles[4595]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 14:07:18.424096 systemd-tmpfiles[4595]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 14:07:18.424779 systemd-tmpfiles[4595]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 14:07:18.434188 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 9 14:07:18.434418 systemd[1]: Finished systemd-tmpfiles-clean.service.
Feb 9 14:07:18.435881 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 9 14:07:18.447721 sshd[4594]: Accepted publickey for core from 147.75.109.163 port 33026 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:18.448493 sshd[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:18.451033 systemd-logind[1545]: New session 11 of user core.
Feb 9 14:07:18.451489 systemd[1]: Started session-11.scope.
Feb 9 14:07:18.541627 sshd[4594]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:18.543735 systemd[1]: Started sshd@28-86.109.11.101:22-147.75.109.163:33034.service.
Feb 9 14:07:18.544222 systemd[1]: sshd@27-86.109.11.101:22-147.75.109.163:33026.service: Deactivated successfully.
Feb 9 14:07:18.545066 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit.
Feb 9 14:07:18.545091 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 14:07:18.545779 systemd-logind[1545]: Removed session 11.
Feb 9 14:07:18.575884 sshd[4625]: Accepted publickey for core from 147.75.109.163 port 33034 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:18.576801 sshd[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:18.579991 systemd-logind[1545]: New session 12 of user core.
Feb 9 14:07:18.580619 systemd[1]: Started session-12.scope.
Feb 9 14:07:19.109264 sshd[4625]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:19.111133 systemd[1]: Started sshd@29-86.109.11.101:22-147.75.109.163:33040.service.
Feb 9 14:07:19.111454 systemd[1]: sshd@28-86.109.11.101:22-147.75.109.163:33034.service: Deactivated successfully.
Feb 9 14:07:19.112126 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit.
Feb 9 14:07:19.112150 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 14:07:19.112591 systemd-logind[1545]: Removed session 12.
Feb 9 14:07:19.142183 sshd[4650]: Accepted publickey for core from 147.75.109.163 port 33040 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:19.142996 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:19.145485 systemd-logind[1545]: New session 13 of user core.
Feb 9 14:07:19.146076 systemd[1]: Started session-13.scope.
Feb 9 14:07:19.256114 systemd[1]: Started sshd@30-86.109.11.101:22-165.154.183.15:51614.service.
Feb 9 14:07:19.272945 sshd[4650]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:19.274428 systemd[1]: sshd@29-86.109.11.101:22-147.75.109.163:33040.service: Deactivated successfully.
Feb 9 14:07:19.275078 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 14:07:19.275115 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit.
Feb 9 14:07:19.275646 systemd-logind[1545]: Removed session 13.
Feb 9 14:07:19.421892 sshd[4678]: Invalid user stemp from 165.154.183.15 port 51614
Feb 9 14:07:19.423459 sshd[4678]: pam_faillock(sshd:auth): User unknown
Feb 9 14:07:19.423722 sshd[4678]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:07:19.423743 sshd[4678]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.183.15
Feb 9 14:07:19.424014 sshd[4678]: pam_faillock(sshd:auth): User unknown
Feb 9 14:07:20.842692 sshd[4678]: Failed password for invalid user stemp from 165.154.183.15 port 51614 ssh2
Feb 9 14:07:21.712705 sshd[4678]: Received disconnect from 165.154.183.15 port 51614:11: Bye Bye [preauth]
Feb 9 14:07:21.712705 sshd[4678]: Disconnected from invalid user stemp 165.154.183.15 port 51614 [preauth]
Feb 9 14:07:21.715219 systemd[1]: sshd@30-86.109.11.101:22-165.154.183.15:51614.service: Deactivated successfully.
Feb 9 14:07:24.279911 systemd[1]: Started sshd@31-86.109.11.101:22-147.75.109.163:33046.service.
Feb 9 14:07:24.309995 sshd[4687]: Accepted publickey for core from 147.75.109.163 port 33046 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:24.310923 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:24.314338 systemd-logind[1545]: New session 14 of user core.
Feb 9 14:07:24.315014 systemd[1]: Started session-14.scope.
Feb 9 14:07:24.400789 sshd[4687]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:24.402532 systemd[1]: sshd@31-86.109.11.101:22-147.75.109.163:33046.service: Deactivated successfully.
Feb 9 14:07:24.403318 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 14:07:24.403363 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit.
Feb 9 14:07:24.403951 systemd-logind[1545]: Removed session 14.
Feb 9 14:07:29.407661 systemd[1]: Started sshd@32-86.109.11.101:22-147.75.109.163:42994.service.
Feb 9 14:07:29.438317 sshd[4715]: Accepted publickey for core from 147.75.109.163 port 42994 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:29.439225 sshd[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:29.442349 systemd-logind[1545]: New session 15 of user core.
Feb 9 14:07:29.442938 systemd[1]: Started session-15.scope.
Feb 9 14:07:29.532891 sshd[4715]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:29.534765 systemd[1]: sshd@32-86.109.11.101:22-147.75.109.163:42994.service: Deactivated successfully.
Feb 9 14:07:29.535610 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit.
Feb 9 14:07:29.535639 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 14:07:29.536469 systemd-logind[1545]: Removed session 15.
Feb 9 14:07:34.539010 systemd[1]: Started sshd@33-86.109.11.101:22-147.75.109.163:44864.service.
Feb 9 14:07:34.569264 sshd[4742]: Accepted publickey for core from 147.75.109.163 port 44864 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:34.570195 sshd[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:34.573698 systemd-logind[1545]: New session 16 of user core.
Feb 9 14:07:34.574367 systemd[1]: Started session-16.scope.
Feb 9 14:07:34.664903 sshd[4742]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:34.666379 systemd[1]: sshd@33-86.109.11.101:22-147.75.109.163:44864.service: Deactivated successfully.
Feb 9 14:07:34.667049 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 14:07:34.667107 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit.
Feb 9 14:07:34.667642 systemd-logind[1545]: Removed session 16.
Feb 9 14:07:39.673101 systemd[1]: Started sshd@34-86.109.11.101:22-147.75.109.163:44874.service.
Feb 9 14:07:39.707609 sshd[4768]: Accepted publickey for core from 147.75.109.163 port 44874 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:39.708658 sshd[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:39.711749 systemd-logind[1545]: New session 17 of user core.
Feb 9 14:07:39.712414 systemd[1]: Started session-17.scope.
Feb 9 14:07:39.802464 sshd[4768]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:39.803886 systemd[1]: sshd@34-86.109.11.101:22-147.75.109.163:44874.service: Deactivated successfully.
Feb 9 14:07:39.804522 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit.
Feb 9 14:07:39.804553 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 14:07:39.805027 systemd-logind[1545]: Removed session 17.
Feb 9 14:07:44.805700 systemd[1]: Started sshd@35-86.109.11.101:22-147.75.109.163:57866.service.
Feb 9 14:07:44.837047 sshd[4796]: Accepted publickey for core from 147.75.109.163 port 57866 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:44.837854 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:44.840987 systemd-logind[1545]: New session 18 of user core.
Feb 9 14:07:44.841925 systemd[1]: Started session-18.scope.
Feb 9 14:07:44.931816 sshd[4796]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:44.933182 systemd[1]: sshd@35-86.109.11.101:22-147.75.109.163:57866.service: Deactivated successfully.
Feb 9 14:07:44.933790 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit.
Feb 9 14:07:44.933792 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 14:07:44.934452 systemd-logind[1545]: Removed session 18.
Feb 9 14:07:49.937747 systemd[1]: Started sshd@36-86.109.11.101:22-147.75.109.163:57874.service.
Feb 9 14:07:49.968676 sshd[4822]: Accepted publickey for core from 147.75.109.163 port 57874 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:49.969643 sshd[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:49.973125 systemd-logind[1545]: New session 19 of user core.
Feb 9 14:07:49.974209 systemd[1]: Started session-19.scope.
Feb 9 14:07:50.063483 sshd[4822]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:50.064911 systemd[1]: sshd@36-86.109.11.101:22-147.75.109.163:57874.service: Deactivated successfully.
Feb 9 14:07:50.065545 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit.
Feb 9 14:07:50.065577 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 14:07:50.066043 systemd-logind[1545]: Removed session 19.
Feb 9 14:07:51.079268 systemd[1]: Started sshd@37-86.109.11.101:22-165.227.228.212:59664.service.
Feb 9 14:07:51.877868 sshd[4845]: Invalid user renault from 165.227.228.212 port 59664
Feb 9 14:07:51.883774 sshd[4845]: pam_faillock(sshd:auth): User unknown
Feb 9 14:07:51.884755 sshd[4845]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:07:51.884868 sshd[4845]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.228.212
Feb 9 14:07:51.885830 sshd[4845]: pam_faillock(sshd:auth): User unknown
Feb 9 14:07:54.032044 sshd[4845]: Failed password for invalid user renault from 165.227.228.212 port 59664 ssh2
Feb 9 14:07:55.071091 systemd[1]: Started sshd@38-86.109.11.101:22-147.75.109.163:54506.service.
Feb 9 14:07:55.101562 sshd[4847]: Accepted publickey for core from 147.75.109.163 port 54506 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:07:55.102510 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:07:55.105838 systemd-logind[1545]: New session 20 of user core.
Feb 9 14:07:55.106913 systemd[1]: Started session-20.scope.
Feb 9 14:07:55.196471 sshd[4847]: pam_unix(sshd:session): session closed for user core
Feb 9 14:07:55.197749 systemd[1]: sshd@38-86.109.11.101:22-147.75.109.163:54506.service: Deactivated successfully.
Feb 9 14:07:55.198424 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 14:07:55.198436 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit.
Feb 9 14:07:55.199001 systemd-logind[1545]: Removed session 20.
Feb 9 14:07:55.650148 sshd[4845]: Received disconnect from 165.227.228.212 port 59664:11: Bye Bye [preauth]
Feb 9 14:07:55.650148 sshd[4845]: Disconnected from invalid user renault 165.227.228.212 port 59664 [preauth]
Feb 9 14:07:55.652634 systemd[1]: sshd@37-86.109.11.101:22-165.227.228.212:59664.service: Deactivated successfully.
Feb 9 14:08:00.203319 systemd[1]: Started sshd@39-86.109.11.101:22-147.75.109.163:54520.service.
Feb 9 14:08:00.233966 sshd[4877]: Accepted publickey for core from 147.75.109.163 port 54520 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:00.234901 sshd[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:00.238229 systemd-logind[1545]: New session 21 of user core.
Feb 9 14:08:00.239339 systemd[1]: Started session-21.scope.
Feb 9 14:08:00.326063 sshd[4877]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:00.327384 systemd[1]: sshd@39-86.109.11.101:22-147.75.109.163:54520.service: Deactivated successfully.
Feb 9 14:08:00.328051 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 14:08:00.328067 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit.
Feb 9 14:08:00.328705 systemd-logind[1545]: Removed session 21.
Feb 9 14:08:05.332843 systemd[1]: Started sshd@40-86.109.11.101:22-147.75.109.163:35904.service.
Feb 9 14:08:05.363504 sshd[4906]: Accepted publickey for core from 147.75.109.163 port 35904 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:05.364304 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:05.367712 systemd-logind[1545]: New session 22 of user core.
Feb 9 14:08:05.368571 systemd[1]: Started session-22.scope.
Feb 9 14:08:05.458984 sshd[4906]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:05.460439 systemd[1]: sshd@40-86.109.11.101:22-147.75.109.163:35904.service: Deactivated successfully.
Feb 9 14:08:05.461053 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit.
Feb 9 14:08:05.461064 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 14:08:05.461584 systemd-logind[1545]: Removed session 22.
Feb 9 14:08:09.122116 systemd[1]: Started sshd@41-86.109.11.101:22-170.64.194.223:37834.service.
Feb 9 14:08:09.956328 sshd[4933]: Invalid user pact from 170.64.194.223 port 37834
Feb 9 14:08:09.962239 sshd[4933]: pam_faillock(sshd:auth): User unknown
Feb 9 14:08:09.963199 sshd[4933]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:08:09.963285 sshd[4933]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.194.223
Feb 9 14:08:09.964204 sshd[4933]: pam_faillock(sshd:auth): User unknown
Feb 9 14:08:10.465746 systemd[1]: Started sshd@42-86.109.11.101:22-147.75.109.163:35910.service.
Feb 9 14:08:10.496504 sshd[4937]: Accepted publickey for core from 147.75.109.163 port 35910 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:10.497395 sshd[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:10.500592 systemd-logind[1545]: New session 23 of user core.
Feb 9 14:08:10.501664 systemd[1]: Started session-23.scope.
Feb 9 14:08:10.593809 sshd[4937]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:10.595433 systemd[1]: sshd@42-86.109.11.101:22-147.75.109.163:35910.service: Deactivated successfully.
Feb 9 14:08:10.596249 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit.
Feb 9 14:08:10.596299 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 14:08:10.596926 systemd-logind[1545]: Removed session 23.
Feb 9 14:08:11.579002 sshd[4933]: Failed password for invalid user pact from 170.64.194.223 port 37834 ssh2
Feb 9 14:08:12.077218 sshd[4933]: Received disconnect from 170.64.194.223 port 37834:11: Bye Bye [preauth]
Feb 9 14:08:12.077218 sshd[4933]: Disconnected from invalid user pact 170.64.194.223 port 37834 [preauth]
Feb 9 14:08:12.079705 systemd[1]: sshd@41-86.109.11.101:22-170.64.194.223:37834.service: Deactivated successfully.
Feb 9 14:08:15.601639 systemd[1]: Started sshd@43-86.109.11.101:22-147.75.109.163:59656.service.
Feb 9 14:08:15.632082 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 59656 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:15.632953 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:15.635923 systemd-logind[1545]: New session 24 of user core.
Feb 9 14:08:15.636892 systemd[1]: Started session-24.scope.
Feb 9 14:08:15.725592 sshd[4967]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:15.727033 systemd[1]: sshd@43-86.109.11.101:22-147.75.109.163:59656.service: Deactivated successfully.
Feb 9 14:08:15.727678 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit.
Feb 9 14:08:15.727708 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 14:08:15.728271 systemd-logind[1545]: Removed session 24.
Feb 9 14:08:20.731275 systemd[1]: Started sshd@44-86.109.11.101:22-147.75.109.163:59664.service.
Feb 9 14:08:20.762347 sshd[4994]: Accepted publickey for core from 147.75.109.163 port 59664 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:20.763318 sshd[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:20.766865 systemd-logind[1545]: New session 25 of user core.
Feb 9 14:08:20.767957 systemd[1]: Started session-25.scope.
Feb 9 14:08:20.858475 sshd[4994]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:20.859909 systemd[1]: sshd@44-86.109.11.101:22-147.75.109.163:59664.service: Deactivated successfully.
Feb 9 14:08:20.860541 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit.
Feb 9 14:08:20.860572 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 14:08:20.861143 systemd-logind[1545]: Removed session 25.
Feb 9 14:08:23.629105 systemd[1]: Started sshd@45-86.109.11.101:22-165.154.183.15:15424.service.
Feb 9 14:08:23.811614 sshd[5020]: Invalid user azikh from 165.154.183.15 port 15424
Feb 9 14:08:23.812941 sshd[5020]: pam_faillock(sshd:auth): User unknown
Feb 9 14:08:23.813191 sshd[5020]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:08:23.813210 sshd[5020]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.183.15
Feb 9 14:08:23.813399 sshd[5020]: pam_faillock(sshd:auth): User unknown
Feb 9 14:08:25.866270 systemd[1]: Started sshd@46-86.109.11.101:22-147.75.109.163:53308.service.
Feb 9 14:08:25.896706 sshd[5024]: Accepted publickey for core from 147.75.109.163 port 53308 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:25.897528 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:25.900541 systemd-logind[1545]: New session 26 of user core.
Feb 9 14:08:25.901167 systemd[1]: Started session-26.scope.
Feb 9 14:08:25.995662 sshd[5024]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:25.997734 systemd[1]: sshd@46-86.109.11.101:22-147.75.109.163:53308.service: Deactivated successfully.
Feb 9 14:08:25.998674 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit.
Feb 9 14:08:25.998676 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 14:08:25.999563 systemd-logind[1545]: Removed session 26.
Feb 9 14:08:26.014975 sshd[5020]: Failed password for invalid user azikh from 165.154.183.15 port 15424 ssh2
Feb 9 14:08:26.588713 sshd[5020]: Received disconnect from 165.154.183.15 port 15424:11: Bye Bye [preauth]
Feb 9 14:08:26.588713 sshd[5020]: Disconnected from invalid user azikh 165.154.183.15 port 15424 [preauth]
Feb 9 14:08:26.591270 systemd[1]: sshd@45-86.109.11.101:22-165.154.183.15:15424.service: Deactivated successfully.
Feb 9 14:08:31.002679 systemd[1]: Started sshd@47-86.109.11.101:22-147.75.109.163:53316.service.
Feb 9 14:08:31.033205 sshd[5054]: Accepted publickey for core from 147.75.109.163 port 53316 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:31.034184 sshd[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:31.037756 systemd-logind[1545]: New session 27 of user core.
Feb 9 14:08:31.038484 systemd[1]: Started session-27.scope.
Feb 9 14:08:31.124993 sshd[5054]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:31.126501 systemd[1]: sshd@47-86.109.11.101:22-147.75.109.163:53316.service: Deactivated successfully.
Feb 9 14:08:31.127119 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 14:08:31.127164 systemd-logind[1545]: Session 27 logged out. Waiting for processes to exit.
Feb 9 14:08:31.127684 systemd-logind[1545]: Removed session 27.
Feb 9 14:08:32.206731 systemd[1]: Started sshd@48-86.109.11.101:22-101.42.135.203:49714.service.
Feb 9 14:08:33.919301 sshd[5080]: Invalid user arvin from 101.42.135.203 port 49714
Feb 9 14:08:33.925427 sshd[5080]: pam_faillock(sshd:auth): User unknown
Feb 9 14:08:33.926570 sshd[5080]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:08:33.926657 sshd[5080]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=101.42.135.203
Feb 9 14:08:33.927654 sshd[5080]: pam_faillock(sshd:auth): User unknown
Feb 9 14:08:35.502562 sshd[5080]: Failed password for invalid user arvin from 101.42.135.203 port 49714 ssh2
Feb 9 14:08:36.132497 systemd[1]: Started sshd@49-86.109.11.101:22-147.75.109.163:47530.service.
Feb 9 14:08:36.149404 sshd[5080]: Received disconnect from 101.42.135.203 port 49714:11: Bye Bye [preauth]
Feb 9 14:08:36.149404 sshd[5080]: Disconnected from invalid user arvin 101.42.135.203 port 49714 [preauth]
Feb 9 14:08:36.150010 systemd[1]: sshd@48-86.109.11.101:22-101.42.135.203:49714.service: Deactivated successfully.
Feb 9 14:08:36.163158 sshd[5082]: Accepted publickey for core from 147.75.109.163 port 47530 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:36.164193 sshd[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:36.167804 systemd-logind[1545]: New session 28 of user core.
Feb 9 14:08:36.168647 systemd[1]: Started session-28.scope.
Feb 9 14:08:36.259683 sshd[5082]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:36.261191 systemd[1]: sshd@49-86.109.11.101:22-147.75.109.163:47530.service: Deactivated successfully.
Feb 9 14:08:36.261735 systemd-logind[1545]: Session 28 logged out. Waiting for processes to exit.
Feb 9 14:08:36.261746 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 14:08:36.262388 systemd-logind[1545]: Removed session 28.
Feb 9 14:08:41.266509 systemd[1]: Started sshd@50-86.109.11.101:22-147.75.109.163:47534.service.
Feb 9 14:08:41.296933 sshd[5112]: Accepted publickey for core from 147.75.109.163 port 47534 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:41.297869 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:41.301079 systemd-logind[1545]: New session 29 of user core.
Feb 9 14:08:41.301812 systemd[1]: Started session-29.scope.
Feb 9 14:08:41.390102 sshd[5112]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:41.391524 systemd[1]: sshd@50-86.109.11.101:22-147.75.109.163:47534.service: Deactivated successfully.
Feb 9 14:08:41.392158 systemd-logind[1545]: Session 29 logged out. Waiting for processes to exit.
Feb 9 14:08:41.392169 systemd[1]: session-29.scope: Deactivated successfully.
Feb 9 14:08:41.392664 systemd-logind[1545]: Removed session 29.
Feb 9 14:08:41.650415 systemd[1]: Started sshd@51-86.109.11.101:22-165.227.228.212:49852.service.
Feb 9 14:08:42.479430 sshd[5137]: Invalid user twofan from 165.227.228.212 port 49852
Feb 9 14:08:42.485368 sshd[5137]: pam_faillock(sshd:auth): User unknown
Feb 9 14:08:42.486360 sshd[5137]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:08:42.486448 sshd[5137]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.228.212
Feb 9 14:08:42.487459 sshd[5137]: pam_faillock(sshd:auth): User unknown
Feb 9 14:08:44.298206 sshd[5137]: Failed password for invalid user twofan from 165.227.228.212 port 49852 ssh2
Feb 9 14:08:45.779488 sshd[5137]: Received disconnect from 165.227.228.212 port 49852:11: Bye Bye [preauth]
Feb 9 14:08:45.779488 sshd[5137]: Disconnected from invalid user twofan 165.227.228.212 port 49852 [preauth]
Feb 9 14:08:45.782136 systemd[1]: sshd@51-86.109.11.101:22-165.227.228.212:49852.service: Deactivated successfully.
Feb 9 14:08:46.396613 systemd[1]: Started sshd@52-86.109.11.101:22-147.75.109.163:38128.service.
Feb 9 14:08:46.427040 sshd[5141]: Accepted publickey for core from 147.75.109.163 port 38128 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:46.428000 sshd[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:46.431626 systemd-logind[1545]: New session 30 of user core.
Feb 9 14:08:46.432364 systemd[1]: Started session-30.scope.
Feb 9 14:08:46.520521 sshd[5141]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:46.522137 systemd[1]: sshd@52-86.109.11.101:22-147.75.109.163:38128.service: Deactivated successfully.
Feb 9 14:08:46.522780 systemd-logind[1545]: Session 30 logged out. Waiting for processes to exit.
Feb 9 14:08:46.522802 systemd[1]: session-30.scope: Deactivated successfully.
Feb 9 14:08:46.523493 systemd-logind[1545]: Removed session 30.
Feb 9 14:08:51.527373 systemd[1]: Started sshd@53-86.109.11.101:22-147.75.109.163:38140.service.
Feb 9 14:08:51.558593 sshd[5167]: Accepted publickey for core from 147.75.109.163 port 38140 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:51.561878 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:51.572664 systemd-logind[1545]: New session 31 of user core.
Feb 9 14:08:51.575773 systemd[1]: Started session-31.scope.
Feb 9 14:08:51.692397 sshd[5167]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:51.697821 systemd[1]: sshd@53-86.109.11.101:22-147.75.109.163:38140.service: Deactivated successfully.
Feb 9 14:08:51.700349 systemd-logind[1545]: Session 31 logged out. Waiting for processes to exit.
Feb 9 14:08:51.700460 systemd[1]: session-31.scope: Deactivated successfully.
Feb 9 14:08:51.702880 systemd-logind[1545]: Removed session 31.
Feb 9 14:08:56.698261 systemd[1]: Started sshd@54-86.109.11.101:22-147.75.109.163:41222.service.
Feb 9 14:08:56.728695 sshd[5194]: Accepted publickey for core from 147.75.109.163 port 41222 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:08:56.729636 sshd[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:08:56.732776 systemd-logind[1545]: New session 32 of user core.
Feb 9 14:08:56.733732 systemd[1]: Started session-32.scope.
Feb 9 14:08:56.824345 sshd[5194]: pam_unix(sshd:session): session closed for user core
Feb 9 14:08:56.825792 systemd[1]: sshd@54-86.109.11.101:22-147.75.109.163:41222.service: Deactivated successfully.
Feb 9 14:08:56.826484 systemd-logind[1545]: Session 32 logged out. Waiting for processes to exit.
Feb 9 14:08:56.826488 systemd[1]: session-32.scope: Deactivated successfully.
Feb 9 14:08:56.827013 systemd-logind[1545]: Removed session 32.
Feb 9 14:09:01.827369 systemd[1]: Started sshd@55-86.109.11.101:22-147.75.109.163:41232.service.
Feb 9 14:09:01.864274 sshd[5220]: Accepted publickey for core from 147.75.109.163 port 41232 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:09:01.867482 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:09:01.878044 systemd-logind[1545]: New session 33 of user core.
Feb 9 14:09:01.880423 systemd[1]: Started session-33.scope.
Feb 9 14:09:01.987426 sshd[5220]: pam_unix(sshd:session): session closed for user core
Feb 9 14:09:01.988938 systemd[1]: sshd@55-86.109.11.101:22-147.75.109.163:41232.service: Deactivated successfully.
Feb 9 14:09:01.989616 systemd-logind[1545]: Session 33 logged out. Waiting for processes to exit.
Feb 9 14:09:01.989629 systemd[1]: session-33.scope: Deactivated successfully.
Feb 9 14:09:01.990321 systemd-logind[1545]: Removed session 33.
Feb 9 14:09:06.993378 systemd[1]: Started sshd@56-86.109.11.101:22-147.75.109.163:59312.service.
Feb 9 14:09:07.023785 sshd[5246]: Accepted publickey for core from 147.75.109.163 port 59312 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:09:07.024686 sshd[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:09:07.027998 systemd-logind[1545]: New session 34 of user core.
Feb 9 14:09:07.028711 systemd[1]: Started session-34.scope.
Feb 9 14:09:07.126902 sshd[5246]: pam_unix(sshd:session): session closed for user core
Feb 9 14:09:07.132379 systemd[1]: sshd@56-86.109.11.101:22-147.75.109.163:59312.service: Deactivated successfully.
Feb 9 14:09:07.134840 systemd-logind[1545]: Session 34 logged out. Waiting for processes to exit.
Feb 9 14:09:07.134940 systemd[1]: session-34.scope: Deactivated successfully.
Feb 9 14:09:07.137425 systemd-logind[1545]: Removed session 34.
Feb 9 14:09:12.132021 systemd[1]: Started sshd@57-86.109.11.101:22-147.75.109.163:59324.service.
Feb 9 14:09:12.162724 sshd[5273]: Accepted publickey for core from 147.75.109.163 port 59324 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:09:12.163562 sshd[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:09:12.167065 systemd-logind[1545]: New session 35 of user core.
Feb 9 14:09:12.167753 systemd[1]: Started session-35.scope.
Feb 9 14:09:12.255239 sshd[5273]: pam_unix(sshd:session): session closed for user core
Feb 9 14:09:12.256736 systemd[1]: sshd@57-86.109.11.101:22-147.75.109.163:59324.service: Deactivated successfully.
Feb 9 14:09:12.257392 systemd[1]: session-35.scope: Deactivated successfully.
Feb 9 14:09:12.257440 systemd-logind[1545]: Session 35 logged out. Waiting for processes to exit.
Feb 9 14:09:12.257876 systemd-logind[1545]: Removed session 35.
Feb 9 14:09:14.168655 systemd[1]: Started sshd@58-86.109.11.101:22-170.64.194.223:35184.service.
Feb 9 14:09:14.986600 sshd[5299]: Invalid user azikh from 170.64.194.223 port 35184
Feb 9 14:09:14.992702 sshd[5299]: pam_faillock(sshd:auth): User unknown
Feb 9 14:09:14.993685 sshd[5299]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:09:14.993776 sshd[5299]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.194.223
Feb 9 14:09:14.994710 sshd[5299]: pam_faillock(sshd:auth): User unknown
Feb 9 14:09:16.865409 sshd[5299]: Failed password for invalid user azikh from 170.64.194.223 port 35184 ssh2
Feb 9 14:09:17.261958 systemd[1]: Started sshd@59-86.109.11.101:22-147.75.109.163:49998.service.
Feb 9 14:09:17.292405 sshd[5301]: Accepted publickey for core from 147.75.109.163 port 49998 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:09:17.293347 sshd[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:09:17.296768 systemd-logind[1545]: New session 36 of user core.
Feb 9 14:09:17.297504 systemd[1]: Started session-36.scope.
Feb 9 14:09:17.384448 sshd[5301]: pam_unix(sshd:session): session closed for user core
Feb 9 14:09:17.385903 systemd[1]: sshd@59-86.109.11.101:22-147.75.109.163:49998.service: Deactivated successfully.
Feb 9 14:09:17.386532 systemd[1]: session-36.scope: Deactivated successfully.
Feb 9 14:09:17.386562 systemd-logind[1545]: Session 36 logged out. Waiting for processes to exit.
Feb 9 14:09:17.387165 systemd-logind[1545]: Removed session 36.
Feb 9 14:09:17.889846 sshd[5299]: Received disconnect from 170.64.194.223 port 35184:11: Bye Bye [preauth]
Feb 9 14:09:17.889846 sshd[5299]: Disconnected from invalid user azikh 170.64.194.223 port 35184 [preauth]
Feb 9 14:09:17.892387 systemd[1]: sshd@58-86.109.11.101:22-170.64.194.223:35184.service: Deactivated successfully.
Feb 9 14:09:22.390712 systemd[1]: Started sshd@60-86.109.11.101:22-147.75.109.163:50012.service.
Feb 9 14:09:22.421276 sshd[5329]: Accepted publickey for core from 147.75.109.163 port 50012 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:09:22.422150 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:09:22.425491 systemd-logind[1545]: New session 37 of user core.
Feb 9 14:09:22.426159 systemd[1]: Started session-37.scope.
Feb 9 14:09:22.512810 sshd[5329]: pam_unix(sshd:session): session closed for user core
Feb 9 14:09:22.514118 systemd[1]: sshd@60-86.109.11.101:22-147.75.109.163:50012.service: Deactivated successfully.
Feb 9 14:09:22.514747 systemd-logind[1545]: Session 37 logged out. Waiting for processes to exit.
Feb 9 14:09:22.514755 systemd[1]: session-37.scope: Deactivated successfully.
Feb 9 14:09:22.515352 systemd-logind[1545]: Removed session 37.
Feb 9 14:09:27.519623 systemd[1]: Started sshd@61-86.109.11.101:22-147.75.109.163:51732.service.
Feb 9 14:09:27.550530 sshd[5357]: Accepted publickey for core from 147.75.109.163 port 51732 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc
Feb 9 14:09:27.551463 sshd[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 14:09:27.554721 systemd-logind[1545]: New session 38 of user core.
Feb 9 14:09:27.555422 systemd[1]: Started session-38.scope.
Feb 9 14:09:27.643139 sshd[5357]: pam_unix(sshd:session): session closed for user core
Feb 9 14:09:27.644623 systemd[1]: sshd@61-86.109.11.101:22-147.75.109.163:51732.service: Deactivated successfully.
Feb 9 14:09:27.645296 systemd[1]: session-38.scope: Deactivated successfully.
Feb 9 14:09:27.645340 systemd-logind[1545]: Session 38 logged out. Waiting for processes to exit.
Feb 9 14:09:27.645790 systemd-logind[1545]: Removed session 38.
Feb 9 14:09:29.027653 systemd[1]: Started sshd@62-86.109.11.101:22-165.154.183.15:34238.service.
Feb 9 14:09:29.199671 sshd[5382]: Invalid user sogand from 165.154.183.15 port 34238 Feb 9 14:09:29.205762 sshd[5382]: pam_faillock(sshd:auth): User unknown Feb 9 14:09:29.206753 sshd[5382]: pam_unix(sshd:auth): check pass; user unknown Feb 9 14:09:29.206865 sshd[5382]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.154.183.15 Feb 9 14:09:29.207761 sshd[5382]: pam_faillock(sshd:auth): User unknown Feb 9 14:09:30.478326 systemd[1]: Started sshd@63-86.109.11.101:22-165.227.228.212:40036.service. Feb 9 14:09:31.137982 sshd[5382]: Failed password for invalid user sogand from 165.154.183.15 port 34238 ssh2 Feb 9 14:09:31.312629 sshd[5384]: Invalid user habila from 165.227.228.212 port 40036 Feb 9 14:09:31.318877 sshd[5384]: pam_faillock(sshd:auth): User unknown Feb 9 14:09:31.319660 sshd[5384]: pam_unix(sshd:auth): check pass; user unknown Feb 9 14:09:31.319698 sshd[5384]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.228.212 Feb 9 14:09:31.319994 sshd[5384]: pam_faillock(sshd:auth): User unknown Feb 9 14:09:32.172838 sshd[5382]: Received disconnect from 165.154.183.15 port 34238:11: Bye Bye [preauth] Feb 9 14:09:32.172838 sshd[5382]: Disconnected from invalid user sogand 165.154.183.15 port 34238 [preauth] Feb 9 14:09:32.175337 systemd[1]: sshd@62-86.109.11.101:22-165.154.183.15:34238.service: Deactivated successfully. Feb 9 14:09:32.650000 systemd[1]: Started sshd@64-86.109.11.101:22-147.75.109.163:51742.service. Feb 9 14:09:32.680692 sshd[5388]: Accepted publickey for core from 147.75.109.163 port 51742 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:09:32.681617 sshd[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:09:32.685004 systemd-logind[1545]: New session 39 of user core. Feb 9 14:09:32.685721 systemd[1]: Started session-39.scope. 
Feb 9 14:09:32.815034 sshd[5388]: pam_unix(sshd:session): session closed for user core Feb 9 14:09:32.816668 systemd[1]: sshd@64-86.109.11.101:22-147.75.109.163:51742.service: Deactivated successfully. Feb 9 14:09:32.817372 systemd[1]: session-39.scope: Deactivated successfully. Feb 9 14:09:32.817413 systemd-logind[1545]: Session 39 logged out. Waiting for processes to exit. Feb 9 14:09:32.817971 systemd-logind[1545]: Removed session 39. Feb 9 14:09:33.526045 sshd[5384]: Failed password for invalid user habila from 165.227.228.212 port 40036 ssh2 Feb 9 14:09:34.708318 sshd[5384]: Received disconnect from 165.227.228.212 port 40036:11: Bye Bye [preauth] Feb 9 14:09:34.708318 sshd[5384]: Disconnected from invalid user habila 165.227.228.212 port 40036 [preauth] Feb 9 14:09:34.710813 systemd[1]: sshd@63-86.109.11.101:22-165.227.228.212:40036.service: Deactivated successfully. Feb 9 14:09:37.821954 systemd[1]: Started sshd@65-86.109.11.101:22-147.75.109.163:33912.service. Feb 9 14:09:37.852339 sshd[5416]: Accepted publickey for core from 147.75.109.163 port 33912 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:09:37.853207 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:09:37.856282 systemd-logind[1545]: New session 40 of user core. Feb 9 14:09:37.856895 systemd[1]: Started session-40.scope. Feb 9 14:09:37.946510 sshd[5416]: pam_unix(sshd:session): session closed for user core Feb 9 14:09:37.948016 systemd[1]: sshd@65-86.109.11.101:22-147.75.109.163:33912.service: Deactivated successfully. Feb 9 14:09:37.948606 systemd[1]: session-40.scope: Deactivated successfully. Feb 9 14:09:37.948639 systemd-logind[1545]: Session 40 logged out. Waiting for processes to exit. Feb 9 14:09:37.949304 systemd-logind[1545]: Removed session 40. Feb 9 14:09:41.087822 systemd[1]: Started sshd@66-86.109.11.101:22-101.42.135.203:44592.service. 
Feb 9 14:09:42.954203 systemd[1]: Started sshd@67-86.109.11.101:22-147.75.109.163:33924.service. Feb 9 14:09:42.988579 sshd[5446]: Accepted publickey for core from 147.75.109.163 port 33924 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:09:42.989512 sshd[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:09:42.992671 systemd-logind[1545]: New session 41 of user core. Feb 9 14:09:42.993313 systemd[1]: Started session-41.scope. Feb 9 14:09:43.082770 sshd[5446]: pam_unix(sshd:session): session closed for user core Feb 9 14:09:43.084281 systemd[1]: sshd@67-86.109.11.101:22-147.75.109.163:33924.service: Deactivated successfully. Feb 9 14:09:43.084846 systemd[1]: session-41.scope: Deactivated successfully. Feb 9 14:09:43.084873 systemd-logind[1545]: Session 41 logged out. Waiting for processes to exit. Feb 9 14:09:43.085523 systemd-logind[1545]: Removed session 41. Feb 9 14:09:48.089644 systemd[1]: Started sshd@68-86.109.11.101:22-147.75.109.163:58248.service. Feb 9 14:09:48.120050 sshd[5474]: Accepted publickey for core from 147.75.109.163 port 58248 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:09:48.120963 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:09:48.124255 systemd-logind[1545]: New session 42 of user core. Feb 9 14:09:48.124914 systemd[1]: Started session-42.scope. Feb 9 14:09:48.216132 sshd[5474]: pam_unix(sshd:session): session closed for user core Feb 9 14:09:48.217825 systemd[1]: Started sshd@69-86.109.11.101:22-147.75.109.163:58250.service. Feb 9 14:09:48.218181 systemd[1]: sshd@68-86.109.11.101:22-147.75.109.163:58248.service: Deactivated successfully. Feb 9 14:09:48.218747 systemd-logind[1545]: Session 42 logged out. Waiting for processes to exit. Feb 9 14:09:48.218791 systemd[1]: session-42.scope: Deactivated successfully. Feb 9 14:09:48.219414 systemd-logind[1545]: Removed session 42. 
Feb 9 14:09:48.249362 sshd[5498]: Accepted publickey for core from 147.75.109.163 port 58250 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:09:48.250264 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:09:48.253427 systemd-logind[1545]: New session 43 of user core. Feb 9 14:09:48.254162 systemd[1]: Started session-43.scope. Feb 9 14:09:49.638229 sshd[5498]: pam_unix(sshd:session): session closed for user core Feb 9 14:09:49.643166 systemd[1]: Started sshd@70-86.109.11.101:22-147.75.109.163:58256.service. Feb 9 14:09:49.643511 systemd[1]: sshd@69-86.109.11.101:22-147.75.109.163:58250.service: Deactivated successfully. Feb 9 14:09:49.644198 systemd[1]: session-43.scope: Deactivated successfully. Feb 9 14:09:49.644205 systemd-logind[1545]: Session 43 logged out. Waiting for processes to exit. Feb 9 14:09:49.644715 systemd-logind[1545]: Removed session 43. Feb 9 14:09:49.674362 sshd[5523]: Accepted publickey for core from 147.75.109.163 port 58256 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:09:49.677313 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:09:49.687741 systemd-logind[1545]: New session 44 of user core. Feb 9 14:09:49.690332 systemd[1]: Started session-44.scope. Feb 9 14:09:50.565455 sshd[5523]: pam_unix(sshd:session): session closed for user core Feb 9 14:09:50.569100 systemd[1]: Started sshd@71-86.109.11.101:22-147.75.109.163:58264.service. Feb 9 14:09:50.570245 systemd[1]: sshd@70-86.109.11.101:22-147.75.109.163:58256.service: Deactivated successfully. Feb 9 14:09:50.571928 systemd-logind[1545]: Session 44 logged out. Waiting for processes to exit. Feb 9 14:09:50.572009 systemd[1]: session-44.scope: Deactivated successfully. Feb 9 14:09:50.573464 systemd-logind[1545]: Removed session 44. 
Feb 9 14:09:50.608745 sshd[5565]: Accepted publickey for core from 147.75.109.163 port 58264 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:09:50.609934 sshd[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:09:50.613198 systemd-logind[1545]: New session 45 of user core. Feb 9 14:09:50.613918 systemd[1]: Started session-45.scope. Feb 9 14:09:50.813222 sshd[5565]: pam_unix(sshd:session): session closed for user core Feb 9 14:09:50.815032 systemd[1]: Started sshd@72-86.109.11.101:22-147.75.109.163:58266.service. Feb 9 14:09:50.815300 systemd[1]: sshd@71-86.109.11.101:22-147.75.109.163:58264.service: Deactivated successfully. Feb 9 14:09:50.815835 systemd-logind[1545]: Session 45 logged out. Waiting for processes to exit. Feb 9 14:09:50.815870 systemd[1]: session-45.scope: Deactivated successfully. Feb 9 14:09:50.816412 systemd-logind[1545]: Removed session 45. Feb 9 14:09:50.845045 sshd[5627]: Accepted publickey for core from 147.75.109.163 port 58266 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:09:50.845918 sshd[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:09:50.848630 systemd-logind[1545]: New session 46 of user core. Feb 9 14:09:50.849231 systemd[1]: Started session-46.scope. Feb 9 14:09:50.975766 sshd[5627]: pam_unix(sshd:session): session closed for user core Feb 9 14:09:50.977568 systemd[1]: sshd@72-86.109.11.101:22-147.75.109.163:58266.service: Deactivated successfully. Feb 9 14:09:50.978374 systemd[1]: session-46.scope: Deactivated successfully. Feb 9 14:09:50.978421 systemd-logind[1545]: Session 46 logged out. Waiting for processes to exit. Feb 9 14:09:50.979106 systemd-logind[1545]: Removed session 46. Feb 9 14:09:55.981712 systemd[1]: Started sshd@73-86.109.11.101:22-147.75.109.163:40888.service. 
Feb 9 14:09:56.012565 sshd[5681]: Accepted publickey for core from 147.75.109.163 port 40888 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:09:56.013486 sshd[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:09:56.016635 systemd-logind[1545]: New session 47 of user core. Feb 9 14:09:56.017271 systemd[1]: Started session-47.scope. Feb 9 14:09:56.142960 sshd[5681]: pam_unix(sshd:session): session closed for user core Feb 9 14:09:56.144430 systemd[1]: sshd@73-86.109.11.101:22-147.75.109.163:40888.service: Deactivated successfully. Feb 9 14:09:56.145064 systemd[1]: session-47.scope: Deactivated successfully. Feb 9 14:09:56.145126 systemd-logind[1545]: Session 47 logged out. Waiting for processes to exit. Feb 9 14:09:56.145652 systemd-logind[1545]: Removed session 47. Feb 9 14:10:01.150721 systemd[1]: Started sshd@74-86.109.11.101:22-147.75.109.163:40900.service. Feb 9 14:10:01.185199 sshd[5707]: Accepted publickey for core from 147.75.109.163 port 40900 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:10:01.186013 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:10:01.189209 systemd-logind[1545]: New session 48 of user core. Feb 9 14:10:01.189789 systemd[1]: Started session-48.scope. Feb 9 14:10:01.275303 sshd[5707]: pam_unix(sshd:session): session closed for user core Feb 9 14:10:01.276871 systemd[1]: sshd@74-86.109.11.101:22-147.75.109.163:40900.service: Deactivated successfully. Feb 9 14:10:01.277491 systemd[1]: session-48.scope: Deactivated successfully. Feb 9 14:10:01.277519 systemd-logind[1545]: Session 48 logged out. Waiting for processes to exit. Feb 9 14:10:01.278063 systemd-logind[1545]: Removed session 48. Feb 9 14:10:06.282475 systemd[1]: Started sshd@75-86.109.11.101:22-147.75.109.163:53656.service. 
Feb 9 14:10:06.313283 sshd[5734]: Accepted publickey for core from 147.75.109.163 port 53656 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:10:06.314173 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:10:06.317581 systemd-logind[1545]: New session 49 of user core. Feb 9 14:10:06.318257 systemd[1]: Started session-49.scope. Feb 9 14:10:06.438123 sshd[5734]: pam_unix(sshd:session): session closed for user core Feb 9 14:10:06.439771 systemd[1]: Started sshd@76-86.109.11.101:22-147.75.109.163:53666.service. Feb 9 14:10:06.440106 systemd[1]: sshd@75-86.109.11.101:22-147.75.109.163:53656.service: Deactivated successfully. Feb 9 14:10:06.440675 systemd-logind[1545]: Session 49 logged out. Waiting for processes to exit. Feb 9 14:10:06.440727 systemd[1]: session-49.scope: Deactivated successfully. Feb 9 14:10:06.441182 systemd-logind[1545]: Removed session 49. Feb 9 14:10:06.470696 sshd[5761]: Accepted publickey for core from 147.75.109.163 port 53666 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:10:06.471394 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:10:06.473833 systemd-logind[1545]: New session 50 of user core. Feb 9 14:10:06.474318 systemd[1]: Started session-50.scope. 
Feb 9 14:10:07.833840 env[1559]: time="2024-02-09T14:10:07.833814664Z" level=info msg="StopContainer for \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\" with timeout 30 (s)" Feb 9 14:10:07.834073 env[1559]: time="2024-02-09T14:10:07.834031664Z" level=info msg="Stop container \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\" with signal terminated" Feb 9 14:10:07.857192 env[1559]: time="2024-02-09T14:10:07.857159576Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 14:10:07.860128 env[1559]: time="2024-02-09T14:10:07.860112487Z" level=info msg="StopContainer for \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\" with timeout 1 (s)" Feb 9 14:10:07.860231 env[1559]: time="2024-02-09T14:10:07.860219872Z" level=info msg="Stop container \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\" with signal terminated" Feb 9 14:10:07.873242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1-rootfs.mount: Deactivated successfully. 
Feb 9 14:10:07.875252 systemd-networkd[1415]: lxc_health: Link DOWN Feb 9 14:10:07.875257 systemd-networkd[1415]: lxc_health: Lost carrier Feb 9 14:10:07.882575 env[1559]: time="2024-02-09T14:10:07.882547568Z" level=info msg="shim disconnected" id=b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1 Feb 9 14:10:07.882651 env[1559]: time="2024-02-09T14:10:07.882576713Z" level=warning msg="cleaning up after shim disconnected" id=b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1 namespace=k8s.io Feb 9 14:10:07.882651 env[1559]: time="2024-02-09T14:10:07.882586069Z" level=info msg="cleaning up dead shim" Feb 9 14:10:07.887243 env[1559]: time="2024-02-09T14:10:07.887192905Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5827 runtime=io.containerd.runc.v2\n" Feb 9 14:10:07.887895 env[1559]: time="2024-02-09T14:10:07.887848176Z" level=info msg="StopContainer for \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\" returns successfully" Feb 9 14:10:07.888298 env[1559]: time="2024-02-09T14:10:07.888247471Z" level=info msg="StopPodSandbox for \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\"" Feb 9 14:10:07.888298 env[1559]: time="2024-02-09T14:10:07.888286294Z" level=info msg="Container to stop \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 14:10:07.889744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f-shm.mount: Deactivated successfully. Feb 9 14:10:07.914905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f-rootfs.mount: Deactivated successfully. 
Feb 9 14:10:07.915147 env[1559]: time="2024-02-09T14:10:07.915098318Z" level=info msg="shim disconnected" id=12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f Feb 9 14:10:07.915234 env[1559]: time="2024-02-09T14:10:07.915153635Z" level=warning msg="cleaning up after shim disconnected" id=12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f namespace=k8s.io Feb 9 14:10:07.915234 env[1559]: time="2024-02-09T14:10:07.915169795Z" level=info msg="cleaning up dead shim" Feb 9 14:10:07.921426 env[1559]: time="2024-02-09T14:10:07.921396807Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5863 runtime=io.containerd.runc.v2\n" Feb 9 14:10:07.921690 env[1559]: time="2024-02-09T14:10:07.921646596Z" level=info msg="TearDown network for sandbox \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" successfully" Feb 9 14:10:07.921690 env[1559]: time="2024-02-09T14:10:07.921667662Z" level=info msg="StopPodSandbox for \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" returns successfully" Feb 9 14:10:07.938295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718-rootfs.mount: Deactivated successfully. 
Feb 9 14:10:07.938474 env[1559]: time="2024-02-09T14:10:07.938437572Z" level=info msg="shim disconnected" id=16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718 Feb 9 14:10:07.938544 env[1559]: time="2024-02-09T14:10:07.938474955Z" level=warning msg="cleaning up after shim disconnected" id=16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718 namespace=k8s.io Feb 9 14:10:07.938544 env[1559]: time="2024-02-09T14:10:07.938485059Z" level=info msg="cleaning up dead shim" Feb 9 14:10:07.944672 env[1559]: time="2024-02-09T14:10:07.944619606Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5890 runtime=io.containerd.runc.v2\n" Feb 9 14:10:07.945650 env[1559]: time="2024-02-09T14:10:07.945600557Z" level=info msg="StopContainer for \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\" returns successfully" Feb 9 14:10:07.946044 env[1559]: time="2024-02-09T14:10:07.945996538Z" level=info msg="StopPodSandbox for \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\"" Feb 9 14:10:07.946112 env[1559]: time="2024-02-09T14:10:07.946048623Z" level=info msg="Container to stop \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 14:10:07.946112 env[1559]: time="2024-02-09T14:10:07.946063722Z" level=info msg="Container to stop \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 14:10:07.946112 env[1559]: time="2024-02-09T14:10:07.946075480Z" level=info msg="Container to stop \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 14:10:07.946112 env[1559]: time="2024-02-09T14:10:07.946085887Z" level=info msg="Container to stop 
\"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 14:10:07.946112 env[1559]: time="2024-02-09T14:10:07.946096203Z" level=info msg="Container to stop \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 14:10:07.963684 env[1559]: time="2024-02-09T14:10:07.963629883Z" level=info msg="shim disconnected" id=94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1 Feb 9 14:10:07.963886 env[1559]: time="2024-02-09T14:10:07.963686632Z" level=warning msg="cleaning up after shim disconnected" id=94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1 namespace=k8s.io Feb 9 14:10:07.963886 env[1559]: time="2024-02-09T14:10:07.963703107Z" level=info msg="cleaning up dead shim" Feb 9 14:10:07.969802 env[1559]: time="2024-02-09T14:10:07.969741169Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5922 runtime=io.containerd.runc.v2\n" Feb 9 14:10:07.970035 env[1559]: time="2024-02-09T14:10:07.969983167Z" level=info msg="TearDown network for sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" successfully" Feb 9 14:10:07.970035 env[1559]: time="2024-02-09T14:10:07.970004514Z" level=info msg="StopPodSandbox for \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" returns successfully" Feb 9 14:10:08.016371 kubelet[2690]: I0209 14:10:08.016268 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7092c37-d7c1-458d-8dfb-a389a62463af-cilium-config-path\") pod \"b7092c37-d7c1-458d-8dfb-a389a62463af\" (UID: \"b7092c37-d7c1-458d-8dfb-a389a62463af\") " Feb 9 14:10:08.016371 kubelet[2690]: I0209 14:10:08.016389 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fzdd4\" (UniqueName: \"kubernetes.io/projected/b7092c37-d7c1-458d-8dfb-a389a62463af-kube-api-access-fzdd4\") pod \"b7092c37-d7c1-458d-8dfb-a389a62463af\" (UID: \"b7092c37-d7c1-458d-8dfb-a389a62463af\") " Feb 9 14:10:08.017337 kubelet[2690]: W0209 14:10:08.016677 2690 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b7092c37-d7c1-458d-8dfb-a389a62463af/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 14:10:08.021035 kubelet[2690]: I0209 14:10:08.020939 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7092c37-d7c1-458d-8dfb-a389a62463af-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b7092c37-d7c1-458d-8dfb-a389a62463af" (UID: "b7092c37-d7c1-458d-8dfb-a389a62463af"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 14:10:08.022242 kubelet[2690]: I0209 14:10:08.022143 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7092c37-d7c1-458d-8dfb-a389a62463af-kube-api-access-fzdd4" (OuterVolumeSpecName: "kube-api-access-fzdd4") pod "b7092c37-d7c1-458d-8dfb-a389a62463af" (UID: "b7092c37-d7c1-458d-8dfb-a389a62463af"). InnerVolumeSpecName "kube-api-access-fzdd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 14:10:08.117071 kubelet[2690]: I0209 14:10:08.116859 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cilium-cgroup\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.117071 kubelet[2690]: I0209 14:10:08.116988 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgnkf\" (UniqueName: \"kubernetes.io/projected/6209d075-666f-40ed-aab4-d0989090d806-kube-api-access-mgnkf\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.117071 kubelet[2690]: I0209 14:10:08.117008 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.117571 kubelet[2690]: I0209 14:10:08.117049 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-etc-cni-netd\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.117571 kubelet[2690]: I0209 14:10:08.117097 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.117571 kubelet[2690]: I0209 14:10:08.117195 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-host-proc-sys-net\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.117571 kubelet[2690]: I0209 14:10:08.117263 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cni-path\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.117571 kubelet[2690]: I0209 14:10:08.117265 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.118388 kubelet[2690]: I0209 14:10:08.117326 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6209d075-666f-40ed-aab4-d0989090d806-hubble-tls\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.118388 kubelet[2690]: I0209 14:10:08.117328 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cni-path" (OuterVolumeSpecName: "cni-path") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.118388 kubelet[2690]: I0209 14:10:08.117393 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6209d075-666f-40ed-aab4-d0989090d806-clustermesh-secrets\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.118388 kubelet[2690]: I0209 14:10:08.117458 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-host-proc-sys-kernel\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.118388 kubelet[2690]: I0209 14:10:08.117513 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-bpf-maps\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.118388 kubelet[2690]: I0209 14:10:08.117584 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6209d075-666f-40ed-aab4-d0989090d806-cilium-config-path\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.119085 kubelet[2690]: I0209 14:10:08.117566 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.119085 kubelet[2690]: I0209 14:10:08.117645 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-xtables-lock\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.119085 kubelet[2690]: I0209 14:10:08.117703 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-hostproc\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.119085 kubelet[2690]: I0209 14:10:08.117646 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.119085 kubelet[2690]: I0209 14:10:08.117756 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cilium-run\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.119085 kubelet[2690]: I0209 14:10:08.117839 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-lib-modules\") pod \"6209d075-666f-40ed-aab4-d0989090d806\" (UID: \"6209d075-666f-40ed-aab4-d0989090d806\") " Feb 9 14:10:08.119735 kubelet[2690]: I0209 14:10:08.117848 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-hostproc" (OuterVolumeSpecName: "hostproc") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.119735 kubelet[2690]: I0209 14:10:08.117830 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.119735 kubelet[2690]: I0209 14:10:08.117943 2690 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-fzdd4\" (UniqueName: \"kubernetes.io/projected/b7092c37-d7c1-458d-8dfb-a389a62463af-kube-api-access-fzdd4\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.119735 kubelet[2690]: I0209 14:10:08.117956 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.119735 kubelet[2690]: I0209 14:10:08.117888 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:08.120290 kubelet[2690]: I0209 14:10:08.118030 2690 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7092c37-d7c1-458d-8dfb-a389a62463af-cilium-config-path\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.120290 kubelet[2690]: I0209 14:10:08.118095 2690 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cilium-cgroup\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.120290 kubelet[2690]: W0209 14:10:08.118126 2690 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6209d075-666f-40ed-aab4-d0989090d806/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 14:10:08.120290 kubelet[2690]: I0209 14:10:08.118161 2690 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-etc-cni-netd\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.120290 kubelet[2690]: I0209 14:10:08.118208 2690 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-host-proc-sys-net\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.120290 kubelet[2690]: I0209 14:10:08.118257 2690 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cni-path\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.120290 kubelet[2690]: I0209 14:10:08.118316 2690 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-host-proc-sys-kernel\") on node 
\"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.120290 kubelet[2690]: I0209 14:10:08.118376 2690 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-bpf-maps\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.122984 kubelet[2690]: I0209 14:10:08.122877 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6209d075-666f-40ed-aab4-d0989090d806-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 14:10:08.123807 kubelet[2690]: I0209 14:10:08.123686 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6209d075-666f-40ed-aab4-d0989090d806-kube-api-access-mgnkf" (OuterVolumeSpecName: "kube-api-access-mgnkf") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "kube-api-access-mgnkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 14:10:08.124680 kubelet[2690]: I0209 14:10:08.124581 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6209d075-666f-40ed-aab4-d0989090d806-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 14:10:08.125073 kubelet[2690]: I0209 14:10:08.124973 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6209d075-666f-40ed-aab4-d0989090d806-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6209d075-666f-40ed-aab4-d0989090d806" (UID: "6209d075-666f-40ed-aab4-d0989090d806"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 14:10:08.219646 kubelet[2690]: I0209 14:10:08.219530 2690 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6209d075-666f-40ed-aab4-d0989090d806-cilium-config-path\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.219646 kubelet[2690]: I0209 14:10:08.219605 2690 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-xtables-lock\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.219646 kubelet[2690]: I0209 14:10:08.219644 2690 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-hostproc\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.219646 kubelet[2690]: I0209 14:10:08.219674 2690 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-cilium-run\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.220308 kubelet[2690]: I0209 14:10:08.219705 2690 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6209d075-666f-40ed-aab4-d0989090d806-lib-modules\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.220308 kubelet[2690]: I0209 14:10:08.219737 2690 reconciler_common.go:295] "Volume detached for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6209d075-666f-40ed-aab4-d0989090d806-clustermesh-secrets\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.220308 kubelet[2690]: I0209 14:10:08.219771 2690 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-mgnkf\" (UniqueName: \"kubernetes.io/projected/6209d075-666f-40ed-aab4-d0989090d806-kube-api-access-mgnkf\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.220308 kubelet[2690]: I0209 14:10:08.219819 2690 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6209d075-666f-40ed-aab4-d0989090d806-hubble-tls\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:08.744093 kubelet[2690]: I0209 14:10:08.743992 2690 scope.go:115] "RemoveContainer" containerID="16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718" Feb 9 14:10:08.746742 env[1559]: time="2024-02-09T14:10:08.746628651Z" level=info msg="RemoveContainer for \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\"" Feb 9 14:10:08.749822 env[1559]: time="2024-02-09T14:10:08.749792551Z" level=info msg="RemoveContainer for \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\" returns successfully" Feb 9 14:10:08.749910 kubelet[2690]: I0209 14:10:08.749903 2690 scope.go:115] "RemoveContainer" containerID="83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208" Feb 9 14:10:08.750540 env[1559]: time="2024-02-09T14:10:08.750525755Z" level=info msg="RemoveContainer for \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\"" Feb 9 14:10:08.751525 env[1559]: time="2024-02-09T14:10:08.751512786Z" level=info msg="RemoveContainer for \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\" returns successfully" Feb 9 14:10:08.751578 kubelet[2690]: I0209 14:10:08.751570 2690 scope.go:115] "RemoveContainer" 
containerID="1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9" Feb 9 14:10:08.751964 env[1559]: time="2024-02-09T14:10:08.751952250Z" level=info msg="RemoveContainer for \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\"" Feb 9 14:10:08.752955 env[1559]: time="2024-02-09T14:10:08.752944300Z" level=info msg="RemoveContainer for \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\" returns successfully" Feb 9 14:10:08.753001 kubelet[2690]: I0209 14:10:08.752993 2690 scope.go:115] "RemoveContainer" containerID="46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619" Feb 9 14:10:08.753415 env[1559]: time="2024-02-09T14:10:08.753402740Z" level=info msg="RemoveContainer for \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\"" Feb 9 14:10:08.754582 env[1559]: time="2024-02-09T14:10:08.754539033Z" level=info msg="RemoveContainer for \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\" returns successfully" Feb 9 14:10:08.754628 kubelet[2690]: I0209 14:10:08.754617 2690 scope.go:115] "RemoveContainer" containerID="9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833" Feb 9 14:10:08.755077 env[1559]: time="2024-02-09T14:10:08.755036811Z" level=info msg="RemoveContainer for \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\"" Feb 9 14:10:08.756043 env[1559]: time="2024-02-09T14:10:08.756031264Z" level=info msg="RemoveContainer for \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\" returns successfully" Feb 9 14:10:08.756094 kubelet[2690]: I0209 14:10:08.756088 2690 scope.go:115] "RemoveContainer" containerID="16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718" Feb 9 14:10:08.756201 env[1559]: time="2024-02-09T14:10:08.756153996Z" level=error msg="ContainerStatus for \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\": not found" Feb 9 14:10:08.756286 kubelet[2690]: E0209 14:10:08.756275 2690 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\": not found" containerID="16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718" Feb 9 14:10:08.756330 kubelet[2690]: I0209 14:10:08.756301 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718} err="failed to get container status \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\": rpc error: code = NotFound desc = an error occurred when try to find container \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\": not found" Feb 9 14:10:08.756330 kubelet[2690]: I0209 14:10:08.756310 2690 scope.go:115] "RemoveContainer" containerID="83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208" Feb 9 14:10:08.756399 env[1559]: time="2024-02-09T14:10:08.756374660Z" level=error msg="ContainerStatus for \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\": not found" Feb 9 14:10:08.756447 kubelet[2690]: E0209 14:10:08.756439 2690 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\": not found" containerID="83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208" Feb 9 14:10:08.756489 kubelet[2690]: I0209 14:10:08.756457 2690 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={Type:containerd ID:83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208} err="failed to get container status \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\": rpc error: code = NotFound desc = an error occurred when try to find container \"83f647b96be4b33609d88008cd8e8726bbcec85e8c553e2c24d2673822f83208\": not found" Feb 9 14:10:08.756489 kubelet[2690]: I0209 14:10:08.756467 2690 scope.go:115] "RemoveContainer" containerID="1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9" Feb 9 14:10:08.756597 env[1559]: time="2024-02-09T14:10:08.756571891Z" level=error msg="ContainerStatus for \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\": not found" Feb 9 14:10:08.756662 kubelet[2690]: E0209 14:10:08.756655 2690 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\": not found" containerID="1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9" Feb 9 14:10:08.756689 kubelet[2690]: I0209 14:10:08.756676 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9} err="failed to get container status \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\": rpc error: code = NotFound desc = an error occurred when try to find container \"1607a9e43bf39ae029ecff87c0db56e183bb82a5b5fac554053dc9b4ce9ceff9\": not found" Feb 9 14:10:08.756689 kubelet[2690]: I0209 14:10:08.756686 2690 scope.go:115] "RemoveContainer" containerID="46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619" Feb 9 14:10:08.756817 env[1559]: 
time="2024-02-09T14:10:08.756780696Z" level=error msg="ContainerStatus for \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\": not found" Feb 9 14:10:08.756873 kubelet[2690]: E0209 14:10:08.756865 2690 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\": not found" containerID="46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619" Feb 9 14:10:08.756908 kubelet[2690]: I0209 14:10:08.756882 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619} err="failed to get container status \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\": rpc error: code = NotFound desc = an error occurred when try to find container \"46eabbfc5ed0efcc703ea6b0bbb118ecbf7182f32cc9ce97e71f754ba1e63619\": not found" Feb 9 14:10:08.756908 kubelet[2690]: I0209 14:10:08.756887 2690 scope.go:115] "RemoveContainer" containerID="9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833" Feb 9 14:10:08.757007 env[1559]: time="2024-02-09T14:10:08.756975742Z" level=error msg="ContainerStatus for \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\": not found" Feb 9 14:10:08.757059 kubelet[2690]: E0209 14:10:08.757053 2690 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\": 
not found" containerID="9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833" Feb 9 14:10:08.757085 kubelet[2690]: I0209 14:10:08.757067 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833} err="failed to get container status \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f0d83fb202c563eb50decf45e3db147557ca43d4e88d351ef49ce0a83d9a833\": not found" Feb 9 14:10:08.757085 kubelet[2690]: I0209 14:10:08.757073 2690 scope.go:115] "RemoveContainer" containerID="b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1" Feb 9 14:10:08.757466 env[1559]: time="2024-02-09T14:10:08.757455247Z" level=info msg="RemoveContainer for \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\"" Feb 9 14:10:08.758495 env[1559]: time="2024-02-09T14:10:08.758484340Z" level=info msg="RemoveContainer for \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\" returns successfully" Feb 9 14:10:08.758573 kubelet[2690]: I0209 14:10:08.758568 2690 scope.go:115] "RemoveContainer" containerID="b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1" Feb 9 14:10:08.758667 env[1559]: time="2024-02-09T14:10:08.758642256Z" level=error msg="ContainerStatus for \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\": not found" Feb 9 14:10:08.758710 kubelet[2690]: E0209 14:10:08.758705 2690 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\": not found" 
containerID="b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1" Feb 9 14:10:08.758736 kubelet[2690]: I0209 14:10:08.758718 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1} err="failed to get container status \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\": not found" Feb 9 14:10:08.853436 systemd[1]: var-lib-kubelet-pods-b7092c37\x2dd7c1\x2d458d\x2d8dfb\x2da389a62463af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfzdd4.mount: Deactivated successfully. Feb 9 14:10:08.853519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1-rootfs.mount: Deactivated successfully. Feb 9 14:10:08.853580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1-shm.mount: Deactivated successfully. Feb 9 14:10:08.853627 systemd[1]: var-lib-kubelet-pods-6209d075\x2d666f\x2d40ed\x2daab4\x2dd0989090d806-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmgnkf.mount: Deactivated successfully. Feb 9 14:10:08.853681 systemd[1]: var-lib-kubelet-pods-6209d075\x2d666f\x2d40ed\x2daab4\x2dd0989090d806-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 14:10:08.853733 systemd[1]: var-lib-kubelet-pods-6209d075\x2d666f\x2d40ed\x2daab4\x2dd0989090d806-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 14:10:08.872389 env[1559]: time="2024-02-09T14:10:08.872356897Z" level=info msg="StopContainer for \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\" with timeout 1 (s)" Feb 9 14:10:08.872389 env[1559]: time="2024-02-09T14:10:08.872363633Z" level=info msg="StopContainer for \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\" with timeout 1 (s)" Feb 9 14:10:08.872724 env[1559]: time="2024-02-09T14:10:08.872398740Z" level=error msg="StopContainer for \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\": not found" Feb 9 14:10:08.872724 env[1559]: time="2024-02-09T14:10:08.872402581Z" level=error msg="StopContainer for \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\": not found" Feb 9 14:10:08.872724 env[1559]: time="2024-02-09T14:10:08.872647855Z" level=info msg="StopPodSandbox for \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\"" Feb 9 14:10:08.872724 env[1559]: time="2024-02-09T14:10:08.872665163Z" level=info msg="StopPodSandbox for \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\"" Feb 9 14:10:08.872838 kubelet[2690]: E0209 14:10:08.872517 2690 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1\": not found" containerID="b72e02b58336f7dc147c822eeca8ca03470206c34d1794356321b5b586d89bd1" Feb 9 14:10:08.872838 kubelet[2690]: E0209 14:10:08.872519 2690 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718\": not found" containerID="16aae49616260f5506f36964a0d63baaf9e42c19d74016ae82fb8eb078022718" Feb 9 14:10:08.872904 env[1559]: time="2024-02-09T14:10:08.872710440Z" level=info msg="TearDown network for sandbox \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" successfully" Feb 9 14:10:08.872904 env[1559]: time="2024-02-09T14:10:08.872711860Z" level=info msg="TearDown network for sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" successfully" Feb 9 14:10:08.872904 env[1559]: time="2024-02-09T14:10:08.872733313Z" level=info msg="StopPodSandbox for \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" returns successfully" Feb 9 14:10:08.872904 env[1559]: time="2024-02-09T14:10:08.872737992Z" level=info msg="StopPodSandbox for \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" returns successfully" Feb 9 14:10:08.872987 kubelet[2690]: I0209 14:10:08.872943 2690 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6209d075-666f-40ed-aab4-d0989090d806 path="/var/lib/kubelet/pods/6209d075-666f-40ed-aab4-d0989090d806/volumes" Feb 9 14:10:08.873330 kubelet[2690]: I0209 14:10:08.873321 2690 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b7092c37-d7c1-458d-8dfb-a389a62463af path="/var/lib/kubelet/pods/b7092c37-d7c1-458d-8dfb-a389a62463af/volumes" Feb 9 14:10:09.789100 sshd[5761]: pam_unix(sshd:session): session closed for user core Feb 9 14:10:09.794115 systemd[1]: Started sshd@77-86.109.11.101:22-147.75.109.163:53682.service. Feb 9 14:10:09.794512 systemd[1]: sshd@76-86.109.11.101:22-147.75.109.163:53666.service: Deactivated successfully. Feb 9 14:10:09.795183 systemd[1]: session-50.scope: Deactivated successfully. Feb 9 14:10:09.795193 systemd-logind[1545]: Session 50 logged out. Waiting for processes to exit. 
Feb 9 14:10:09.795773 systemd-logind[1545]: Removed session 50. Feb 9 14:10:09.825128 sshd[5939]: Accepted publickey for core from 147.75.109.163 port 53682 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:10:09.826059 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:10:09.829532 systemd-logind[1545]: New session 51 of user core. Feb 9 14:10:09.830448 systemd[1]: Started session-51.scope. Feb 9 14:10:10.130853 sshd[5939]: pam_unix(sshd:session): session closed for user core Feb 9 14:10:10.132949 systemd[1]: Started sshd@78-86.109.11.101:22-147.75.109.163:53684.service. Feb 9 14:10:10.133397 systemd[1]: sshd@77-86.109.11.101:22-147.75.109.163:53682.service: Deactivated successfully. Feb 9 14:10:10.134110 systemd[1]: session-51.scope: Deactivated successfully. Feb 9 14:10:10.134111 systemd-logind[1545]: Session 51 logged out. Waiting for processes to exit. Feb 9 14:10:10.134548 systemd-logind[1545]: Removed session 51. Feb 9 14:10:10.138906 kubelet[2690]: I0209 14:10:10.138887 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 14:10:10.139155 kubelet[2690]: E0209 14:10:10.138920 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b7092c37-d7c1-458d-8dfb-a389a62463af" containerName="cilium-operator" Feb 9 14:10:10.139155 kubelet[2690]: E0209 14:10:10.138927 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6209d075-666f-40ed-aab4-d0989090d806" containerName="apply-sysctl-overwrites" Feb 9 14:10:10.139155 kubelet[2690]: E0209 14:10:10.138932 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6209d075-666f-40ed-aab4-d0989090d806" containerName="mount-cgroup" Feb 9 14:10:10.139155 kubelet[2690]: E0209 14:10:10.138936 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6209d075-666f-40ed-aab4-d0989090d806" containerName="mount-bpf-fs" Feb 9 14:10:10.139155 kubelet[2690]: E0209 14:10:10.138940 2690 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6209d075-666f-40ed-aab4-d0989090d806" containerName="clean-cilium-state" Feb 9 14:10:10.139155 kubelet[2690]: E0209 14:10:10.138944 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6209d075-666f-40ed-aab4-d0989090d806" containerName="cilium-agent" Feb 9 14:10:10.139155 kubelet[2690]: I0209 14:10:10.138956 2690 memory_manager.go:346] "RemoveStaleState removing state" podUID="6209d075-666f-40ed-aab4-d0989090d806" containerName="cilium-agent" Feb 9 14:10:10.139155 kubelet[2690]: I0209 14:10:10.138960 2690 memory_manager.go:346] "RemoveStaleState removing state" podUID="b7092c37-d7c1-458d-8dfb-a389a62463af" containerName="cilium-operator" Feb 9 14:10:10.164297 sshd[5965]: Accepted publickey for core from 147.75.109.163 port 53684 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:10:10.165074 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:10:10.167481 systemd-logind[1545]: New session 52 of user core. Feb 9 14:10:10.168071 systemd[1]: Started session-52.scope. 
Feb 9 14:10:10.188209 kubelet[2690]: E0209 14:10:10.188175 2690 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 14:10:10.235847 kubelet[2690]: I0209 14:10:10.235745 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6355fa65-0c3c-468a-ba2b-2e326e9db638-hubble-tls\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236075 kubelet[2690]: I0209 14:10:10.235871 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-xtables-lock\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236075 kubelet[2690]: I0209 14:10:10.236007 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6355fa65-0c3c-468a-ba2b-2e326e9db638-clustermesh-secrets\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236317 kubelet[2690]: I0209 14:10:10.236098 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-host-proc-sys-net\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236317 kubelet[2690]: I0209 14:10:10.236160 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-host-proc-sys-kernel\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236517 kubelet[2690]: I0209 14:10:10.236313 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-run\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236517 kubelet[2690]: I0209 14:10:10.236400 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cni-path\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236701 kubelet[2690]: I0209 14:10:10.236519 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxjcw\" (UniqueName: \"kubernetes.io/projected/6355fa65-0c3c-468a-ba2b-2e326e9db638-kube-api-access-kxjcw\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236701 kubelet[2690]: I0209 14:10:10.236612 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-bpf-maps\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236925 kubelet[2690]: I0209 14:10:10.236742 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-hostproc\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " 
pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236925 kubelet[2690]: I0209 14:10:10.236858 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-etc-cni-netd\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.236925 kubelet[2690]: I0209 14:10:10.236921 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-ipsec-secrets\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.237195 kubelet[2690]: I0209 14:10:10.236983 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-lib-modules\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.237195 kubelet[2690]: I0209 14:10:10.237131 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-cgroup\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.237375 kubelet[2690]: I0209 14:10:10.237222 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-config-path\") pod \"cilium-5tqrw\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " pod="kube-system/cilium-5tqrw" Feb 9 14:10:10.311578 sshd[5965]: pam_unix(sshd:session): session closed for user core Feb 9 
14:10:10.313115 systemd[1]: Started sshd@79-86.109.11.101:22-147.75.109.163:53688.service. Feb 9 14:10:10.313423 systemd[1]: sshd@78-86.109.11.101:22-147.75.109.163:53684.service: Deactivated successfully. Feb 9 14:10:10.313960 systemd-logind[1545]: Session 52 logged out. Waiting for processes to exit. Feb 9 14:10:10.314026 systemd[1]: session-52.scope: Deactivated successfully. Feb 9 14:10:10.314513 systemd-logind[1545]: Removed session 52. Feb 9 14:10:10.344285 sshd[5991]: Accepted publickey for core from 147.75.109.163 port 53688 ssh2: RSA SHA256:U9TXCuLWmaN/vK3Q2mHS688pTPlFTaEaiETkjLfvCUc Feb 9 14:10:10.347458 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 14:10:10.375858 systemd-logind[1545]: New session 53 of user core. Feb 9 14:10:10.377103 systemd[1]: Started session-53.scope. Feb 9 14:10:10.442136 env[1559]: time="2024-02-09T14:10:10.442009868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5tqrw,Uid:6355fa65-0c3c-468a-ba2b-2e326e9db638,Namespace:kube-system,Attempt:0,}" Feb 9 14:10:10.462263 env[1559]: time="2024-02-09T14:10:10.462115318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:10:10.462263 env[1559]: time="2024-02-09T14:10:10.462184968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:10:10.462263 env[1559]: time="2024-02-09T14:10:10.462209794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:10:10.462678 env[1559]: time="2024-02-09T14:10:10.462500118Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf pid=6018 runtime=io.containerd.runc.v2 Feb 9 14:10:10.511089 env[1559]: time="2024-02-09T14:10:10.511060869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5tqrw,Uid:6355fa65-0c3c-468a-ba2b-2e326e9db638,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\"" Feb 9 14:10:10.512450 env[1559]: time="2024-02-09T14:10:10.512433619Z" level=info msg="CreateContainer within sandbox \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 14:10:10.517397 env[1559]: time="2024-02-09T14:10:10.517376345Z" level=info msg="CreateContainer within sandbox \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d58653f19983db4fbda68af25fc37f9fc0f9ca356cea687855931acdfc707fe\"" Feb 9 14:10:10.517657 env[1559]: time="2024-02-09T14:10:10.517640653Z" level=info msg="StartContainer for \"9d58653f19983db4fbda68af25fc37f9fc0f9ca356cea687855931acdfc707fe\"" Feb 9 14:10:10.574766 env[1559]: time="2024-02-09T14:10:10.574695818Z" level=info msg="StartContainer for \"9d58653f19983db4fbda68af25fc37f9fc0f9ca356cea687855931acdfc707fe\" returns successfully" Feb 9 14:10:10.641282 env[1559]: time="2024-02-09T14:10:10.641172124Z" level=info msg="shim disconnected" id=9d58653f19983db4fbda68af25fc37f9fc0f9ca356cea687855931acdfc707fe Feb 9 14:10:10.641622 env[1559]: time="2024-02-09T14:10:10.641284455Z" level=warning msg="cleaning up after shim disconnected" id=9d58653f19983db4fbda68af25fc37f9fc0f9ca356cea687855931acdfc707fe namespace=k8s.io Feb 9 
14:10:10.641622 env[1559]: time="2024-02-09T14:10:10.641317469Z" level=info msg="cleaning up dead shim" Feb 9 14:10:10.669032 env[1559]: time="2024-02-09T14:10:10.668928241Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6108 runtime=io.containerd.runc.v2\n" Feb 9 14:10:10.760299 env[1559]: time="2024-02-09T14:10:10.760072590Z" level=info msg="StopPodSandbox for \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\"" Feb 9 14:10:10.760299 env[1559]: time="2024-02-09T14:10:10.760208057Z" level=info msg="Container to stop \"9d58653f19983db4fbda68af25fc37f9fc0f9ca356cea687855931acdfc707fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 14:10:10.831888 env[1559]: time="2024-02-09T14:10:10.831749775Z" level=info msg="shim disconnected" id=1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf Feb 9 14:10:10.832241 env[1559]: time="2024-02-09T14:10:10.831891984Z" level=warning msg="cleaning up after shim disconnected" id=1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf namespace=k8s.io Feb 9 14:10:10.832241 env[1559]: time="2024-02-09T14:10:10.831932917Z" level=info msg="cleaning up dead shim" Feb 9 14:10:10.848344 env[1559]: time="2024-02-09T14:10:10.848233818Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6140 runtime=io.containerd.runc.v2\n" Feb 9 14:10:10.848943 env[1559]: time="2024-02-09T14:10:10.848874007Z" level=info msg="TearDown network for sandbox \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\" successfully" Feb 9 14:10:10.848943 env[1559]: time="2024-02-09T14:10:10.848927818Z" level=info msg="StopPodSandbox for \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\" returns successfully" Feb 9 14:10:11.045169 kubelet[2690]: I0209 14:10:11.044950 2690 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-ipsec-secrets\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.045169 kubelet[2690]: I0209 14:10:11.045056 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-lib-modules\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.045169 kubelet[2690]: I0209 14:10:11.045119 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-hostproc\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.045741 kubelet[2690]: I0209 14:10:11.045184 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-config-path\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.045741 kubelet[2690]: I0209 14:10:11.045190 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.045741 kubelet[2690]: I0209 14:10:11.045244 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-host-proc-sys-net\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.045741 kubelet[2690]: I0209 14:10:11.045253 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-hostproc" (OuterVolumeSpecName: "hostproc") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.045741 kubelet[2690]: I0209 14:10:11.045305 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6355fa65-0c3c-468a-ba2b-2e326e9db638-hubble-tls\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.046641 kubelet[2690]: I0209 14:10:11.045370 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.046641 kubelet[2690]: I0209 14:10:11.045478 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-cgroup\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.046641 kubelet[2690]: I0209 14:10:11.045544 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.046641 kubelet[2690]: I0209 14:10:11.045591 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-bpf-maps\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.046641 kubelet[2690]: W0209 14:10:11.045624 2690 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6355fa65-0c3c-468a-ba2b-2e326e9db638/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 14:10:11.046641 kubelet[2690]: I0209 14:10:11.045694 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cni-path\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.047566 kubelet[2690]: I0209 14:10:11.045677 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-bpf-maps" 
(OuterVolumeSpecName: "bpf-maps") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.047566 kubelet[2690]: I0209 14:10:11.045827 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-run\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.047566 kubelet[2690]: I0209 14:10:11.045825 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cni-path" (OuterVolumeSpecName: "cni-path") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.047566 kubelet[2690]: I0209 14:10:11.045877 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.047566 kubelet[2690]: I0209 14:10:11.045958 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxjcw\" (UniqueName: \"kubernetes.io/projected/6355fa65-0c3c-468a-ba2b-2e326e9db638-kube-api-access-kxjcw\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.048133 kubelet[2690]: I0209 14:10:11.046068 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-host-proc-sys-kernel\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.048133 kubelet[2690]: I0209 14:10:11.046112 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.048133 kubelet[2690]: I0209 14:10:11.046183 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-xtables-lock\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.048133 kubelet[2690]: I0209 14:10:11.046247 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.048133 kubelet[2690]: I0209 14:10:11.046269 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6355fa65-0c3c-468a-ba2b-2e326e9db638-clustermesh-secrets\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.048658 kubelet[2690]: I0209 14:10:11.046324 2690 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-etc-cni-netd\") pod \"6355fa65-0c3c-468a-ba2b-2e326e9db638\" (UID: \"6355fa65-0c3c-468a-ba2b-2e326e9db638\") " Feb 9 14:10:11.048658 kubelet[2690]: I0209 14:10:11.046417 2690 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.048658 kubelet[2690]: I0209 14:10:11.046454 2690 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-xtables-lock\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.048658 kubelet[2690]: I0209 14:10:11.046451 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 14:10:11.048658 kubelet[2690]: I0209 14:10:11.046486 2690 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-lib-modules\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.048658 kubelet[2690]: I0209 14:10:11.046518 2690 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-hostproc\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.048658 kubelet[2690]: I0209 14:10:11.046549 2690 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-host-proc-sys-net\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.049394 kubelet[2690]: I0209 14:10:11.046580 2690 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-cgroup\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.049394 kubelet[2690]: I0209 14:10:11.046610 2690 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cni-path\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.049394 kubelet[2690]: I0209 14:10:11.046639 2690 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-bpf-maps\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.049394 kubelet[2690]: I0209 14:10:11.046668 2690 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-run\") on node \"ci-3510.3.2-a-2834128369\" DevicePath 
\"\"" Feb 9 14:10:11.050655 kubelet[2690]: I0209 14:10:11.050563 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 14:10:11.051081 kubelet[2690]: I0209 14:10:11.051040 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 14:10:11.051190 kubelet[2690]: I0209 14:10:11.051124 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6355fa65-0c3c-468a-ba2b-2e326e9db638-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 14:10:11.051229 kubelet[2690]: I0209 14:10:11.051197 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6355fa65-0c3c-468a-ba2b-2e326e9db638-kube-api-access-kxjcw" (OuterVolumeSpecName: "kube-api-access-kxjcw") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "kube-api-access-kxjcw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 14:10:11.051313 kubelet[2690]: I0209 14:10:11.051265 2690 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6355fa65-0c3c-468a-ba2b-2e326e9db638-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6355fa65-0c3c-468a-ba2b-2e326e9db638" (UID: "6355fa65-0c3c-468a-ba2b-2e326e9db638"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 14:10:11.148003 kubelet[2690]: I0209 14:10:11.147895 2690 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.148003 kubelet[2690]: I0209 14:10:11.147969 2690 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6355fa65-0c3c-468a-ba2b-2e326e9db638-cilium-config-path\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.148003 kubelet[2690]: I0209 14:10:11.148006 2690 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6355fa65-0c3c-468a-ba2b-2e326e9db638-hubble-tls\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.149194 kubelet[2690]: I0209 14:10:11.148044 2690 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-kxjcw\" (UniqueName: \"kubernetes.io/projected/6355fa65-0c3c-468a-ba2b-2e326e9db638-kube-api-access-kxjcw\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.149194 kubelet[2690]: I0209 14:10:11.148076 2690 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6355fa65-0c3c-468a-ba2b-2e326e9db638-etc-cni-netd\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.149194 kubelet[2690]: I0209 
14:10:11.148110 2690 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6355fa65-0c3c-468a-ba2b-2e326e9db638-clustermesh-secrets\") on node \"ci-3510.3.2-a-2834128369\" DevicePath \"\"" Feb 9 14:10:11.349767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf-rootfs.mount: Deactivated successfully. Feb 9 14:10:11.349903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf-shm.mount: Deactivated successfully. Feb 9 14:10:11.349955 systemd[1]: var-lib-kubelet-pods-6355fa65\x2d0c3c\x2d468a\x2dba2b\x2d2e326e9db638-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkxjcw.mount: Deactivated successfully. Feb 9 14:10:11.350003 systemd[1]: var-lib-kubelet-pods-6355fa65\x2d0c3c\x2d468a\x2dba2b\x2d2e326e9db638-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 14:10:11.350047 systemd[1]: var-lib-kubelet-pods-6355fa65\x2d0c3c\x2d468a\x2dba2b\x2d2e326e9db638-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 14:10:11.350124 systemd[1]: var-lib-kubelet-pods-6355fa65\x2d0c3c\x2d468a\x2dba2b\x2d2e326e9db638-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 14:10:11.765483 kubelet[2690]: I0209 14:10:11.765395 2690 scope.go:115] "RemoveContainer" containerID="9d58653f19983db4fbda68af25fc37f9fc0f9ca356cea687855931acdfc707fe" Feb 9 14:10:11.768045 env[1559]: time="2024-02-09T14:10:11.767955585Z" level=info msg="RemoveContainer for \"9d58653f19983db4fbda68af25fc37f9fc0f9ca356cea687855931acdfc707fe\"" Feb 9 14:10:11.771394 env[1559]: time="2024-02-09T14:10:11.771382551Z" level=info msg="RemoveContainer for \"9d58653f19983db4fbda68af25fc37f9fc0f9ca356cea687855931acdfc707fe\" returns successfully" Feb 9 14:10:11.784358 kubelet[2690]: I0209 14:10:11.784339 2690 topology_manager.go:210] "Topology Admit Handler" Feb 9 14:10:11.784479 kubelet[2690]: E0209 14:10:11.784376 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6355fa65-0c3c-468a-ba2b-2e326e9db638" containerName="mount-cgroup" Feb 9 14:10:11.784479 kubelet[2690]: I0209 14:10:11.784403 2690 memory_manager.go:346] "RemoveStaleState removing state" podUID="6355fa65-0c3c-468a-ba2b-2e326e9db638" containerName="mount-cgroup" Feb 9 14:10:11.952651 kubelet[2690]: I0209 14:10:11.952635 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-etc-cni-netd\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952651 kubelet[2690]: I0209 14:10:11.952657 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/67cce8fb-1723-409a-9e6f-873110e09c91-cilium-ipsec-secrets\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952800 kubelet[2690]: I0209 14:10:11.952690 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-host-proc-sys-kernel\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952800 kubelet[2690]: I0209 14:10:11.952717 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-hostproc\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952800 kubelet[2690]: I0209 14:10:11.952749 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-host-proc-sys-net\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952800 kubelet[2690]: I0209 14:10:11.952772 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-cni-path\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952878 kubelet[2690]: I0209 14:10:11.952805 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-lib-modules\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952878 kubelet[2690]: I0209 14:10:11.952839 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-xtables-lock\") pod \"cilium-ztvmp\" (UID: 
\"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952878 kubelet[2690]: I0209 14:10:11.952855 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67cce8fb-1723-409a-9e6f-873110e09c91-clustermesh-secrets\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952878 kubelet[2690]: I0209 14:10:11.952867 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-cilium-cgroup\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952878 kubelet[2690]: I0209 14:10:11.952878 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd8r9\" (UniqueName: \"kubernetes.io/projected/67cce8fb-1723-409a-9e6f-873110e09c91-kube-api-access-pd8r9\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952969 kubelet[2690]: I0209 14:10:11.952904 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-cilium-run\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.952969 kubelet[2690]: I0209 14:10:11.952949 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67cce8fb-1723-409a-9e6f-873110e09c91-cilium-config-path\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.953008 
kubelet[2690]: I0209 14:10:11.952973 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67cce8fb-1723-409a-9e6f-873110e09c91-bpf-maps\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:11.953008 kubelet[2690]: I0209 14:10:11.952990 2690 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67cce8fb-1723-409a-9e6f-873110e09c91-hubble-tls\") pod \"cilium-ztvmp\" (UID: \"67cce8fb-1723-409a-9e6f-873110e09c91\") " pod="kube-system/cilium-ztvmp" Feb 9 14:10:12.387976 env[1559]: time="2024-02-09T14:10:12.387877377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztvmp,Uid:67cce8fb-1723-409a-9e6f-873110e09c91,Namespace:kube-system,Attempt:0,}" Feb 9 14:10:12.401778 env[1559]: time="2024-02-09T14:10:12.401745403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 14:10:12.401778 env[1559]: time="2024-02-09T14:10:12.401771811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 14:10:12.401866 env[1559]: time="2024-02-09T14:10:12.401780319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 14:10:12.401889 env[1559]: time="2024-02-09T14:10:12.401865905Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f pid=6167 runtime=io.containerd.runc.v2 Feb 9 14:10:12.435379 env[1559]: time="2024-02-09T14:10:12.435342267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztvmp,Uid:67cce8fb-1723-409a-9e6f-873110e09c91,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\"" Feb 9 14:10:12.437343 env[1559]: time="2024-02-09T14:10:12.437308577Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 14:10:12.443521 env[1559]: time="2024-02-09T14:10:12.443487991Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2a33b041945a95dc57fe82885c62122fdb858cc854cd47f8189a0676cb1d46e1\"" Feb 9 14:10:12.443876 env[1559]: time="2024-02-09T14:10:12.443845341Z" level=info msg="StartContainer for \"2a33b041945a95dc57fe82885c62122fdb858cc854cd47f8189a0676cb1d46e1\"" Feb 9 14:10:12.445871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552804683.mount: Deactivated successfully. 
Feb 9 14:10:12.497391 env[1559]: time="2024-02-09T14:10:12.497324056Z" level=info msg="StartContainer for \"2a33b041945a95dc57fe82885c62122fdb858cc854cd47f8189a0676cb1d46e1\" returns successfully"
Feb 9 14:10:12.538054 env[1559]: time="2024-02-09T14:10:12.538004478Z" level=info msg="shim disconnected" id=2a33b041945a95dc57fe82885c62122fdb858cc854cd47f8189a0676cb1d46e1
Feb 9 14:10:12.538234 env[1559]: time="2024-02-09T14:10:12.538061025Z" level=warning msg="cleaning up after shim disconnected" id=2a33b041945a95dc57fe82885c62122fdb858cc854cd47f8189a0676cb1d46e1 namespace=k8s.io
Feb 9 14:10:12.538234 env[1559]: time="2024-02-09T14:10:12.538076000Z" level=info msg="cleaning up dead shim"
Feb 9 14:10:12.558144 env[1559]: time="2024-02-09T14:10:12.558075679Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6249 runtime=io.containerd.runc.v2\n"
Feb 9 14:10:12.777838 env[1559]: time="2024-02-09T14:10:12.777690557Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 14:10:12.793507 env[1559]: time="2024-02-09T14:10:12.793368209Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"99532febf92935fca5835670e330ce00b571aacfd5dd602d165f90e7a38f46b4\""
Feb 9 14:10:12.794386 env[1559]: time="2024-02-09T14:10:12.794282001Z" level=info msg="StartContainer for \"99532febf92935fca5835670e330ce00b571aacfd5dd602d165f90e7a38f46b4\""
Feb 9 14:10:12.878438 kubelet[2690]: I0209 14:10:12.878357 2690 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6355fa65-0c3c-468a-ba2b-2e326e9db638 path="/var/lib/kubelet/pods/6355fa65-0c3c-468a-ba2b-2e326e9db638/volumes"
Feb 9 14:10:12.910709 env[1559]: time="2024-02-09T14:10:12.910588043Z" level=info msg="StartContainer for \"99532febf92935fca5835670e330ce00b571aacfd5dd602d165f90e7a38f46b4\" returns successfully"
Feb 9 14:10:12.971715 env[1559]: time="2024-02-09T14:10:12.971581686Z" level=info msg="shim disconnected" id=99532febf92935fca5835670e330ce00b571aacfd5dd602d165f90e7a38f46b4
Feb 9 14:10:12.971715 env[1559]: time="2024-02-09T14:10:12.971685358Z" level=warning msg="cleaning up after shim disconnected" id=99532febf92935fca5835670e330ce00b571aacfd5dd602d165f90e7a38f46b4 namespace=k8s.io
Feb 9 14:10:12.971715 env[1559]: time="2024-02-09T14:10:12.971716937Z" level=info msg="cleaning up dead shim"
Feb 9 14:10:12.999022 env[1559]: time="2024-02-09T14:10:12.998914485Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6311 runtime=io.containerd.runc.v2\n"
Feb 9 14:10:13.784191 env[1559]: time="2024-02-09T14:10:13.784082218Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 14:10:13.790529 env[1559]: time="2024-02-09T14:10:13.790507797Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"26b7e3e1a6b6dbf319f44dbd673a9e94e51518fd531aa0d0cb0650801f8751fb\""
Feb 9 14:10:13.790851 env[1559]: time="2024-02-09T14:10:13.790797650Z" level=info msg="StartContainer for \"26b7e3e1a6b6dbf319f44dbd673a9e94e51518fd531aa0d0cb0650801f8751fb\""
Feb 9 14:10:13.792053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2077680928.mount: Deactivated successfully.
Feb 9 14:10:13.831041 env[1559]: time="2024-02-09T14:10:13.831011142Z" level=info msg="StartContainer for \"26b7e3e1a6b6dbf319f44dbd673a9e94e51518fd531aa0d0cb0650801f8751fb\" returns successfully"
Feb 9 14:10:13.856267 env[1559]: time="2024-02-09T14:10:13.856232622Z" level=info msg="shim disconnected" id=26b7e3e1a6b6dbf319f44dbd673a9e94e51518fd531aa0d0cb0650801f8751fb
Feb 9 14:10:13.856267 env[1559]: time="2024-02-09T14:10:13.856269254Z" level=warning msg="cleaning up after shim disconnected" id=26b7e3e1a6b6dbf319f44dbd673a9e94e51518fd531aa0d0cb0650801f8751fb namespace=k8s.io
Feb 9 14:10:13.856426 env[1559]: time="2024-02-09T14:10:13.856277998Z" level=info msg="cleaning up dead shim"
Feb 9 14:10:13.861401 env[1559]: time="2024-02-09T14:10:13.861341490Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6367 runtime=io.containerd.runc.v2\n"
Feb 9 14:10:14.084530 systemd[1]: Started sshd@80-86.109.11.101:22-218.92.0.25:63191.service.
Feb 9 14:10:14.399730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26b7e3e1a6b6dbf319f44dbd673a9e94e51518fd531aa0d0cb0650801f8751fb-rootfs.mount: Deactivated successfully.
Feb 9 14:10:14.789911 env[1559]: time="2024-02-09T14:10:14.789866883Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 14:10:14.794055 env[1559]: time="2024-02-09T14:10:14.794032652Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"39ac97b3f195af8b8d2b556fe07ac148f758c45ef3fa36a6f61ef8f3227f1c7a\""
Feb 9 14:10:14.794334 env[1559]: time="2024-02-09T14:10:14.794285620Z" level=info msg="StartContainer for \"39ac97b3f195af8b8d2b556fe07ac148f758c45ef3fa36a6f61ef8f3227f1c7a\""
Feb 9 14:10:14.795064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991083676.mount: Deactivated successfully.
Feb 9 14:10:14.844799 env[1559]: time="2024-02-09T14:10:14.844741072Z" level=info msg="StartContainer for \"39ac97b3f195af8b8d2b556fe07ac148f758c45ef3fa36a6f61ef8f3227f1c7a\" returns successfully"
Feb 9 14:10:14.888040 sshd[5445]: Connection closed by 101.42.135.203 port 44592 [preauth]
Feb 9 14:10:14.889633 systemd[1]: sshd@66-86.109.11.101:22-101.42.135.203:44592.service: Deactivated successfully.
Feb 9 14:10:14.896591 env[1559]: time="2024-02-09T14:10:14.896497632Z" level=info msg="shim disconnected" id=39ac97b3f195af8b8d2b556fe07ac148f758c45ef3fa36a6f61ef8f3227f1c7a
Feb 9 14:10:14.896930 env[1559]: time="2024-02-09T14:10:14.896589826Z" level=warning msg="cleaning up after shim disconnected" id=39ac97b3f195af8b8d2b556fe07ac148f758c45ef3fa36a6f61ef8f3227f1c7a namespace=k8s.io
Feb 9 14:10:14.896930 env[1559]: time="2024-02-09T14:10:14.896617446Z" level=info msg="cleaning up dead shim"
Feb 9 14:10:14.923944 env[1559]: time="2024-02-09T14:10:14.923862176Z" level=warning msg="cleanup warnings time=\"2024-02-09T14:10:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6425 runtime=io.containerd.runc.v2\n"
Feb 9 14:10:15.190352 kubelet[2690]: E0209 14:10:15.190260 2690 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 14:10:15.399881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39ac97b3f195af8b8d2b556fe07ac148f758c45ef3fa36a6f61ef8f3227f1c7a-rootfs.mount: Deactivated successfully.
Feb 9 14:10:15.462429 kubelet[2690]: I0209 14:10:15.462246 2690 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-2834128369" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 14:10:15.462143452 +0000 UTC m=+1010.709281495 LastTransitionTime:2024-02-09 14:10:15.462143452 +0000 UTC m=+1010.709281495 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 14:10:15.792866 env[1559]: time="2024-02-09T14:10:15.792817105Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 14:10:15.797562 env[1559]: time="2024-02-09T14:10:15.797537281Z" level=info msg="CreateContainer within sandbox \"9d1f330772ca0d79042ce593be80160bd2d4736ed14bddcd1b624f677e14bd6f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3462b8c75143397159f6de5510d3d1191bf6805c6197958c3a0541dbef7bb5f7\""
Feb 9 14:10:15.797884 env[1559]: time="2024-02-09T14:10:15.797866128Z" level=info msg="StartContainer for \"3462b8c75143397159f6de5510d3d1191bf6805c6197958c3a0541dbef7bb5f7\""
Feb 9 14:10:15.842526 env[1559]: time="2024-02-09T14:10:15.842487399Z" level=info msg="StartContainer for \"3462b8c75143397159f6de5510d3d1191bf6805c6197958c3a0541dbef7bb5f7\" returns successfully"
Feb 9 14:10:16.019799 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 14:10:16.809829 kubelet[2690]: I0209 14:10:16.809811 2690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ztvmp" podStartSLOduration=5.809781022 pod.CreationTimestamp="2024-02-09 14:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 14:10:16.809608255 +0000 UTC m=+1012.056746244" watchObservedRunningTime="2024-02-09 14:10:16.809781022 +0000 UTC m=+1012.056919004"
Feb 9 14:10:18.891244 systemd-networkd[1415]: lxc_health: Link UP
Feb 9 14:10:18.914423 systemd[1]: Started sshd@81-86.109.11.101:22-170.64.194.223:38220.service.
Feb 9 14:10:18.914713 systemd-networkd[1415]: lxc_health: Gained carrier
Feb 9 14:10:18.914845 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 14:10:19.738950 sshd[7137]: Invalid user javadmn from 170.64.194.223 port 38220
Feb 9 14:10:19.740106 sshd[7137]: pam_faillock(sshd:auth): User unknown
Feb 9 14:10:19.740317 sshd[7137]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:10:19.740336 sshd[7137]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.194.223
Feb 9 14:10:19.740510 sshd[7137]: pam_faillock(sshd:auth): User unknown
Feb 9 14:10:20.425948 systemd-networkd[1415]: lxc_health: Gained IPv6LL
Feb 9 14:10:21.534952 sshd[7137]: Failed password for invalid user javadmn from 170.64.194.223 port 38220 ssh2
Feb 9 14:10:21.763334 sshd[6379]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 14:10:22.538690 sshd[7137]: Received disconnect from 170.64.194.223 port 38220:11: Bye Bye [preauth]
Feb 9 14:10:22.538690 sshd[7137]: Disconnected from invalid user javadmn 170.64.194.223 port 38220 [preauth]
Feb 9 14:10:22.541074 systemd[1]: sshd@81-86.109.11.101:22-170.64.194.223:38220.service: Deactivated successfully.
Feb 9 14:10:24.165011 sshd[6379]: Failed password for root from 218.92.0.25 port 63191 ssh2
Feb 9 14:10:24.872021 env[1559]: time="2024-02-09T14:10:24.871887297Z" level=info msg="StopPodSandbox for \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\""
Feb 9 14:10:24.872811 env[1559]: time="2024-02-09T14:10:24.872155832Z" level=info msg="TearDown network for sandbox \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\" successfully"
Feb 9 14:10:24.872811 env[1559]: time="2024-02-09T14:10:24.872291281Z" level=info msg="StopPodSandbox for \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\" returns successfully"
Feb 9 14:10:24.873452 env[1559]: time="2024-02-09T14:10:24.873340457Z" level=info msg="RemovePodSandbox for \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\""
Feb 9 14:10:24.873618 env[1559]: time="2024-02-09T14:10:24.873434893Z" level=info msg="Forcibly stopping sandbox \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\""
Feb 9 14:10:24.873731 env[1559]: time="2024-02-09T14:10:24.873678155Z" level=info msg="TearDown network for sandbox \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\" successfully"
Feb 9 14:10:24.877657 env[1559]: time="2024-02-09T14:10:24.877623625Z" level=info msg="RemovePodSandbox \"1e9395931c2ce362873b2bfa657bc15431f9c46ab36f6ca79ea7df613ded93cf\" returns successfully"
Feb 9 14:10:24.877785 env[1559]: time="2024-02-09T14:10:24.877771720Z" level=info msg="StopPodSandbox for \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\""
Feb 9 14:10:24.877880 env[1559]: time="2024-02-09T14:10:24.877831913Z" level=info msg="TearDown network for sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" successfully"
Feb 9 14:10:24.877880 env[1559]: time="2024-02-09T14:10:24.877851812Z" level=info msg="StopPodSandbox for \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" returns successfully"
Feb 9 14:10:24.878315 env[1559]: time="2024-02-09T14:10:24.878297501Z" level=info msg="RemovePodSandbox for \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\""
Feb 9 14:10:24.878448 env[1559]: time="2024-02-09T14:10:24.878421906Z" level=info msg="Forcibly stopping sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\""
Feb 9 14:10:24.878506 env[1559]: time="2024-02-09T14:10:24.878492465Z" level=info msg="TearDown network for sandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" successfully"
Feb 9 14:10:24.880333 env[1559]: time="2024-02-09T14:10:24.880317201Z" level=info msg="RemovePodSandbox \"94acb830adc4f081b396209ab4ed60f89a94bcce2ec43b766e95fc3775e0c0d1\" returns successfully"
Feb 9 14:10:24.880631 env[1559]: time="2024-02-09T14:10:24.880594868Z" level=info msg="StopPodSandbox for \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\""
Feb 9 14:10:24.880656 env[1559]: time="2024-02-09T14:10:24.880629981Z" level=info msg="TearDown network for sandbox \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" successfully"
Feb 9 14:10:24.880656 env[1559]: time="2024-02-09T14:10:24.880646647Z" level=info msg="StopPodSandbox for \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" returns successfully"
Feb 9 14:10:24.880760 env[1559]: time="2024-02-09T14:10:24.880750145Z" level=info msg="RemovePodSandbox for \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\""
Feb 9 14:10:24.880796 env[1559]: time="2024-02-09T14:10:24.880763765Z" level=info msg="Forcibly stopping sandbox \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\""
Feb 9 14:10:24.880818 env[1559]: time="2024-02-09T14:10:24.880802843Z" level=info msg="TearDown network for sandbox \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" successfully"
Feb 9 14:10:24.881848 env[1559]: time="2024-02-09T14:10:24.881808873Z" level=info msg="RemovePodSandbox \"12d0532765a74edc4e723e54b9da42f39e6324cf288e68e1c5887387cffa4d1f\" returns successfully"
Feb 9 14:10:24.913387 systemd[1]: Started sshd@82-86.109.11.101:22-165.227.228.212:58460.service.
Feb 9 14:10:25.756360 sshd[7238]: Invalid user maryloli from 165.227.228.212 port 58460
Feb 9 14:10:25.762425 sshd[7238]: pam_faillock(sshd:auth): User unknown
Feb 9 14:10:25.763530 sshd[7238]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:10:25.763617 sshd[7238]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.228.212
Feb 9 14:10:25.764570 sshd[7238]: pam_faillock(sshd:auth): User unknown
Feb 9 14:10:26.390066 systemd[1]: Started sshd@83-86.109.11.101:22-101.42.135.203:37976.service.
Feb 9 14:10:27.043922 sshd[5991]: pam_unix(sshd:session): session closed for user core
Feb 9 14:10:27.049403 systemd[1]: sshd@79-86.109.11.101:22-147.75.109.163:53688.service: Deactivated successfully.
Feb 9 14:10:27.051934 systemd-logind[1545]: Session 53 logged out. Waiting for processes to exit.
Feb 9 14:10:27.052069 systemd[1]: session-53.scope: Deactivated successfully.
Feb 9 14:10:27.054331 systemd-logind[1545]: Removed session 53.
Feb 9 14:10:27.278830 sshd[7259]: Invalid user samwon from 101.42.135.203 port 37976
Feb 9 14:10:27.284751 sshd[7259]: pam_faillock(sshd:auth): User unknown
Feb 9 14:10:27.285865 sshd[7259]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 14:10:27.285954 sshd[7259]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=101.42.135.203
Feb 9 14:10:27.286889 sshd[7259]: pam_faillock(sshd:auth): User unknown
Feb 9 14:10:27.715761 sshd[7238]: Failed password for invalid user maryloli from 165.227.228.212 port 58460 ssh2
Feb 9 14:10:28.002300 sshd[7238]: Received disconnect from 165.227.228.212 port 58460:11: Bye Bye [preauth]
Feb 9 14:10:28.002300 sshd[7238]: Disconnected from invalid user maryloli 165.227.228.212 port 58460 [preauth]
Feb 9 14:10:28.004626 systemd[1]: sshd@82-86.109.11.101:22-165.227.228.212:58460.service: Deactivated successfully.