Feb 9 09:13:30.570029 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 09:13:30.570042 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 09:13:30.570050 kernel: BIOS-provided physical RAM map:
Feb 9 09:13:30.570054 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 9 09:13:30.570057 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 9 09:13:30.570061 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 9 09:13:30.570065 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 9 09:13:30.570069 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 9 09:13:30.570073 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000820e1fff] usable
Feb 9 09:13:30.570077 kernel: BIOS-e820: [mem 0x00000000820e2000-0x00000000820e2fff] ACPI NVS
Feb 9 09:13:30.570081 kernel: BIOS-e820: [mem 0x00000000820e3000-0x00000000820e3fff] reserved
Feb 9 09:13:30.570085 kernel: BIOS-e820: [mem 0x00000000820e4000-0x000000008afccfff] usable
Feb 9 09:13:30.570089 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 9 09:13:30.570093 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 9 09:13:30.570098 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 9 09:13:30.570103 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 9 09:13:30.570107 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 9 09:13:30.570111 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 9 09:13:30.570115 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 9 09:13:30.570119 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 9 09:13:30.570123 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 9 09:13:30.570128 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 09:13:30.570132 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 9 09:13:30.570136 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 9 09:13:30.570140 kernel: NX (Execute Disable) protection: active
Feb 9 09:13:30.570144 kernel: SMBIOS 3.2.1 present.
Feb 9 09:13:30.570149 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022
Feb 9 09:13:30.570153 kernel: tsc: Detected 3400.000 MHz processor
Feb 9 09:13:30.570157 kernel: tsc: Detected 3399.906 MHz TSC
Feb 9 09:13:30.570162 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 09:13:30.570166 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 09:13:30.570171 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 9 09:13:30.570175 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 09:13:30.570179 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 9 09:13:30.570184 kernel: Using GB pages for direct mapping
Feb 9 09:13:30.570188 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:13:30.570193 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 9 09:13:30.570197 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 9 09:13:30.570202 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 9 09:13:30.570206 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 9 09:13:30.570212 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 9 09:13:30.570217 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 9 09:13:30.570222 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 9 09:13:30.570227 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 9 09:13:30.570232 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 9 09:13:30.570236 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 9 09:13:30.570241 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 9 09:13:30.570246 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 9 09:13:30.570250 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 9 09:13:30.570255 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 09:13:30.570260 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 9 09:13:30.570265 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 9 09:13:30.570270 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 09:13:30.570274 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 09:13:30.570279 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 9 09:13:30.570283 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 9 09:13:30.570288 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 09:13:30.570293 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 9 09:13:30.570298 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 9 09:13:30.570303 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 9 09:13:30.570307 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 9 09:13:30.570312 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 9 09:13:30.570317 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 9 09:13:30.570321 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 9 09:13:30.570326 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 9 09:13:30.570330 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 9 09:13:30.570335 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 9 09:13:30.570340 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 9 09:13:30.570345 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 9 09:13:30.570350 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 9 09:13:30.570354 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 9 09:13:30.570359 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 9 09:13:30.570364 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 9 09:13:30.570368 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 9 09:13:30.570373 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 9 09:13:30.570377 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 9 09:13:30.570383 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 9 09:13:30.570387 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 9 09:13:30.570392 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 9 09:13:30.570396 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 9 09:13:30.570401 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 9 09:13:30.570406 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 9 09:13:30.570410 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 9 09:13:30.570415 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 9 09:13:30.570419 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 9 09:13:30.570425 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 9 09:13:30.570429 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 9 09:13:30.570434 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 9 09:13:30.570439 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 9 09:13:30.570443 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 9 09:13:30.570448 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 9 09:13:30.570453 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 9 09:13:30.570457 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 9 09:13:30.570463 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 9 09:13:30.570467 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 9 09:13:30.570472 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 9 09:13:30.570476 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 9 09:13:30.570481 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 9 09:13:30.570486 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 9 09:13:30.570490 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 9 09:13:30.570495 kernel: No NUMA configuration found
Feb 9 09:13:30.570500 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 9 09:13:30.570504 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 9 09:13:30.570510 kernel: Zone ranges:
Feb 9 09:13:30.570515 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 09:13:30.570519 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 09:13:30.570524 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 9 09:13:30.570528 kernel: Movable zone start for each node
Feb 9 09:13:30.570533 kernel: Early memory node ranges
Feb 9 09:13:30.570538 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 9 09:13:30.570542 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 9 09:13:30.570547 kernel: node 0: [mem 0x0000000040400000-0x00000000820e1fff]
Feb 9 09:13:30.570552 kernel: node 0: [mem 0x00000000820e4000-0x000000008afccfff]
Feb 9 09:13:30.570557 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 9 09:13:30.570564 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 9 09:13:30.570569 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 9 09:13:30.570574 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 9 09:13:30.570578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 09:13:30.570587 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 9 09:13:30.570592 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 9 09:13:30.570597 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 9 09:13:30.570602 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 9 09:13:30.570608 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 9 09:13:30.570613 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 9 09:13:30.570618 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 9 09:13:30.570623 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 9 09:13:30.570628 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 09:13:30.570633 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 09:13:30.570638 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 09:13:30.570643 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 09:13:30.570648 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 09:13:30.570653 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 09:13:30.570658 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 09:13:30.570663 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 09:13:30.570668 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 09:13:30.570673 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 09:13:30.570678 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 09:13:30.570683 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 09:13:30.570688 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 09:13:30.570693 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 09:13:30.570698 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 09:13:30.570703 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 09:13:30.570708 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 9 09:13:30.570713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 09:13:30.570718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 09:13:30.570723 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 09:13:30.570728 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 09:13:30.570734 kernel: TSC deadline timer available
Feb 9 09:13:30.570739 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 9 09:13:30.570744 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 9 09:13:30.570749 kernel: Booting paravirtualized kernel on bare hardware
Feb 9 09:13:30.570754 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 09:13:30.570759 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 9 09:13:30.570764 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 09:13:30.570769 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 09:13:30.570773 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 9 09:13:30.570779 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 9 09:13:30.570784 kernel: Policy zone: Normal
Feb 9 09:13:30.570790 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 09:13:30.570795 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:13:30.570800 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 9 09:13:30.570805 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 9 09:13:30.570810 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:13:30.570815 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 9 09:13:30.570821 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 9 09:13:30.570826 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 09:13:30.570831 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 09:13:30.570836 kernel: rcu: Hierarchical RCU implementation.
Feb 9 09:13:30.570841 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:13:30.570846 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 9 09:13:30.570851 kernel: Rude variant of Tasks RCU enabled.
Feb 9 09:13:30.570856 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:13:30.570861 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:13:30.570867 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 9 09:13:30.570872 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 9 09:13:30.570877 kernel: random: crng init done
Feb 9 09:13:30.570882 kernel: Console: colour dummy device 80x25
Feb 9 09:13:30.570887 kernel: printk: console [tty0] enabled
Feb 9 09:13:30.570892 kernel: printk: console [ttyS1] enabled
Feb 9 09:13:30.570897 kernel: ACPI: Core revision 20210730
Feb 9 09:13:30.570902 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 9 09:13:30.570907 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 09:13:30.570913 kernel: DMAR: Host address width 39
Feb 9 09:13:30.570918 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 9 09:13:30.570923 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 9 09:13:30.570928 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 9 09:13:30.570933 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 9 09:13:30.570938 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 9 09:13:30.570943 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 9 09:13:30.570948 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 9 09:13:30.570952 kernel: x2apic enabled
Feb 9 09:13:30.570958 kernel: Switched APIC routing to cluster x2apic.
Feb 9 09:13:30.570963 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 9 09:13:30.570968 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 9 09:13:30.570973 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 9 09:13:30.570978 kernel: process: using mwait in idle threads
Feb 9 09:13:30.570983 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 09:13:30.570988 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 09:13:30.570993 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 09:13:30.570998 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 09:13:30.571004 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 09:13:30.571008 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 09:13:30.571013 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 09:13:30.571018 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 09:13:30.571023 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 09:13:30.571028 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 09:13:30.571033 kernel: TAA: Mitigation: TSX disabled
Feb 9 09:13:30.571038 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 9 09:13:30.571043 kernel: SRBDS: Mitigation: Microcode
Feb 9 09:13:30.571047 kernel: GDS: Vulnerable: No microcode
Feb 9 09:13:30.571052 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 09:13:30.571058 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 09:13:30.571063 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 09:13:30.571068 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 09:13:30.571073 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 09:13:30.571078 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 09:13:30.571082 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 09:13:30.571087 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 09:13:30.571092 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 9 09:13:30.571097 kernel: Freeing SMP alternatives memory: 32K
Feb 9 09:13:30.571102 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:13:30.571107 kernel: LSM: Security Framework initializing
Feb 9 09:13:30.571112 kernel: SELinux: Initializing.
Feb 9 09:13:30.571117 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:13:30.571122 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:13:30.571127 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 9 09:13:30.571132 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 09:13:30.571137 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 9 09:13:30.571142 kernel: ... version: 4
Feb 9 09:13:30.571147 kernel: ... bit width: 48
Feb 9 09:13:30.571152 kernel: ... generic registers: 4
Feb 9 09:13:30.571157 kernel: ... value mask: 0000ffffffffffff
Feb 9 09:13:30.571162 kernel: ... max period: 00007fffffffffff
Feb 9 09:13:30.571167 kernel: ... fixed-purpose events: 3
Feb 9 09:13:30.571172 kernel: ... event mask: 000000070000000f
Feb 9 09:13:30.571177 kernel: signal: max sigframe size: 2032
Feb 9 09:13:30.571182 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:13:30.571187 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 9 09:13:30.571192 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:13:30.571197 kernel: x86: Booting SMP configuration:
Feb 9 09:13:30.571202 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 9 09:13:30.571207 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 09:13:30.571213 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 9 09:13:30.571218 kernel: smp: Brought up 1 node, 16 CPUs
Feb 9 09:13:30.571223 kernel: smpboot: Max logical packages: 1
Feb 9 09:13:30.571228 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 9 09:13:30.571233 kernel: devtmpfs: initialized
Feb 9 09:13:30.571238 kernel: x86/mm: Memory block size: 128MB
Feb 9 09:13:30.571243 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x820e2000-0x820e2fff] (4096 bytes)
Feb 9 09:13:30.571248 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 9 09:13:30.571254 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:13:30.571259 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 9 09:13:30.571264 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:13:30.571269 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:13:30.571274 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:13:30.571279 kernel: audit: type=2000 audit(1707470005.040:1): state=initialized audit_enabled=0 res=1
Feb 9 09:13:30.571284 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:13:30.571289 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 09:13:30.571294 kernel: cpuidle: using governor menu
Feb 9 09:13:30.571299 kernel: ACPI: bus type PCI registered
Feb 9 09:13:30.571304 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:13:30.571309 kernel: dca service started, version 1.12.1
Feb 9 09:13:30.571314 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 9 09:13:30.571319 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 9 09:13:30.571324 kernel: PCI: Using configuration type 1 for base access
Feb 9 09:13:30.571329 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 9 09:13:30.571334 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 09:13:30.571339 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:13:30.571345 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:13:30.571350 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:13:30.571355 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:13:30.571360 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:13:30.571365 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:13:30.571370 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:13:30.571375 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:13:30.571380 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:13:30.571385 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 9 09:13:30.571391 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 09:13:30.571396 kernel: ACPI: SSDT 0xFFFF8D1300212E00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 9 09:13:30.571401 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 9 09:13:30.571406 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 09:13:30.571411 kernel: ACPI: SSDT 0xFFFF8D1301AE6C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 9 09:13:30.571416 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 09:13:30.571421 kernel: ACPI: SSDT 0xFFFF8D1301A5B800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 9 09:13:30.571425 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 09:13:30.571430 kernel: ACPI: SSDT 0xFFFF8D1301A5C000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 9 09:13:30.571435 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 09:13:30.571441 kernel: ACPI: SSDT 0xFFFF8D130014D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 9 09:13:30.571446 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 09:13:30.571451 kernel: ACPI: SSDT 0xFFFF8D1301AE2000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 9 09:13:30.571456 kernel: ACPI: Interpreter enabled
Feb 9 09:13:30.571460 kernel: ACPI: PM: (supports S0 S5)
Feb 9 09:13:30.571465 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 09:13:30.571470 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 9 09:13:30.571475 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 9 09:13:30.571480 kernel: HEST: Table parsing has been initialized.
Feb 9 09:13:30.571486 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 9 09:13:30.571491 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 09:13:30.571496 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 9 09:13:30.571501 kernel: ACPI: PM: Power Resource [USBC]
Feb 9 09:13:30.571506 kernel: ACPI: PM: Power Resource [V0PR]
Feb 9 09:13:30.571510 kernel: ACPI: PM: Power Resource [V1PR]
Feb 9 09:13:30.571515 kernel: ACPI: PM: Power Resource [V2PR]
Feb 9 09:13:30.571520 kernel: ACPI: PM: Power Resource [WRST]
Feb 9 09:13:30.571525 kernel: ACPI: PM: Power Resource [FN00]
Feb 9 09:13:30.571531 kernel: ACPI: PM: Power Resource [FN01]
Feb 9 09:13:30.571536 kernel: ACPI: PM: Power Resource [FN02]
Feb 9 09:13:30.571541 kernel: ACPI: PM: Power Resource [FN03]
Feb 9 09:13:30.571545 kernel: ACPI: PM: Power Resource [FN04]
Feb 9 09:13:30.571550 kernel: ACPI: PM: Power Resource [PIN]
Feb 9 09:13:30.571555 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 9 09:13:30.571626 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:13:30.571673 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 9 09:13:30.571717 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 9 09:13:30.571725 kernel: PCI host bridge to bus 0000:00
Feb 9 09:13:30.571769 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 09:13:30.571807 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 09:13:30.571845 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 09:13:30.571882 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 9 09:13:30.571920 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 9 09:13:30.571958 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 9 09:13:30.572011 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 9 09:13:30.572062 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 9 09:13:30.572107 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 9 09:13:30.572156 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 9 09:13:30.572199 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 9 09:13:30.572249 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 9 09:13:30.572294 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 9 09:13:30.572342 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 9 09:13:30.572386 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 9 09:13:30.572429 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 9 09:13:30.572475 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 9 09:13:30.572520 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 9 09:13:30.572566 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 9 09:13:30.572615 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 9 09:13:30.572659 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 09:13:30.572706 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 9 09:13:30.572749 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 09:13:30.572798 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 9 09:13:30.572840 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 9 09:13:30.572883 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 9 09:13:30.572928 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 9 09:13:30.572972 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 9 09:13:30.573014 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 9 09:13:30.573060 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 9 09:13:30.573105 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 9 09:13:30.573147 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 9 09:13:30.573193 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 9 09:13:30.573236 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 9 09:13:30.573279 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 9 09:13:30.573320 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 9 09:13:30.573363 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 9 09:13:30.573411 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 9 09:13:30.573455 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 9 09:13:30.573499 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 9 09:13:30.573548 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 9 09:13:30.573595 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 9 09:13:30.573642 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 9 09:13:30.573688 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 9 09:13:30.573737 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 9 09:13:30.573782 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 9 09:13:30.573828 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 9 09:13:30.573872 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 9 09:13:30.573920 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 9 09:13:30.573965 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 9 09:13:30.574012 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 9 09:13:30.574056 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 09:13:30.574103 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 9 09:13:30.574153 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 9 09:13:30.574196 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 9 09:13:30.574239 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 9 09:13:30.574285 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 9 09:13:30.574329 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 9 09:13:30.574377 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 9 09:13:30.574425 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 9 09:13:30.574469 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 9 09:13:30.574513 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 9 09:13:30.574557 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 09:13:30.574605 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 09:13:30.574654 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 9 09:13:30.574699 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 9 09:13:30.574746 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 9 09:13:30.574790 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 9 09:13:30.574834 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 09:13:30.574878 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 09:13:30.574923 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 09:13:30.574966 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 9 09:13:30.575010 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 09:13:30.575053 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 9 09:13:30.575103 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Feb 9 09:13:30.575149 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Feb 9 09:13:30.575193 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 9 09:13:30.575237 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Feb 9 09:13:30.575281 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Feb 9 09:13:30.575325 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 9 09:13:30.575368 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 9 09:13:30.575413 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 9 09:13:30.575461 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 9 09:13:30.575509 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Feb 9 09:13:30.575553 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 9 09:13:30.575601 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Feb 9 09:13:30.575647 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 9 09:13:30.575690 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 9 09:13:30.575733 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 9 09:13:30.575777 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 9 09:13:30.575821 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 9 09:13:30.575871 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Feb 9 09:13:30.575918 kernel: pci 0000:06:00.0: enabling Extended Tags
Feb 9 09:13:30.575963 kernel: pci 0000:06:00.0: supports D1 D2
Feb 9 09:13:30.576008 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:13:30.576051 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 9 09:13:30.576095 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 9 09:13:30.576139 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 9 09:13:30.576186 kernel: pci_bus 0000:07: extended config space not accessible
Feb 9 09:13:30.576237 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Feb 9 09:13:30.576285 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Feb 9 09:13:30.576336 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Feb 9 09:13:30.576432 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Feb 9 09:13:30.576480 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 09:13:30.576528 kernel: pci 0000:07:00.0: supports D1 D2
Feb 9 09:13:30.576578 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:13:30.576623 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 9 09:13:30.576668 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 9 09:13:30.576712 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 9 09:13:30.576720 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Feb 9 09:13:30.576725 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Feb 9 09:13:30.576732 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Feb 9 09:13:30.576737 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Feb 9 09:13:30.576743 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Feb 9 09:13:30.576748 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Feb 9 09:13:30.576753 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Feb 9 09:13:30.576758 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Feb 9 09:13:30.576764 kernel: iommu: Default domain type: Translated
Feb 9 09:13:30.576769 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 09:13:30.576815 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Feb 9 09:13:30.576862 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 09:13:30.576910 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Feb 9 09:13:30.576918 kernel: vgaarb: loaded
Feb 9 09:13:30.576923 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:13:30.576929 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:13:30.576934 kernel: PTP clock support registered
Feb 9 09:13:30.576939 kernel: PCI: Using ACPI for IRQ routing
Feb 9 09:13:30.576945 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 09:13:30.576950 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Feb 9 09:13:30.576956 kernel: e820: reserve RAM buffer [mem 0x820e2000-0x83ffffff]
Feb 9 09:13:30.576961 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Feb 9 09:13:30.576966 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Feb 9 09:13:30.576971 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Feb 9 09:13:30.576976 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Feb 9 09:13:30.576982 kernel: clocksource: Switched to clocksource tsc-early
Feb 9 09:13:30.576987 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:13:30.576992 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:13:30.576998 kernel: pnp: PnP ACPI init
Feb 9 09:13:30.577042 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Feb 9 09:13:30.577086 kernel: pnp 00:02: [dma 0 disabled]
Feb 9 09:13:30.577132 kernel: pnp 00:03: [dma 0 disabled]
Feb 9 09:13:30.577175 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Feb 9 09:13:30.577214 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Feb 9 09:13:30.577257 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Feb 9 09:13:30.577300 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Feb 9 09:13:30.577339 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Feb 9 09:13:30.577377 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Feb 9 09:13:30.577415 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Feb 9 09:13:30.577456 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Feb 9 09:13:30.577493 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Feb 9 09:13:30.577532 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Feb 9 09:13:30.577574 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Feb 9 09:13:30.577615 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Feb 9 09:13:30.577655 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Feb 9 09:13:30.577693 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Feb 9 09:13:30.577732 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Feb 9 09:13:30.577769 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Feb 9 09:13:30.577807 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Feb 9 09:13:30.577847 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Feb 9 09:13:30.577889 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Feb 9 09:13:30.577897 kernel: pnp: PnP ACPI: found 10 devices
Feb 9 09:13:30.577903 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 09:13:30.577908 kernel: NET: Registered PF_INET protocol family
Feb 9 09:13:30.577914 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:13:30.577919 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 09:13:30.577924 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:13:30.577931 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:13:30.577937 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 09:13:30.577942 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Feb 9 09:13:30.577948 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 9 09:13:30.577953 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 9 09:13:30.577958 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:13:30.577963 kernel: NET: Registered PF_XDP protocol family
Feb 9 09:13:30.578006 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Feb 9 09:13:30.578051 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Feb 9 09:13:30.578095 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Feb 9 09:13:30.578140 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 9 09:13:30.578185 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 9 09:13:30.578230 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 9 09:13:30.578275 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 9 09:13:30.578318 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 09:13:30.578361 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 9 09:13:30.578406 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 09:13:30.578448 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 9 09:13:30.578492 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 9 09:13:30.578535 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 9 09:13:30.578583 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 9 09:13:30.578627 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 9 09:13:30.578671 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 9 09:13:30.578714 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 9 09:13:30.578757 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 9 09:13:30.578802 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 9 09:13:30.578846 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 9 09:13:30.578890 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 9 09:13:30.578933 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 9 09:13:30.578977 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 9 09:13:30.579020 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 9 09:13:30.579060 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Feb 9 09:13:30.579098 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 09:13:30.579135 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 09:13:30.579173 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 09:13:30.579211 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Feb 9 09:13:30.579249 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Feb 9 09:13:30.579294 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Feb 9 09:13:30.579337 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 09:13:30.579381 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Feb 9 09:13:30.579421 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Feb 9 09:13:30.579466 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 9 09:13:30.579506 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Feb 9 09:13:30.579549 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Feb 9 09:13:30.579593 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Feb 9 09:13:30.579636 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Feb 9 09:13:30.579678 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Feb 9 09:13:30.579685 kernel: PCI: CLS 64 bytes, default 64
Feb 9 09:13:30.579691 kernel: DMAR: No ATSR found
Feb 9 09:13:30.579696 kernel: DMAR: No SATC found
Feb 9 09:13:30.579702 kernel: DMAR: dmar0: Using Queued invalidation
Feb 9 09:13:30.579745 kernel: pci 0000:00:00.0: Adding to iommu group 0
Feb 9 09:13:30.579791 kernel: pci 0000:00:01.0: Adding to iommu group 1
Feb 9 09:13:30.579835 kernel: pci 0000:00:08.0: Adding to iommu group 2
Feb 9 09:13:30.579878 kernel: pci 0000:00:12.0: Adding to iommu group 3
Feb 9 09:13:30.579920 kernel: pci 0000:00:14.0: Adding to iommu group 4
Feb 9 09:13:30.579963 kernel: pci 0000:00:14.2: Adding to iommu group 4
Feb 9 09:13:30.580005 kernel: pci 0000:00:15.0: Adding to iommu group 5
Feb 9 09:13:30.580048 kernel: pci 0000:00:15.1: Adding to iommu group 5
Feb 9 09:13:30.580090 kernel: pci 0000:00:16.0: Adding to iommu group 6
Feb 9 09:13:30.580134 kernel: pci 0000:00:16.1: Adding to iommu group 6
Feb 9 09:13:30.580178 kernel: pci 0000:00:16.4: Adding to iommu group 6
Feb 9 09:13:30.580220 kernel: pci 0000:00:17.0: Adding to iommu group 7
Feb 9 09:13:30.580263 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Feb 9 09:13:30.580306 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Feb 9 09:13:30.580349 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Feb 9 09:13:30.580392 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Feb 9 09:13:30.580435 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Feb 9 09:13:30.580478 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Feb 9 09:13:30.580521 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Feb 9 09:13:30.580567 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Feb 9 09:13:30.580611 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Feb 9 09:13:30.580656 kernel: pci 0000:01:00.0: Adding to iommu group 1
Feb 9 09:13:30.580701 kernel: pci 0000:01:00.1: Adding to iommu group 1
Feb 9 09:13:30.580746 kernel: pci 0000:03:00.0: Adding to iommu group 15
Feb 9 09:13:30.580791 kernel: pci 0000:04:00.0: Adding to iommu group 16
Feb 9 09:13:30.580838 kernel: pci 0000:06:00.0: Adding to iommu group 17
Feb 9 09:13:30.580885 kernel: pci 0000:07:00.0: Adding to iommu group 17
Feb 9 09:13:30.580892 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Feb 9 09:13:30.580898 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 09:13:30.580903 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Feb 9 09:13:30.580909 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Feb 9 09:13:30.580914 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Feb 9 09:13:30.580920 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Feb 9 09:13:30.580926 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Feb 9 09:13:30.580972 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Feb 9 09:13:30.580980 kernel: Initialise system trusted keyrings
Feb 9 09:13:30.580986 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Feb 9 09:13:30.580991 kernel: Key type asymmetric registered
Feb 9 09:13:30.580996 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:13:30.581001 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:13:30.581007 kernel: io scheduler mq-deadline registered
Feb 9 09:13:30.581013 kernel: io scheduler kyber registered
Feb 9 09:13:30.581019 kernel: io scheduler bfq registered
Feb 9 09:13:30.581061 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Feb 9 09:13:30.581103 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Feb 9 09:13:30.581148 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Feb 9 09:13:30.581191 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Feb 9 09:13:30.581234 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Feb 9 09:13:30.581277 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Feb 9 09:13:30.581329 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Feb 9 09:13:30.581337 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Feb 9 09:13:30.581343 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Feb 9 09:13:30.581349 kernel: pstore: Registered erst as persistent store backend
Feb 9 09:13:30.581354 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 09:13:30.581360 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:13:30.581365 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 09:13:30.581370 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 09:13:30.581377 kernel: hpet_acpi_add: no address or irqs in _CRS
Feb 9 09:13:30.581419 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Feb 9 09:13:30.581428 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 09:13:30.581467 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Feb 9 09:13:30.581506 kernel: rtc_cmos rtc_cmos: registered as rtc0
Feb 9 09:13:30.581545 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T09:13:29 UTC (1707470009)
Feb 9 09:13:30.581588 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Feb 9 09:13:30.581596 kernel: fail to initialize ptp_kvm
Feb 9 09:13:30.581603 kernel: intel_pstate: Intel P-state driver initializing
Feb 9 09:13:30.581609 kernel: intel_pstate: Disabling energy efficiency optimization
Feb 9 09:13:30.581614 kernel: intel_pstate: HWP enabled
Feb 9 09:13:30.581619 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Feb 9 09:13:30.581625 kernel: vesafb: scrolling: redraw
Feb 9 09:13:30.581630 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Feb 9 09:13:30.581635 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000d831ade3, using 768k, total 768k
Feb 9 09:13:30.581641 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 09:13:30.581646 kernel: fb0: VESA VGA frame buffer device
Feb 9 09:13:30.581652 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:13:30.581658 kernel: Segment Routing with IPv6
Feb 9 09:13:30.581663 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:13:30.581668 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:13:30.581674 kernel: Key type dns_resolver registered
Feb 9 09:13:30.581679 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Feb 9 09:13:30.581684 kernel: microcode: Microcode Update Driver: v2.2.
Feb 9 09:13:30.581689 kernel: IPI shorthand broadcast: enabled
Feb 9 09:13:30.581695 kernel: sched_clock: Marking stable (1677460090, 1334094467)->(4429178954, -1417624397)
Feb 9 09:13:30.581701 kernel: registered taskstats version 1
Feb 9 09:13:30.581706 kernel: Loading compiled-in X.509 certificates
Feb 9 09:13:30.581712 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 9 09:13:30.581717 kernel: Key type .fscrypt registered
Feb 9 09:13:30.581722 kernel: Key type fscrypt-provisioning registered
Feb 9 09:13:30.581727 kernel: pstore: Using crash dump compression: deflate
Feb 9 09:13:30.581733 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:13:30.581738 kernel: ima: No architecture policies found
Feb 9 09:13:30.581743 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 09:13:30.581749 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 09:13:30.581755 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 09:13:30.581760 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 09:13:30.581765 kernel: Run /init as init process
Feb 9 09:13:30.581771 kernel: with arguments:
Feb 9 09:13:30.581776 kernel: /init
Feb 9 09:13:30.581781 kernel: with environment:
Feb 9 09:13:30.581786 kernel: HOME=/
Feb 9 09:13:30.581791 kernel: TERM=linux
Feb 9 09:13:30.581797 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:13:30.581804 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:13:30.581810 systemd[1]: Detected architecture x86-64.
Feb 9 09:13:30.581816 systemd[1]: Running in initrd.
Feb 9 09:13:30.581822 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:13:30.581827 systemd[1]: Hostname set to <localhost>.
Feb 9 09:13:30.581832 systemd[1]: Initializing machine ID from random generator.
Feb 9 09:13:30.581839 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:13:30.581844 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:13:30.581850 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:13:30.581855 systemd[1]: Reached target ignition-diskful-subsequent.target.
Feb 9 09:13:30.581861 systemd[1]: Reached target paths.target.
Feb 9 09:13:30.581866 systemd[1]: Reached target slices.target.
Feb 9 09:13:30.581871 systemd[1]: Reached target swap.target.
Feb 9 09:13:30.581877 systemd[1]: Reached target timers.target.
Feb 9 09:13:30.581883 systemd[1]: Listening on iscsid.socket.
Feb 9 09:13:30.581889 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:13:30.581894 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:13:30.581900 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:13:30.581905 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Feb 9 09:13:30.581911 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:13:30.581916 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Feb 9 09:13:30.581922 kernel: clocksource: Switched to clocksource tsc
Feb 9 09:13:30.581928 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:13:30.581934 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:13:30.581939 systemd[1]: Reached target sockets.target.
Feb 9 09:13:30.581945 systemd[1]: Starting iscsiuio.service...
Feb 9 09:13:30.581950 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:13:30.581956 kernel: SCSI subsystem initialized
Feb 9 09:13:30.581961 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:13:30.581966 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:13:30.581972 systemd[1]: Starting systemd-journald.service...
Feb 9 09:13:30.581978 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:13:30.581986 systemd-journald[269]: Journal started
Feb 9 09:13:30.582013 systemd-journald[269]: Runtime Journal (/run/log/journal/ec418c1c71ec4d3c9d7e2729375774e6) is 8.0M, max 640.1M, 632.1M free.
Feb 9 09:13:30.585299 systemd-modules-load[270]: Inserted module 'overlay'
Feb 9 09:13:30.609588 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:13:30.642593 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:13:30.642609 systemd[1]: Started iscsiuio.service.
Feb 9 09:13:30.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:13:30.668605 kernel: Bridge firewalling registered
Feb 9 09:13:30.668620 kernel: audit: type=1130 audit(1707470010.667:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:13:30.668628 systemd[1]: Started systemd-journald.service.
Feb 9 09:13:30.712096 systemd-modules-load[270]: Inserted module 'br_netfilter'
Feb 9 09:13:30.838663 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:13:30.838675 kernel: audit: type=1130 audit(1707470010.748:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:30.838683 kernel: device-mapper: uevent: version 1.0.3 Feb 9 09:13:30.838689 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 09:13:30.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:30.748779 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:13:30.887379 kernel: audit: type=1130 audit(1707470010.842:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:30.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:30.842745 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 09:13:30.939099 kernel: audit: type=1130 audit(1707470010.895:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:30.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:30.887912 systemd-modules-load[270]: Inserted module 'dm_multipath' Feb 9 09:13:30.992591 kernel: audit: type=1130 audit(1707470010.947:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:30.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:30.895856 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:13:31.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:30.947839 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 09:13:31.001118 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 09:13:31.047496 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:13:31.047633 kernel: audit: type=1130 audit(1707470011.000:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.047839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:13:31.050654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:13:31.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.051296 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 09:13:31.100770 kernel: audit: type=1130 audit(1707470011.050:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.112899 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 09:13:31.220362 kernel: audit: type=1130 audit(1707470011.112:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.220374 kernel: audit: type=1130 audit(1707470011.169:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.170163 systemd[1]: Starting dracut-cmdline.service... Feb 9 09:13:31.252673 kernel: iscsi: registered transport (tcp) Feb 9 09:13:31.252684 dracut-cmdline[291]: dracut-dracut-053 Feb 9 09:13:31.252684 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 09:13:31.252684 dracut-cmdline[291]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 09:13:31.323811 kernel: iscsi: registered transport (qla4xxx) Feb 9 09:13:31.323823 kernel: QLogic iSCSI HBA Driver Feb 9 09:13:31.312457 systemd[1]: Finished dracut-cmdline.service. Feb 9 09:13:31.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.350445 systemd[1]: Starting dracut-pre-udev.service... Feb 9 09:13:31.365107 systemd[1]: Starting iscsid.service... Feb 9 09:13:31.379897 systemd[1]: Started iscsid.service. Feb 9 09:13:31.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.405070 iscsid[445]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:13:31.405070 iscsid[445]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 09:13:31.405070 iscsid[445]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 09:13:31.405070 iscsid[445]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:13:31.405070 iscsid[445]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:13:31.405070 iscsid[445]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:13:31.405070 iscsid[445]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:13:31.556655 kernel: raid6: avx2x4 gen() 20166 MB/s Feb 9 09:13:31.556670 kernel: raid6: avx2x4 xor() 21229 MB/s Feb 9 09:13:31.556677 kernel: raid6: avx2x2 gen() 55007 MB/s Feb 9 09:13:31.556683 kernel: raid6: avx2x2 xor() 32912 MB/s Feb 9 09:13:31.556689 kernel: raid6: avx2x1 gen() 45637 MB/s Feb 9 09:13:31.599599 kernel: raid6: avx2x1 xor() 28070 MB/s Feb 9 09:13:31.634621 kernel: raid6: sse2x4 gen() 21404 MB/s Feb 9 09:13:31.669621 kernel: raid6: sse2x4 xor() 11600 MB/s Feb 9 09:13:31.704597 kernel: raid6: sse2x2 gen() 21788 MB/s Feb 9 09:13:31.739620 kernel: raid6: sse2x2 xor() 13480 MB/s Feb 9 09:13:31.774621 kernel: raid6: sse2x1 gen() 18341 MB/s Feb 9 09:13:31.827379 kernel: raid6: sse2x1 xor() 8980 MB/s Feb 9 09:13:31.827394 kernel: raid6: using algorithm avx2x2 gen() 55007 MB/s Feb 9 09:13:31.827401 kernel: raid6: .... xor() 32912 MB/s, rmw enabled Feb 9 09:13:31.845827 kernel: raid6: using avx2x2 recovery algorithm Feb 9 09:13:31.892611 kernel: xor: automatically using best checksumming function avx Feb 9 09:13:31.970594 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 09:13:31.975422 systemd[1]: Finished dracut-pre-udev.service. Feb 9 09:13:31.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:31.984000 audit: BPF prog-id=6 op=LOAD Feb 9 09:13:31.984000 audit: BPF prog-id=7 op=LOAD Feb 9 09:13:31.985546 systemd[1]: Starting systemd-udevd.service... Feb 9 09:13:31.993983 systemd-udevd[468]: Using default interface naming scheme 'v252'. Feb 9 09:13:32.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:32.001028 systemd[1]: Started systemd-udevd.service. Feb 9 09:13:32.044688 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Feb 9 09:13:32.019402 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 09:13:32.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:32.045744 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 09:13:32.063381 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:13:32.112147 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:13:32.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:32.112713 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:13:32.151650 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 09:13:32.151670 kernel: libata version 3.00 loaded. 
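The iscsid warnings above are harmless on this host, but for software iSCSI the missing /etc/iscsi/initiatorname.iscsi must exist before targets can be discovered. A minimal file following the format iscsid describes would look like this (the IQN below is illustrative, not this machine's):

    # /etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.2024-02.com.example:node1

As the log notes, hardware iSCSI HBAs such as qla4xxx do not need it.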
Feb 9 09:13:32.151684 kernel: ACPI: bus type USB registered Feb 9 09:13:32.179678 kernel: usbcore: registered new interface driver usbfs Feb 9 09:13:32.179735 kernel: usbcore: registered new interface driver hub Feb 9 09:13:32.198055 kernel: usbcore: registered new device driver usb Feb 9 09:13:32.216567 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 09:13:32.216583 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 09:13:32.251246 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 9 09:13:32.269568 kernel: AES CTR mode by8 optimization enabled Feb 9 09:13:32.269584 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 09:13:32.286569 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 9 09:13:32.306297 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 09:13:32.310606 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Feb 9 09:13:32.310680 kernel: pps pps0: new PPS source ptp0 Feb 9 09:13:32.310738 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 9 09:13:32.310795 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 09:13:32.310846 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:82 Feb 9 09:13:32.310897 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 9 09:13:32.310948 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 09:13:32.356569 kernel: pps pps1: new PPS source ptp1 Feb 9 09:13:32.356642 kernel: scsi host0: ahci Feb 9 09:13:32.356728 kernel: scsi host1: ahci Feb 9 09:13:32.356795 kernel: scsi host2: ahci Feb 9 09:13:32.356857 kernel: scsi host3: ahci Feb 9 09:13:32.356920 kernel: scsi host4: ahci Feb 9 09:13:32.356980 kernel: scsi host5: ahci Feb 9 09:13:32.357099 kernel: scsi host6: ahci Feb 9 09:13:32.357152 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Feb 9 09:13:32.357160 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Feb 9 09:13:32.357166 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Feb 9 09:13:32.357172 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Feb 9 09:13:32.357180 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Feb 9 09:13:32.357187 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Feb 9 09:13:32.357193 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Feb 9 09:13:32.375507 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 09:13:32.375584 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 9 09:13:32.665568 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 09:13:32.665645 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 09:13:32.665703 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 09:13:32.665713 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 09:13:32.665720 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 09:13:32.666466 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 09:13:32.666485 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Feb 9 09:13:32.666495 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 09:13:32.666503 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 9 09:13:32.666511 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
Feb 9 09:13:32.666520 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:83 Feb 9 09:13:32.666599 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 09:13:32.670620 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 09:13:32.670631 kernel: ata1.00: Features: NCQ-prio Feb 9 09:13:32.670639 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 09:13:32.670646 kernel: ata2.00: Features: NCQ-prio Feb 9 09:13:32.674565 kernel: ata1.00: configured for UDMA/133 Feb 9 09:13:32.675565 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Feb 9 09:13:32.675646 kernel: ata2.00: configured for UDMA/133 Feb 9 09:13:32.675655 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Feb 9 09:13:32.721235 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 09:13:32.721311 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 9 09:13:32.969216 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 09:13:32.983572 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 9 09:13:32.983677 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 09:13:33.011882 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 09:13:33.047772 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 09:13:33.047847 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 09:13:33.047906 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 09:13:33.076921 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 09:13:33.076993 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 9 09:13:33.104297 kernel: hub 1-0:1.0: USB hub found Feb 9 09:13:33.104382 kernel: hub 1-0:1.0: 16 ports detected Feb 9 09:13:33.129447 kernel: hub 2-0:1.0: USB hub found Feb 9 09:13:33.129531 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 9 09:13:33.129595 kernel: hub 2-0:1.0: 10 ports detected Feb 9 09:13:33.156618 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:13:33.156633 kernel: usb: port power management may be unreliable Feb 9 09:13:33.169381 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 09:13:33.197166 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 09:13:33.197249 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 09:13:33.206603 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Feb 9 09:13:33.206679 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 09:13:33.230597 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 9 09:13:33.230678 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 9 09:13:33.230737 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 09:13:33.230795 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 09:13:33.279592 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 09:13:33.279669 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 09:13:33.279732 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 09:13:33.279740 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 09:13:33.279796 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 09:13:33.370707 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 9 09:13:33.370787 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 9 09:13:33.378613 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 09:13:33.419609 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:13:33.450605 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:13:33.450623 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:13:33.479569 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 09:13:33.505571 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 09:13:33.506962 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 09:13:33.633794 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by (udev-worker) (523) Feb 9 09:13:33.633815 kernel: port_module: 9 callbacks suppressed Feb 9 09:13:33.633903 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 9 09:13:33.634002 kernel: hub 1-14:1.0: USB hub found Feb 9 09:13:33.634073 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 09:13:33.634130 kernel: hub 1-14:1.0: 4 ports detected Feb 9 09:13:33.612675 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:13:33.617540 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:13:33.652715 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:13:33.653725 systemd[1]: Reached target initrd-root-device.target. Feb 9 09:13:33.669352 systemd[1]: Starting disk-uuid.service... Feb 9 09:13:33.817206 kernel: audit: type=1130 audit(1707470013.706:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:33.817223 kernel: audit: type=1131 audit(1707470013.706:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:33.817230 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 09:13:33.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:33.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:33.692103 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 09:13:33.845716 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Feb 9 09:13:33.692249 systemd[1]: Finished disk-uuid.service. Feb 9 09:13:33.891725 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Feb 9 09:13:33.891804 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 09:13:33.706999 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:13:33.825653 systemd[1]: Reached target local-fs.target. Feb 9 09:13:33.984546 kernel: audit: type=1130 audit(1707470013.920:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:13:33.984566 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 9 09:13:33.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:33.825688 systemd[1]: Reached target sysinit.target. Feb 9 09:13:33.860846 systemd[1]: Reached target basic.target. Feb 9 09:13:33.872314 systemd[1]: Starting verity-setup.service... Feb 9 09:13:33.900379 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:13:33.922904 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:13:33.992664 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:13:34.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.100572 kernel: audit: type=1130 audit(1707470014.054:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:33.992698 systemd[1]: Reached target remote-fs.target. Feb 9 09:13:34.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.020082 systemd[1]: Starting dracut-pre-mount.service... Feb 9 09:13:34.175696 kernel: audit: type=1130 audit(1707470014.108:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.175708 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 09:13:34.039142 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:13:34.202691 kernel: usbcore: registered new interface driver usbhid Feb 9 09:13:34.054926 systemd[1]: Finished verity-setup.service. Feb 9 09:13:34.266323 kernel: usbhid: USB HID core driver Feb 9 09:13:34.266336 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 09:13:34.110032 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:13:34.162965 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:13:34.185774 systemd[1]: Starting systemd-fsck-root.service... Feb 9 09:13:34.294948 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:13:34.295640 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 09:13:34.297549 systemd-fsck[720]: ROOT: clean, 641/553520 files, 133739/553472 blocks Feb 9 09:13:34.316077 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:13:34.524799 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 09:13:34.524886 kernel: audit: type=1130 audit(1707470014.335:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.524895 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
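The verity-setup.service and device-mapper verity lines above reflect the initrd opening the read-only /usr device (/dev/mapper/usr) against the verity.usrhash= root hash from the kernel command line. A rough sketch of the equivalent manual step, with hypothetical data and hash device paths (Flatcar derives the real ones from verity.usr=PARTUUID=...):

    # sketch only: open a dm-verity device as /dev/mapper/usr, then check it
    veritysetup open /dev/sdXn usr /dev/sdXm ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
    veritysetup status usr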
Feb 9 09:13:34.524905 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 09:13:34.524912 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 09:13:34.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.336100 systemd[1]: Mounting sysroot.mount... Feb 9 09:13:34.532203 systemd[1]: Mounted sysroot.mount. Feb 9 09:13:34.545814 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:13:34.561433 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:13:34.580602 systemd[1]: Mounted sysroot-usr.mount. Feb 9 09:13:34.591259 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:13:34.603811 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:13:34.705834 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 09:13:34.705848 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:13:34.705855 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:13:34.705862 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 09:13:34.696252 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:13:34.767664 kernel: audit: type=1130 audit(1707470014.714:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.715891 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:13:34.776197 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:13:34.783902 initrd-setup-root-after-ignition[805]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:13:34.799918 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:13:34.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.813864 systemd[1]: Reached target ignition-subsequent.target. Feb 9 09:13:34.887777 kernel: audit: type=1130 audit(1707470014.813:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.879085 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:13:34.892788 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:13:34.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.892828 systemd[1]: Finished initrd-parse-etc.service. 
Feb 9 09:13:34.985815 kernel: audit: type=1130 audit(1707470014.909:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.910123 systemd[1]: Reached target initrd-fs.target. Feb 9 09:13:34.972771 systemd[1]: Reached target initrd.target. Feb 9 09:13:35.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:34.972828 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:13:34.973162 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:13:34.992890 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:13:35.008137 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:13:35.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.025070 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:13:35.040840 systemd[1]: Stopped target timers.target. Feb 9 09:13:35.056020 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:13:35.056282 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:13:35.072440 systemd[1]: Stopped target initrd.target. Feb 9 09:13:35.086230 systemd[1]: Stopped target basic.target. Feb 9 09:13:35.102128 systemd[1]: Stopped target ignition-subsequent.target. Feb 9 09:13:35.118143 systemd[1]: Stopped target ignition-diskful-subsequent.target. Feb 9 09:13:35.137117 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:13:35.153115 systemd[1]: Stopped target paths.target. Feb 9 09:13:35.167114 systemd[1]: Stopped target remote-fs.target. Feb 9 09:13:35.182237 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:13:35.200115 systemd[1]: Stopped target slices.target. Feb 9 09:13:35.216110 systemd[1]: Stopped target sockets.target. Feb 9 09:13:35.233122 systemd[1]: Stopped target sysinit.target. Feb 9 09:13:35.248245 systemd[1]: Stopped target local-fs.target. Feb 9 09:13:35.264235 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:13:35.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.282122 systemd[1]: Stopped target swap.target. Feb 9 09:13:35.296075 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:13:35.296308 systemd[1]: Closed iscsid.socket. Feb 9 09:13:35.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.310153 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:13:35.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.310378 systemd[1]: Closed iscsiuio.socket. Feb 9 09:13:35.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:13:35.324117 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:13:35.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.324436 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:13:35.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.340317 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:13:35.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.356011 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:13:35.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.360829 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:13:35.373024 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:13:35.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.373360 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:13:35.390251 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:13:35.390590 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:13:35.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.407207 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:13:35.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.407519 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:13:35.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.425216 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:13:35.425526 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:13:35.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.441305 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:13:35.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.441646 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 09:13:35.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.459195 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:13:35.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.459509 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:13:35.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:35.475219 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:13:35.475535 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:13:35.490604 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:13:35.751000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:13:35.751000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:13:35.751000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:13:35.508172 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:13:35.751000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:13:35.751000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:13:35.508593 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:13:35.508656 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:13:35.806140 iscsid[445]: iscsid shutting down. Feb 9 09:13:35.524789 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:13:35.524961 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:13:35.538928 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:13:35.539011 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:13:35.553814 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:13:35.553925 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:13:35.571978 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:13:35.572092 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:13:35.588048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:13:35.588185 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:13:35.608759 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:13:35.621753 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 09:13:35.621782 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:13:35.636854 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:13:35.636883 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:13:35.652890 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:13:35.652945 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:13:35.672345 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Feb 9 09:13:35.673099 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:13:35.673212 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:13:35.806578 systemd-journald[269]: Received SIGTERM from PID 1 (n/a). Feb 9 09:13:35.687514 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:13:35.687739 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:13:35.706821 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:13:35.722499 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:13:35.743396 systemd[1]: Switching root. Feb 9 09:13:35.806739 systemd-journald[269]: Journal stopped Feb 9 09:13:39.672491 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:13:39.672505 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:13:39.672514 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:13:39.672520 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:13:39.672525 kernel: SELinux: policy capability open_perms=1 Feb 9 09:13:39.672530 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:13:39.672536 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:13:39.672542 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:13:39.672547 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:13:39.672553 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:13:39.672558 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:13:39.672568 systemd[1]: Successfully loaded SELinux policy in 299.237ms. Feb 9 09:13:39.672575 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.646ms. Feb 9 09:13:39.672582 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:13:39.672590 systemd[1]: Detected architecture x86-64. Feb 9 09:13:39.672597 systemd[1]: Detected first boot. Feb 9 09:13:39.672603 systemd[1]: Hostname set to <localhost>. Feb 9 09:13:39.672609 systemd[1]: Initializing machine ID from random generator. Feb 9 09:13:39.672621 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:13:39.672631 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:13:39.672642 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:13:39.672654 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:13:39.672663 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:13:39.672672 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:13:39.672679 systemd[1]: Unnecessary job was removed for dev-sda6.device. Feb 9 09:13:39.672685 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:13:39.672692 systemd[1]: Created slice system-addon\x2drun.slice.
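The warnings above about locksmithd.service and docker.socket come from legacy cgroup-v1 and /var/run directives in the shipped units; the proper fix is updating the units themselves to the current directives. A drop-in of roughly this shape illustrates the replacements (values are illustrative, not taken from the shipped units):

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf (sketch)
    [Service]
    CPUWeight=100     # cgroup-v2 replacement for CPUShares=
    MemoryMax=256M    # cgroup-v2 replacement for MemoryLimit=

    # /etc/systemd/system/docker.socket.d/10-run.conf (sketch)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock   # empty assignment clears the inherited /var/run path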
Feb 9 09:13:39.672699 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 09:13:39.672706 systemd[1]: Created slice system-getty.slice. Feb 9 09:13:39.672712 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:13:39.672718 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:13:39.672724 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:13:39.672730 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:13:39.672736 systemd[1]: Created slice user.slice. Feb 9 09:13:39.672742 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:13:39.672748 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:13:39.672755 systemd[1]: Set up automount boot.automount. Feb 9 09:13:39.672770 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:13:39.672781 systemd[1]: Reached target integritysetup.target. Feb 9 09:13:39.672790 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:13:39.672799 systemd[1]: Reached target remote-fs.target. Feb 9 09:13:39.672805 systemd[1]: Reached target slices.target. Feb 9 09:13:39.672812 systemd[1]: Reached target swap.target. Feb 9 09:13:39.672818 systemd[1]: Reached target torcx.target. Feb 9 09:13:39.672825 systemd[1]: Reached target veritysetup.target. Feb 9 09:13:39.672832 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:13:39.672838 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:13:39.672844 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:13:39.672850 kernel: kauditd_printk_skb: 39 callbacks suppressed Feb 9 09:13:39.672856 kernel: audit: type=1400 audit(1707470018.926:60): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:13:39.672863 kernel: audit: type=1335 audit(1707470018.926:61): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:13:39.672869 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:13:39.672876 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:13:39.672882 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:13:39.672889 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:13:39.672895 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:13:39.672903 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:13:39.672909 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:13:39.672916 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:13:39.672922 systemd[1]: Mounting media.mount... Feb 9 09:13:39.672929 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 09:13:39.672935 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:13:39.672942 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:13:39.672948 systemd[1]: Mounting tmp.mount... Feb 9 09:13:39.672959 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:13:39.672970 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:13:39.672977 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:13:39.672984 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:13:39.672990 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:13:39.672997 systemd[1]: Starting modprobe@drm.service... 
Feb 9 09:13:39.673003 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:13:39.673010 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:13:39.673016 kernel: fuse: init (API version 7.34) Feb 9 09:13:39.673022 systemd[1]: Starting modprobe@loop.service... Feb 9 09:13:39.673030 kernel: loop: module loaded Feb 9 09:13:39.673036 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:13:39.673043 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 09:13:39.673049 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 09:13:39.673055 systemd[1]: Starting systemd-journald.service... Feb 9 09:13:39.673062 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:13:39.673068 kernel: audit: type=1305 audit(1707470019.670:62): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:13:39.673075 systemd-journald[988]: Journal started Feb 9 09:13:39.673102 systemd-journald[988]: Runtime Journal (/run/log/journal/f23f7733895b4481a4d221d2bc6634cc) is 8.0M, max 640.1M, 632.1M free. Feb 9 09:13:38.926000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:13:38.926000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:13:39.670000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:13:39.670000 audit[988]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe2bb19e90 a2=4000 a3=7ffe2bb19f2c items=0 ppid=1 pid=988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:13:39.670000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:13:39.718617 kernel: audit: type=1300 audit(1707470019.670:62): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe2bb19e90 a2=4000 a3=7ffe2bb19f2c items=0 ppid=1 pid=988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:13:39.718632 kernel: audit: type=1327 audit(1707470019.670:62): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:13:39.832744 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:13:39.858743 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:13:39.883609 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:13:39.926614 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 09:13:39.945750 systemd[1]: Started systemd-journald.service. Feb 9 09:13:39.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:39.954278 systemd[1]: Mounted dev-hugepages.mount. 
Feb 9 09:13:40.001625 kernel: audit: type=1130 audit(1707470019.953:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.008820 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:13:40.015818 systemd[1]: Mounted media.mount. Feb 9 09:13:40.022814 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:13:40.031801 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:13:40.039801 systemd[1]: Mounted tmp.mount. Feb 9 09:13:40.046907 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:13:40.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.054960 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:13:40.102608 kernel: audit: type=1130 audit(1707470020.054:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.110876 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:13:40.110955 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:13:40.160742 kernel: audit: type=1130 audit(1707470020.110:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.168900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:13:40.168973 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:13:40.219584 kernel: audit: type=1130 audit(1707470020.168:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.219603 kernel: audit: type=1131 audit(1707470020.168:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:13:40.279884 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:13:40.279957 systemd[1]: Finished modprobe@drm.service. Feb 9 09:13:40.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.288874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:13:40.288948 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:13:40.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.297873 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:13:40.297946 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:13:40.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.306877 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:13:40.306956 systemd[1]: Finished modprobe@loop.service. Feb 9 09:13:40.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.316930 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:13:40.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.325896 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:13:40.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.335921 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:13:40.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.345014 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:13:40.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.354064 systemd[1]: Reached target network-pre.target. Feb 9 09:13:40.364380 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:13:40.373287 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:13:40.380759 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:13:40.381873 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:13:40.389254 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:13:40.393495 systemd-journald[988]: Time spent on flushing to /var/log/journal/f23f7733895b4481a4d221d2bc6634cc is 10.371ms for 1196 entries. Feb 9 09:13:40.393495 systemd-journald[988]: System Journal (/var/log/journal/f23f7733895b4481a4d221d2bc6634cc) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:13:40.431287 systemd-journald[988]: Received client request to flush runtime journal. Feb 9 09:13:40.405688 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:13:40.406204 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:13:40.424710 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:13:40.425267 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:13:40.432321 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:13:40.439216 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:13:40.446839 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:13:40.454752 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:13:40.462825 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:13:40.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.470857 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:13:40.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.478791 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:13:40.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.486840 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:13:40.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.495783 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:13:40.504376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:13:40.513862 udevadm[1016]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 09:13:40.523778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
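The journald lines above show the runtime journal (8.0M in /run/log/journal) being flushed to the persistent system journal (capped near 195.6M under /var/log/journal). Both can be inspected with standard journalctl invocations, and the cap tuned in journald.conf; a sketch:

    journalctl --disk-usage   # size of archived and active journal files
    journalctl --flush        # what systemd-journal-flush.service triggers
    # /etc/systemd/journald.conf (illustrative cap)
    [Journal]
    SystemMaxUse=200M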
Feb 9 09:13:40.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.684698 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:13:40.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.694498 systemd[1]: Starting systemd-udevd.service... Feb 9 09:13:40.706401 systemd-udevd[1024]: Using default interface naming scheme 'v252'. Feb 9 09:13:40.726276 systemd[1]: Started systemd-udevd.service. Feb 9 09:13:40.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:40.738761 systemd[1]: Found device dev-ttyS1.device. Feb 9 09:13:40.783222 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 09:13:40.783290 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 09:13:40.804233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:13:40.806233 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 09:13:40.819864 systemd[1]: Starting systemd-networkd.service... Feb 9 09:13:40.827569 kernel: IPMI message handler: version 39.2 Feb 9 09:13:40.827610 kernel: ACPI: button: Power Button [PWRF] Feb 9 09:13:40.869578 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:13:40.778000 audit[1096]: AVC avc: denied { confidentiality } for pid=1096 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:13:40.871099 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:13:40.898147 systemd[1]: Started systemd-userdbd.service. 
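systemd-udevd announces the default interface naming scheme 'v252', which is what later produces predictable names such as enp1s0f0np0 (PCI bus/slot/function position plus a physical-port suffix). How a scheme names a particular device can be checked with udevadm's net_id builtin, for example (device path chosen for illustration):

    udevadm test-builtin net_id /sys/class/net/enp1s0f0np0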
Feb 9 09:13:40.778000 audit[1096]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7efe084d3010 a1=4d8bc a2=7efe0a180bc5 a3=5 items=42 ppid=1024 pid=1096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:13:40.778000 audit: CWD cwd="/" Feb 9 09:13:40.778000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=1 name=(null) inode=13431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=2 name=(null) inode=13431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=3 name=(null) inode=13432 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=4 name=(null) inode=13431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=5 name=(null) inode=13433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=6 name=(null) inode=13431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=7 name=(null) inode=13434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=8 name=(null) inode=13434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=9 name=(null) inode=13435 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=10 name=(null) inode=13434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=11 name=(null) inode=13436 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=12 name=(null) inode=13434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=13 name=(null) inode=13437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=14 name=(null) inode=13434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=15 name=(null) inode=13438 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=16 name=(null) inode=13434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=17 name=(null) inode=13439 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=18 name=(null) inode=13431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=19 name=(null) inode=13440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=20 name=(null) inode=13440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=21 name=(null) inode=13441 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=22 name=(null) inode=13440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=23 name=(null) inode=13442 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=24 name=(null) inode=13440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=25 name=(null) inode=13443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=26 name=(null) inode=13440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=27 name=(null) inode=13444 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=28 name=(null) inode=13440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=29 name=(null) inode=13445 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=30 name=(null) inode=13431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 
audit: PATH item=31 name=(null) inode=13446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=32 name=(null) inode=13446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=33 name=(null) inode=13447 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=34 name=(null) inode=13446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=35 name=(null) inode=13448 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=36 name=(null) inode=13446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=37 name=(null) inode=13449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=38 name=(null) inode=13446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=39 name=(null) inode=13450 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=40 name=(null) inode=13446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PATH item=41 name=(null) inode=13451 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:13:40.778000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:13:40.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:13:40.922568 kernel: ipmi device interface Feb 9 09:13:40.922590 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 09:13:40.964588 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 09:13:40.986569 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 09:13:41.014635 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 09:13:41.014727 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 09:13:41.090571 kernel: ipmi_si: IPMI System Interface driver Feb 9 09:13:41.090601 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 09:13:41.090691 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 09:13:41.133516 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 09:13:41.133569 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 09:13:41.133599 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 09:13:41.199072 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 09:13:41.222751 systemd-networkd[1105]: bond0: netdev ready Feb 9 09:13:41.224858 systemd-networkd[1105]: lo: Link UP Feb 9 09:13:41.224861 systemd-networkd[1105]: lo: Gained carrier Feb 9 09:13:41.225173 systemd-networkd[1105]: Enumeration completed Feb 9 09:13:41.225254 systemd[1]: Started systemd-networkd.service. Feb 9 09:13:41.225462 systemd-networkd[1105]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 9 09:13:41.226158 systemd-networkd[1105]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:f6:b1.network. Feb 9 09:13:41.246028 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 9 09:13:41.246127 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 9 09:13:41.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:41.295965 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 09:13:41.296059 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 09:13:41.296090 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 09:13:41.395842 kernel: intel_rapl_common: Found RAPL domain package Feb 9 09:13:41.395880 kernel: intel_rapl_common: Found RAPL domain core Feb 9 09:13:41.395894 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 09:13:41.396000 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 09:13:41.434645 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 9 09:13:41.441623 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 09:13:41.441646 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 9 09:13:41.446697 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 9 09:13:41.457375 systemd-networkd[1105]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:f6:b0.network. 
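The enumeration above shows systemd-networkd assembling bond0: each mlx5 port is matched by MAC through its 10-<mac>.network file and enslaved to the bond configured by 05-bond0.network. The log never prints those files; a minimal sketch of configuration that would produce this behavior, assuming an LACP bond as the kernel's 802.3ad messages suggest (filenames and options here are assumptions):

    # bond0.netdev (sketch)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad

    # 10-0c:42:a1:97:f6:b0.network (sketch)
    [Match]
    MACAddress=0c:42:a1:97:f6:b0

    [Network]
    Bond=bond0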
Feb 9 09:13:41.518629 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 09:13:41.584635 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 09:13:41.656640 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 09:13:41.656703 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 9 09:13:41.701729 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 9 09:13:41.727567 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 09:13:41.732975 systemd-networkd[1105]: bond0: Link UP Feb 9 09:13:41.733197 systemd-networkd[1105]: enp1s0f1np1: Link UP Feb 9 09:13:41.733343 systemd-networkd[1105]: enp1s0f0np0: Link UP Feb 9 09:13:41.733451 systemd-networkd[1105]: enp1s0f1np1: Gained carrier Feb 9 09:13:41.734479 systemd-networkd[1105]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:f6:b0.network. Feb 9 09:13:41.785996 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 09:13:41.786025 kernel: bond0: active interface up! Feb 9 09:13:41.813567 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 9 09:13:41.832811 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:13:41.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:41.842379 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:13:41.858805 lvm[1132]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:13:41.894012 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:13:41.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:41.903709 systemd[1]: Reached target cryptsetup.target. Feb 9 09:13:41.912250 systemd[1]: Starting lvm2-activation.service... Feb 9 09:13:41.914316 lvm[1134]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:13:41.938653 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:41.961611 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:41.984566 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:41.986024 systemd[1]: Finished lvm2-activation.service. Feb 9 09:13:42.005567 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:42.023766 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:13:42.028566 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.045609 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:13:42.045623 systemd[1]: Reached target local-fs.target. 
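The lvm2 warnings above ("Failed to connect to lvmetad. Falling back to device scanning.") are benign: when the lvmetad caching daemon is absent, LVM scans block devices directly. On LVM releases that still include lvmetad, the fallback can be made the explicit default (sketch; this key was dropped in later LVM versions):

    # /etc/lvm/lvm.conf (sketch)
    global {
        use_lvmetad = 0
    }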
Feb 9 09:13:42.050567 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.067608 systemd[1]: Reached target machines.target. Feb 9 09:13:42.071565 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.090371 systemd[1]: Starting ldconfig.service... Feb 9 09:13:42.093566 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.114257 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:13:42.114278 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:13:42.114586 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.114892 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:13:42.131097 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:13:42.135585 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.153223 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:13:42.155568 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.155651 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:13:42.155672 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:13:42.156277 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:13:42.156486 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1137 (bootctl) Feb 9 09:13:42.157132 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:13:42.175620 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.180716 systemd-tmpfiles[1141]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:13:42.182273 systemd-tmpfiles[1141]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:13:42.184447 systemd-tmpfiles[1141]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:13:42.195567 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.196381 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:13:42.196752 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:13:42.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:42.215086 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:13:42.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:13:42.215568 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.235612 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.236150 systemd-networkd[1105]: bond0: Gained carrier Feb 9 09:13:42.236267 systemd-networkd[1105]: enp1s0f0np0: Gained carrier Feb 9 09:13:42.256873 systemd-fsck[1147]: fsck.fat 4.2 (2021-01-31) Feb 9 09:13:42.256873 systemd-fsck[1147]: /dev/sda1: 789 files, 115332/258078 clusters Feb 9 09:13:42.261785 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:13:42.269922 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:13:42.269956 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Feb 9 09:13:42.271932 systemd-networkd[1105]: enp1s0f1np1: Link DOWN Feb 9 09:13:42.271935 systemd-networkd[1105]: enp1s0f1np1: Lost carrier Feb 9 09:13:42.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:42.281474 systemd[1]: Mounting boot.mount... Feb 9 09:13:42.302095 systemd[1]: Mounted boot.mount. Feb 9 09:13:42.320425 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:13:42.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:42.355613 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:13:42.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:13:42.364421 systemd[1]: Starting audit-rules.service... Feb 9 09:13:42.372240 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:13:42.380000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:13:42.380000 audit[1174]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff8c3804a0 a2=420 a3=0 items=0 ppid=1157 pid=1174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:13:42.380000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:13:42.381234 augenrules[1174]: No rules Feb 9 09:13:42.381287 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:13:42.390455 systemd[1]: Starting systemd-resolved.service... Feb 9 09:13:42.397430 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:13:42.405220 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:13:42.411561 systemd[1]: Finished audit-rules.service. Feb 9 09:13:42.413072 ldconfig[1136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:13:42.418848 systemd[1]: Finished ldconfig.service. Feb 9 09:13:42.425836 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:13:42.433953 systemd[1]: Finished systemd-journal-catalog-update.service. 
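audit-rules.service above loads rules via auditctl; the PROCTITLE hex decodes to "/sbin/auditctl -R /etc/audit/audit.rules", and augenrules reports "No rules", meaning the compiled rules file is empty on this host. Purely for illustration, a populated rules fragment would look like:

    # /etc/audit/rules.d/50-example.rules (illustrative, not present here)
    -D
    -b 8192
    -w /etc/passwd -p wa -k identity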
Feb 9 09:13:42.449620 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 9 09:13:42.451229 systemd[1]: Starting systemd-update-done.service... Feb 9 09:13:42.464669 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:13:42.465304 systemd[1]: Finished systemd-update-done.service. Feb 9 09:13:42.468612 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Feb 9 09:13:42.469179 systemd-networkd[1105]: enp1s0f1np1: Link UP Feb 9 09:13:42.469376 systemd-networkd[1105]: enp1s0f1np1: Gained carrier Feb 9 09:13:42.476648 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:13:42.492620 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Feb 9 09:13:42.510606 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 09:13:42.526011 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:13:42.528463 systemd-resolved[1181]: Positive Trust Anchors: Feb 9 09:13:42.528469 systemd-resolved[1181]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:13:42.528488 systemd-resolved[1181]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:13:42.534057 systemd[1]: Reached target time-set.target. Feb 9 09:13:42.547171 systemd-resolved[1181]: Using system hostname 'ci-3510.3.2-a-afd9ebe59c'. Feb 9 09:13:42.548284 systemd[1]: Started systemd-resolved.service. Feb 9 09:13:42.556673 systemd[1]: Reached target network.target. Feb 9 09:13:42.564655 systemd[1]: Reached target nss-lookup.target. Feb 9 09:13:42.572652 systemd[1]: Reached target sysinit.target. Feb 9 09:13:42.580688 systemd[1]: Started motdgen.path. Feb 9 09:13:42.587664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:13:42.597703 systemd[1]: Started logrotate.timer. Feb 9 09:13:42.604682 systemd[1]: Started mdadm.timer. Feb 9 09:13:42.611636 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:13:42.619640 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:13:42.619657 systemd[1]: Reached target paths.target. Feb 9 09:13:42.626631 systemd[1]: Reached target timers.target. Feb 9 09:13:42.633759 systemd[1]: Listening on dbus.socket. Feb 9 09:13:42.641265 systemd[1]: Starting docker.socket... Feb 9 09:13:42.648326 systemd[1]: Listening on sshd.socket. Feb 9 09:13:42.654696 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:13:42.654883 systemd[1]: Listening on docker.socket. Feb 9 09:13:42.661686 systemd[1]: Reached target sockets.target. Feb 9 09:13:42.669646 systemd[1]: Reached target basic.target. Feb 9 09:13:42.676706 systemd[1]: System is tainted: cgroupsv1 Feb 9 09:13:42.676730 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Feb 9 09:13:42.676742 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:13:42.677241 systemd[1]: Starting containerd.service... Feb 9 09:13:42.684081 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 09:13:42.693161 systemd[1]: Starting coreos-metadata.service... Feb 9 09:13:42.701157 systemd[1]: Starting dbus.service... Feb 9 09:13:42.708167 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:13:42.713540 jq[1201]: false Feb 9 09:13:42.715441 systemd[1]: Starting extend-filesystems.service... Feb 9 09:13:42.722619 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:13:42.722969 dbus-daemon[1198]: [system] SELinux support is enabled Feb 9 09:13:42.723256 systemd[1]: Starting motdgen.service... Feb 9 09:13:42.723912 extend-filesystems[1203]: Found sda Feb 9 09:13:42.753651 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 9 09:13:42.730265 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:13:42.753715 coreos-metadata[1194]: Feb 09 09:13:42.739 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 09:13:42.753856 coreos-metadata[1195]: Feb 09 09:13:42.739 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 09:13:42.753973 extend-filesystems[1203]: Found sda1 Feb 9 09:13:42.753973 extend-filesystems[1203]: Found sda2 Feb 9 09:13:42.753973 extend-filesystems[1203]: Found sda3 Feb 9 09:13:42.753973 extend-filesystems[1203]: Found usr Feb 9 09:13:42.753973 extend-filesystems[1203]: Found sda4 Feb 9 09:13:42.753973 extend-filesystems[1203]: Found sda6 Feb 9 09:13:42.753973 extend-filesystems[1203]: Found sda7 Feb 9 09:13:42.753973 extend-filesystems[1203]: Found sda9 Feb 9 09:13:42.753973 extend-filesystems[1203]: Checking size of /dev/sda9 Feb 9 09:13:42.753973 extend-filesystems[1203]: Resized partition /dev/sda9 Feb 9 09:13:42.761485 systemd[1]: Starting prepare-critools.service... Feb 9 09:13:42.880761 extend-filesystems[1217]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:13:42.775294 systemd[1]: Starting prepare-helm.service... Feb 9 09:13:42.782265 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:13:42.794278 systemd[1]: Starting sshd-keygen.service... Feb 9 09:13:42.813694 systemd[1]: Starting systemd-logind.service... Feb 9 09:13:42.831640 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:13:42.896025 update_engine[1239]: I0209 09:13:42.883879 1239 main.cc:92] Flatcar Update Engine starting Feb 9 09:13:42.896025 update_engine[1239]: I0209 09:13:42.887024 1239 update_check_scheduler.cc:74] Next update check in 7m35s Feb 9 09:13:42.832254 systemd[1]: Starting tcsd.service... Feb 9 09:13:42.896207 jq[1240]: true Feb 9 09:13:42.839337 systemd[1]: Starting update-engine.service... Feb 9 09:13:42.839710 systemd-logind[1237]: Watching system buttons on /dev/input/event3 (Power Button) Feb 9 09:13:42.839720 systemd-logind[1237]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 09:13:42.839729 systemd-logind[1237]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 9 09:13:42.839823 systemd-logind[1237]: New seat seat0. Feb 9 09:13:42.857386 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Feb 9 09:13:42.873030 systemd[1]: Started dbus.service. Feb 9 09:13:42.889415 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:13:42.889541 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:13:42.889688 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:13:42.889796 systemd[1]: Finished motdgen.service. Feb 9 09:13:42.903871 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:13:42.903990 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:13:42.910967 tar[1244]: ./ Feb 9 09:13:42.910967 tar[1244]: ./macvlan Feb 9 09:13:42.914228 jq[1250]: false Feb 9 09:13:42.914373 tar[1245]: crictl Feb 9 09:13:42.914732 dbus-daemon[1198]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 09:13:42.915490 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Feb 9 09:13:42.915662 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service being skipped. Feb 9 09:13:42.915850 tar[1246]: linux-amd64/helm Feb 9 09:13:42.920402 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 09:13:42.920532 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 9 09:13:42.920609 systemd[1]: Started update-engine.service. Feb 9 09:13:42.923137 env[1251]: time="2024-02-09T09:13:42.923115576Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:13:42.932066 env[1251]: time="2024-02-09T09:13:42.932045312Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:13:42.932142 env[1251]: time="2024-02-09T09:13:42.932131033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:13:42.932784 env[1251]: time="2024-02-09T09:13:42.932768143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:13:42.932821 env[1251]: time="2024-02-09T09:13:42.932783030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:13:42.932943 env[1251]: time="2024-02-09T09:13:42.932930747Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:13:42.932980 env[1251]: time="2024-02-09T09:13:42.932942376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:13:42.932980 env[1251]: time="2024-02-09T09:13:42.932953557Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:13:42.932980 env[1251]: time="2024-02-09T09:13:42.932962707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:13:42.933056 env[1251]: time="2024-02-09T09:13:42.933020295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:13:42.933173 env[1251]: time="2024-02-09T09:13:42.933163192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:13:42.933268 env[1251]: time="2024-02-09T09:13:42.933255732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:13:42.933302 env[1251]: time="2024-02-09T09:13:42.933267381Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:13:42.933331 env[1251]: time="2024-02-09T09:13:42.933305126Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:13:42.933331 env[1251]: time="2024-02-09T09:13:42.933316584Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:13:42.934490 systemd[1]: Started systemd-logind.service. Feb 9 09:13:42.944174 systemd[1]: Started locksmithd.service. Feb 9 09:13:42.950685 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:13:42.950763 systemd[1]: Reached target system-config.target. Feb 9 09:13:42.951316 tar[1244]: ./static Feb 9 09:13:42.958545 env[1251]: time="2024-02-09T09:13:42.958523548Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:13:42.958591 env[1251]: time="2024-02-09T09:13:42.958552621Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:13:42.958591 env[1251]: time="2024-02-09T09:13:42.958568929Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:13:42.958644 env[1251]: time="2024-02-09T09:13:42.958595139Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:13:42.958644 env[1251]: time="2024-02-09T09:13:42.958610114Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:13:42.958644 env[1251]: time="2024-02-09T09:13:42.958622639Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:13:42.958644 env[1251]: time="2024-02-09T09:13:42.958633787Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:13:42.958743 env[1251]: time="2024-02-09T09:13:42.958646909Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:13:42.958743 env[1251]: time="2024-02-09T09:13:42.958659466Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:13:42.958743 env[1251]: time="2024-02-09T09:13:42.958671764Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:13:42.958743 env[1251]: time="2024-02-09T09:13:42.958683655Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 9 09:13:42.958743 env[1251]: time="2024-02-09T09:13:42.958694921Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:13:42.958674 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:13:42.958917 env[1251]: time="2024-02-09T09:13:42.958769803Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:13:42.958917 env[1251]: time="2024-02-09T09:13:42.958829935Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:13:42.958777 systemd[1]: Reached target user-config.target. Feb 9 09:13:42.959059 env[1251]: time="2024-02-09T09:13:42.959020394Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:13:42.959059 env[1251]: time="2024-02-09T09:13:42.959037479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959059 env[1251]: time="2024-02-09T09:13:42.959045481Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:13:42.959126 env[1251]: time="2024-02-09T09:13:42.959073049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959126 env[1251]: time="2024-02-09T09:13:42.959081378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959126 env[1251]: time="2024-02-09T09:13:42.959088774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959126 env[1251]: time="2024-02-09T09:13:42.959095599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959126 env[1251]: time="2024-02-09T09:13:42.959103133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959126 env[1251]: time="2024-02-09T09:13:42.959109888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959126 env[1251]: time="2024-02-09T09:13:42.959117562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959126 env[1251]: time="2024-02-09T09:13:42.959124302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959261 env[1251]: time="2024-02-09T09:13:42.959133813Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:13:42.959261 env[1251]: time="2024-02-09T09:13:42.959198832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959261 env[1251]: time="2024-02-09T09:13:42.959211109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959261 env[1251]: time="2024-02-09T09:13:42.959218867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959261 env[1251]: time="2024-02-09T09:13:42.959227186Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 9 09:13:42.959261 env[1251]: time="2024-02-09T09:13:42.959236357Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:13:42.959261 env[1251]: time="2024-02-09T09:13:42.959242231Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:13:42.959261 env[1251]: time="2024-02-09T09:13:42.959251896Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:13:42.959392 env[1251]: time="2024-02-09T09:13:42.959272714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 09:13:42.959411 env[1251]: time="2024-02-09T09:13:42.959386472Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959422417Z" level=info msg="Connect containerd service" Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959448148Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959743594Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:13:42.961910 env[1251]: 
time="2024-02-09T09:13:42.959840205Z" level=info msg="Start subscribing containerd event" Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959876396Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959877667Z" level=info msg="Start recovering state" Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959909629Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959924259Z" level=info msg="Start event monitor" Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959935694Z" level=info msg="Start snapshots syncer" Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959942229Z" level=info msg="containerd successfully booted in 0.037158s" Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959950417Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:13:42.961910 env[1251]: time="2024-02-09T09:13:42.959960044Z" level=info msg="Start streaming server" Feb 9 09:13:42.968102 systemd[1]: Started containerd.service. Feb 9 09:13:42.973714 tar[1244]: ./vlan Feb 9 09:13:42.998765 tar[1244]: ./portmap Feb 9 09:13:43.024111 tar[1244]: ./host-local Feb 9 09:13:43.039171 locksmithd[1273]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:13:43.046808 tar[1244]: ./vrf Feb 9 09:13:43.069218 tar[1244]: ./bridge Feb 9 09:13:43.096072 tar[1244]: ./tuning Feb 9 09:13:43.118932 tar[1244]: ./firewall Feb 9 09:13:43.147241 tar[1244]: ./host-device Feb 9 09:13:43.174703 tar[1244]: ./sbr Feb 9 09:13:43.196919 tar[1244]: ./loopback Feb 9 09:13:43.197840 sshd_keygen[1236]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:13:43.209719 systemd[1]: Finished sshd-keygen.service. Feb 9 09:13:43.210692 tar[1246]: linux-amd64/LICENSE Feb 9 09:13:43.210738 tar[1246]: linux-amd64/README.md Feb 9 09:13:43.218986 tar[1244]: ./dhcp Feb 9 09:13:43.219010 systemd[1]: Starting issuegen.service... Feb 9 09:13:43.225916 systemd[1]: Finished prepare-helm.service. Feb 9 09:13:43.233859 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:13:43.233976 systemd[1]: Finished issuegen.service. Feb 9 09:13:43.241995 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:13:43.243619 systemd-networkd[1105]: bond0: Gained IPv6LL Feb 9 09:13:43.243852 systemd-timesyncd[1183]: Network configuration changed, trying to establish connection. Feb 9 09:13:43.249996 systemd[1]: Finished prepare-critools.service. Feb 9 09:13:43.257898 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:13:43.266468 systemd[1]: Started getty@tty1.service. Feb 9 09:13:43.274360 systemd[1]: Started serial-getty@ttyS1.service. Feb 9 09:13:43.278864 tar[1244]: ./ptp Feb 9 09:13:43.282712 systemd[1]: Reached target getty.target. Feb 9 09:13:43.303726 tar[1244]: ./ipvlan Feb 9 09:13:43.327970 tar[1244]: ./bandwidth Feb 9 09:13:43.355541 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:13:43.422569 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 9 09:13:43.448462 extend-filesystems[1217]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 9 09:13:43.448462 extend-filesystems[1217]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 09:13:43.448462 extend-filesystems[1217]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. 
Feb 9 09:13:43.487733 extend-filesystems[1203]: Resized filesystem in /dev/sda9 Feb 9 09:13:43.487733 extend-filesystems[1203]: Found sdb Feb 9 09:13:43.448902 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:13:43.449018 systemd[1]: Finished extend-filesystems.service. Feb 9 09:13:43.628459 systemd-timesyncd[1183]: Network configuration changed, trying to establish connection. Feb 9 09:13:43.628769 systemd-timesyncd[1183]: Network configuration changed, trying to establish connection. Feb 9 09:13:43.831767 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 09:13:48.294862 login[1304]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:13:48.303518 systemd-logind[1237]: New session 1 of user core. Feb 9 09:13:48.303599 login[1303]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:13:48.304099 systemd[1]: Created slice user-500.slice. Feb 9 09:13:48.304601 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:13:48.307099 systemd-logind[1237]: New session 2 of user core. Feb 9 09:13:48.310554 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:13:48.311273 systemd[1]: Starting user@500.service... Feb 9 09:13:48.313337 (systemd)[1319]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:13:48.386625 systemd[1319]: Queued start job for default target default.target. Feb 9 09:13:48.387105 systemd[1319]: Reached target paths.target. Feb 9 09:13:48.387161 systemd[1319]: Reached target sockets.target. Feb 9 09:13:48.387202 systemd[1319]: Reached target timers.target. Feb 9 09:13:48.387237 systemd[1319]: Reached target basic.target. Feb 9 09:13:48.387328 systemd[1319]: Reached target default.target. Feb 9 09:13:48.387390 systemd[1319]: Startup finished in 71ms. Feb 9 09:13:48.387561 systemd[1]: Started user@500.service. Feb 9 09:13:48.390083 systemd[1]: Started session-1.scope. Feb 9 09:13:48.391694 systemd[1]: Started session-2.scope. Feb 9 09:13:48.876973 coreos-metadata[1195]: Feb 09 09:13:48.876 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 09:13:48.877901 coreos-metadata[1194]: Feb 09 09:13:48.876 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 09:13:49.834572 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 9 09:13:49.834775 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 9 09:13:49.877102 coreos-metadata[1194]: Feb 09 09:13:49.877 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 09:13:49.877201 coreos-metadata[1195]: Feb 09 09:13:49.877 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 09:13:49.898502 coreos-metadata[1195]: Feb 09 09:13:49.898 INFO Fetch successful Feb 9 09:13:49.898666 coreos-metadata[1194]: Feb 09 09:13:49.898 INFO Fetch successful Feb 9 09:13:49.923394 systemd[1]: Finished coreos-metadata.service. Feb 9 09:13:49.924124 unknown[1194]: wrote ssh authorized keys file for user: core Feb 9 09:13:49.924470 systemd[1]: Started packet-phone-home.service. 
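extend-filesystems grew the root filesystem online from 553472 to 116605649 4k blocks, first resizing the sda9 partition and then the ext4 filesystem on it; the filesystem step is equivalent to running resize2fs against the mounted device (equivalent command, not quoted from the log):

    resize2fs /dev/sda9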
Feb 9 09:13:49.930940 curl[1346]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 09:13:49.930940 curl[1346]: Dload Upload Total Spent Left Speed Feb 9 09:13:49.941597 update-ssh-keys[1348]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:13:49.942623 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 09:13:49.943647 systemd[1]: Reached target multi-user.target. Feb 9 09:13:49.947009 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:13:49.951049 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:13:49.951152 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:13:49.951295 systemd[1]: Startup finished in 8.110s (kernel) + 13.856s (userspace) = 21.967s. Feb 9 09:13:51.153367 curl[1346]: 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 Feb 9 09:13:51.155346 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 09:13:56.609341 systemd[1]: Created slice system-sshd.slice. Feb 9 09:13:56.610004 systemd[1]: Started sshd@0-139.178.90.101:22-147.75.109.163:42878.service. Feb 9 09:13:56.674370 sshd[1354]: Accepted publickey for core from 147.75.109.163 port 42878 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:13:56.675140 sshd[1354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:13:56.678185 systemd-logind[1237]: New session 3 of user core. Feb 9 09:13:56.678895 systemd[1]: Started session-3.scope. Feb 9 09:13:56.730179 systemd[1]: Started sshd@1-139.178.90.101:22-147.75.109.163:42894.service. Feb 9 09:13:56.761528 sshd[1359]: Accepted publickey for core from 147.75.109.163 port 42894 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:13:56.762212 sshd[1359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:13:56.764385 systemd-logind[1237]: New session 4 of user core. Feb 9 09:13:56.764993 systemd[1]: Started session-4.scope. Feb 9 09:13:56.816283 sshd[1359]: pam_unix(sshd:session): session closed for user core Feb 9 09:13:56.819197 systemd[1]: Started sshd@2-139.178.90.101:22-147.75.109.163:42910.service. Feb 9 09:13:56.819979 systemd[1]: sshd@1-139.178.90.101:22-147.75.109.163:42894.service: Deactivated successfully. Feb 9 09:13:56.821153 systemd-logind[1237]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:13:56.821158 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:13:56.822339 systemd-logind[1237]: Removed session 4. Feb 9 09:13:56.853334 sshd[1365]: Accepted publickey for core from 147.75.109.163 port 42910 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:13:56.854170 sshd[1365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:13:56.857077 systemd-logind[1237]: New session 5 of user core. Feb 9 09:13:56.857784 systemd[1]: Started session-5.scope. Feb 9 09:13:56.909268 sshd[1365]: pam_unix(sshd:session): session closed for user core Feb 9 09:13:56.910870 systemd[1]: Started sshd@3-139.178.90.101:22-147.75.109.163:42914.service. Feb 9 09:13:56.911180 systemd[1]: sshd@2-139.178.90.101:22-147.75.109.163:42910.service: Deactivated successfully. Feb 9 09:13:56.911569 systemd-logind[1237]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:13:56.911656 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 09:13:56.912166 systemd-logind[1237]: Removed session 5. Feb 9 09:13:56.943124 sshd[1372]: Accepted publickey for core from 147.75.109.163 port 42914 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:13:56.944098 sshd[1372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:13:56.947527 systemd-logind[1237]: New session 6 of user core. Feb 9 09:13:56.948427 systemd[1]: Started session-6.scope. Feb 9 09:13:57.014174 sshd[1372]: pam_unix(sshd:session): session closed for user core Feb 9 09:13:57.020836 systemd[1]: Started sshd@4-139.178.90.101:22-147.75.109.163:42916.service. Feb 9 09:13:57.022550 systemd[1]: sshd@3-139.178.90.101:22-147.75.109.163:42914.service: Deactivated successfully. Feb 9 09:13:57.025047 systemd-logind[1237]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:13:57.025175 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:13:57.027728 systemd-logind[1237]: Removed session 6. Feb 9 09:13:57.074393 sshd[1379]: Accepted publickey for core from 147.75.109.163 port 42916 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:13:57.075054 sshd[1379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:13:57.077316 systemd-logind[1237]: New session 7 of user core. Feb 9 09:13:57.077884 systemd[1]: Started session-7.scope. Feb 9 09:13:57.151938 sudo[1384]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:13:57.152538 sudo[1384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:14:01.579902 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:14:01.584498 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:14:01.584739 systemd[1]: Reached target network-online.target. Feb 9 09:14:01.585568 systemd[1]: Starting docker.service... 
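The sshd@0-139.178.90.101:22-147.75.109.163:42878.service style unit names above come from socket activation with per-connection instances: sshd.socket accepts each TCP connection and spawns a transient sshd@.service instance named after the local and remote endpoints. The key option is Accept=yes, roughly (sketch of the relevant stanza, not the full Flatcar unit):

    # sshd.socket (sketch)
    [Socket]
    ListenStream=22
    Accept=yes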
Feb 9 09:14:01.604807 env[1405]: time="2024-02-09T09:14:01.604748858Z" level=info msg="Starting up"
Feb 9 09:14:01.605464 env[1405]: time="2024-02-09T09:14:01.605451074Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 09:14:01.605464 env[1405]: time="2024-02-09T09:14:01.605461628Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 09:14:01.605518 env[1405]: time="2024-02-09T09:14:01.605474325Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 09:14:01.605518 env[1405]: time="2024-02-09T09:14:01.605480777Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 09:14:01.606352 env[1405]: time="2024-02-09T09:14:01.606304533Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 09:14:01.606352 env[1405]: time="2024-02-09T09:14:01.606313730Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 09:14:01.606352 env[1405]: time="2024-02-09T09:14:01.606321412Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 09:14:01.606352 env[1405]: time="2024-02-09T09:14:01.606326408Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 09:14:02.038679 env[1405]: time="2024-02-09T09:14:02.038489919Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 9 09:14:02.038679 env[1405]: time="2024-02-09T09:14:02.038529811Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 9 09:14:02.039062 env[1405]: time="2024-02-09T09:14:02.038793941Z" level=info msg="Loading containers: start."
Feb 9 09:14:02.152620 kernel: Initializing XFRM netlink socket
Feb 9 09:14:02.173674 env[1405]: time="2024-02-09T09:14:02.173657019Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 09:14:02.174363 systemd-timesyncd[1183]: Network configuration changed, trying to establish connection.
Feb 9 09:14:02.215680 systemd-networkd[1105]: docker0: Link UP
Feb 9 09:14:02.222035 env[1405]: time="2024-02-09T09:14:02.222014235Z" level=info msg="Loading containers: done."
Feb 9 09:14:02.230227 env[1405]: time="2024-02-09T09:14:02.230196281Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 09:14:02.230415 env[1405]: time="2024-02-09T09:14:02.230392043Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 09:14:02.230528 env[1405]: time="2024-02-09T09:14:02.230511014Z" level=info msg="Daemon has completed initialization"
Feb 9 09:14:02.244282 systemd[1]: Started docker.service.
Feb 9 09:14:02.255067 env[1405]: time="2024-02-09T09:14:02.254980248Z" level=info msg="API listen on /run/docker.sock"
Feb 9 09:14:02.284298 systemd[1]: Reloading.
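The daemon's hint about --bip can be made permanent in /etc/docker/daemon.json rather than on the command line; a minimal sketch (the address shown is illustrative, not taken from this host):

  # /etc/docker/daemon.json -- pin the docker0 bridge address instead of the 172.17.0.0/16 default
  {
    "bip": "172.17.0.1/16"
  }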
Feb 9 09:14:02.337781 /usr/lib/systemd/system-generators/torcx-generator[1563]: time="2024-02-09T09:14:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:14:02.337806 /usr/lib/systemd/system-generators/torcx-generator[1563]: time="2024-02-09T09:14:02Z" level=info msg="torcx already run"
Feb 9 09:14:02.409009 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:14:02.409019 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:14:02.424729 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:14:02.437783 systemd-timesyncd[1183]: Contacted time server [2604:2dc0:101:200::e01]:123 (2.flatcar.pool.ntp.org).
Feb 9 09:14:02.437808 systemd-timesyncd[1183]: Initial clock synchronization to Fri 2024-02-09 09:14:02.400516 UTC.
Feb 9 09:14:02.473878 systemd[1]: Started kubelet.service.
Feb 9 09:14:03.063109 kubelet[1624]: E0209 09:14:03.062932 1624 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 09:14:03.068146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 09:14:03.068536 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 09:14:03.869306 env[1251]: time="2024-02-09T09:14:03.869156844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb 9 09:14:04.538014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588465511.mount: Deactivated successfully.
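The kubelet exit above is the stock validation error for a missing runtime endpoint. Since this node runs containerd, pointing the kubelet at the containerd socket would satisfy it; a sketch as a systemd drop-in (the drop-in path and the $KUBELET_EXTRA_ARGS convention are assumptions borrowed from kubeadm-style units, and only work if the unit's ExecStart expands that variable; the flag and socket path are the conventional ones):

  # /etc/systemd/system/kubelet.service.d/20-container-runtime.conf (hypothetical path)
  [Service]
  Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"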
Feb 9 09:14:06.751323 env[1251]: time="2024-02-09T09:14:06.751276702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:06.751929 env[1251]: time="2024-02-09T09:14:06.751891093Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:06.753001 env[1251]: time="2024-02-09T09:14:06.752964833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:06.754145 env[1251]: time="2024-02-09T09:14:06.754088083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:06.754691 env[1251]: time="2024-02-09T09:14:06.754634206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb 9 09:14:06.762790 env[1251]: time="2024-02-09T09:14:06.762739183Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 9 09:14:09.302267 env[1251]: time="2024-02-09T09:14:09.302212364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:09.302829 env[1251]: time="2024-02-09T09:14:09.302784674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:09.304342 env[1251]: time="2024-02-09T09:14:09.304308308Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:09.305261 env[1251]: time="2024-02-09T09:14:09.305210375Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:09.305729 env[1251]: time="2024-02-09T09:14:09.305687802Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 9 09:14:09.311472 env[1251]: time="2024-02-09T09:14:09.311458384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 9 09:14:10.795353 env[1251]: time="2024-02-09T09:14:10.795306120Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:10.796060 env[1251]: time="2024-02-09T09:14:10.796020435Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:10.797690 env[1251]: time="2024-02-09T09:14:10.797677418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:10.798787 env[1251]: time="2024-02-09T09:14:10.798760639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:10.799301 env[1251]: time="2024-02-09T09:14:10.799279718Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 9 09:14:10.804780 env[1251]: time="2024-02-09T09:14:10.804749394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 09:14:11.710402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739895548.mount: Deactivated successfully.
Feb 9 09:14:12.055059 env[1251]: time="2024-02-09T09:14:12.054977517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:12.055617 env[1251]: time="2024-02-09T09:14:12.055579485Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:12.056314 env[1251]: time="2024-02-09T09:14:12.056292557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:12.057293 env[1251]: time="2024-02-09T09:14:12.057280761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:12.057495 env[1251]: time="2024-02-09T09:14:12.057469212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 9 09:14:12.064407 env[1251]: time="2024-02-09T09:14:12.064392335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 09:14:12.607745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871517392.mount: Deactivated successfully.
Feb 9 09:14:12.608861 env[1251]: time="2024-02-09T09:14:12.608814252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:12.609398 env[1251]: time="2024-02-09T09:14:12.609357506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:12.610180 env[1251]: time="2024-02-09T09:14:12.610140649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:12.610914 env[1251]: time="2024-02-09T09:14:12.610872929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:12.611200 env[1251]: time="2024-02-09T09:14:12.611075941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 9 09:14:12.616216 env[1251]: time="2024-02-09T09:14:12.616202883Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 9 09:14:13.207676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 09:14:13.207817 systemd[1]: Stopped kubelet.service.
Feb 9 09:14:13.208873 systemd[1]: Started kubelet.service.
Feb 9 09:14:13.244216 kubelet[1715]: E0209 09:14:13.244187 1715 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 09:14:13.246561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 09:14:13.246678 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 09:14:13.294937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834377217.mount: Deactivated successfully.
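With the unit set to restart automatically (the counter above is already at 1), the crash loop can be watched from another session with standard systemd tooling:

  systemctl status kubelet --no-pager
  journalctl -u kubelet -f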
Feb 9 09:14:16.230844 env[1251]: time="2024-02-09T09:14:16.230790339Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:16.231444 env[1251]: time="2024-02-09T09:14:16.231410756Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:16.232201 env[1251]: time="2024-02-09T09:14:16.232156143Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:16.233036 env[1251]: time="2024-02-09T09:14:16.232994271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:16.233881 env[1251]: time="2024-02-09T09:14:16.233839590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 9 09:14:16.239266 env[1251]: time="2024-02-09T09:14:16.239252729Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 9 09:14:16.854464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148314099.mount: Deactivated successfully.
Feb 9 09:14:17.247797 env[1251]: time="2024-02-09T09:14:17.247681902Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:17.248360 env[1251]: time="2024-02-09T09:14:17.248323405Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:17.249532 env[1251]: time="2024-02-09T09:14:17.249464549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:17.250528 env[1251]: time="2024-02-09T09:14:17.250515218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:17.251040 env[1251]: time="2024-02-09T09:14:17.250987450Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 9 09:14:19.012916 systemd[1]: Stopped kubelet.service.
Feb 9 09:14:19.021274 systemd[1]: Reloading.
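By this point containerd has pulled the full v1.26 control-plane image set plus pause, etcd and CoreDNS; a quick verification from the node would be (standard crictl usage, socket path assumed):

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep registry.k8s.io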
Feb 9 09:14:19.057358 /usr/lib/systemd/system-generators/torcx-generator[1869]: time="2024-02-09T09:14:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:14:19.057374 /usr/lib/systemd/system-generators/torcx-generator[1869]: time="2024-02-09T09:14:19Z" level=info msg="torcx already run"
Feb 9 09:14:19.111161 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:14:19.111170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:14:19.123890 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:14:19.177046 systemd[1]: Started kubelet.service.
Feb 9 09:14:19.204640 kubelet[1932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 09:14:19.204640 kubelet[1932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 09:14:19.204942 kubelet[1932]: I0209 09:14:19.204707 1932 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 09:14:19.206283 kubelet[1932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 09:14:19.206283 kubelet[1932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 09:14:19.643423 kubelet[1932]: I0209 09:14:19.643382 1932 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 09:14:19.643423 kubelet[1932]: I0209 09:14:19.643393 1932 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 09:14:19.643531 kubelet[1932]: I0209 09:14:19.643527 1932 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 09:14:19.676430 kubelet[1932]: I0209 09:14:19.676346 1932 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 09:14:19.677768 kubelet[1932]: E0209 09:14:19.677704 1932 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.90.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.703280 kubelet[1932]: I0209 09:14:19.703262 1932 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 09:14:19.705875 kubelet[1932]: I0209 09:14:19.705838 1932 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 09:14:19.705906 kubelet[1932]: I0209 09:14:19.705879 1932 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 09:14:19.705906 kubelet[1932]: I0209 09:14:19.705889 1932 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 09:14:19.705906 kubelet[1932]: I0209 09:14:19.705895 1932 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 09:14:19.707166 kubelet[1932]: I0209 09:14:19.707131 1932 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 09:14:19.715221 kubelet[1932]: I0209 09:14:19.715186 1932 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 09:14:19.715221 kubelet[1932]: I0209 09:14:19.715196 1932 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 09:14:19.715221 kubelet[1932]: I0209 09:14:19.715213 1932 kubelet.go:297] "Adding apiserver pod source"
Feb 9 09:14:19.715221 kubelet[1932]: I0209 09:14:19.715220 1932 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 09:14:19.715376 kubelet[1932]: W0209 09:14:19.715320 1932 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.90.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-afd9ebe59c&limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.715376 kubelet[1932]: E0209 09:14:19.715357 1932 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.90.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-afd9ebe59c&limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.715447 kubelet[1932]: W0209 09:14:19.715404 1932 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.90.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.715447 kubelet[1932]: E0209 09:14:19.715423 1932 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.90.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.716975 kubelet[1932]: I0209 09:14:19.716936 1932 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 09:14:19.719217 kubelet[1932]: W0209 09:14:19.719178 1932 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 09:14:19.719818 kubelet[1932]: I0209 09:14:19.719778 1932 server.go:1186] "Started kubelet"
Feb 9 09:14:19.720033 kubelet[1932]: E0209 09:14:19.720020 1932 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 09:14:19.720033 kubelet[1932]: E0209 09:14:19.720034 1932 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 09:14:19.720888 kubelet[1932]: I0209 09:14:19.720874 1932 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 09:14:19.730796 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 09:14:19.730949 kubelet[1932]: E0209 09:14:19.730905 1932 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9063625d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 719763417, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 719763417, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://139.178.90.101:6443/api/v1/namespaces/default/events": dial tcp 139.178.90.101:6443: connect: connection refused'(may retry after sleeping)
Feb 9 09:14:19.731302 kubelet[1932]: I0209 09:14:19.731294 1932 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 09:14:19.731407 kubelet[1932]: I0209 09:14:19.731399 1932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 09:14:19.731437 kubelet[1932]: I0209 09:14:19.731428 1932 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 09:14:19.731473 kubelet[1932]: I0209 09:14:19.731465 1932 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 09:14:19.731594 kubelet[1932]: E0209 09:14:19.731580 1932 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://139.178.90.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-afd9ebe59c?timeout=10s": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.731642 kubelet[1932]: W0209 09:14:19.731617 1932 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.90.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.731668 kubelet[1932]: E0209 09:14:19.731650 1932 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.90.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.751848 kubelet[1932]: I0209 09:14:19.751808 1932 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 09:14:19.763138 kubelet[1932]: I0209 09:14:19.763093 1932 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 09:14:19.763138 kubelet[1932]: I0209 09:14:19.763104 1932 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 09:14:19.763138 kubelet[1932]: I0209 09:14:19.763115 1932 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 09:14:19.763233 kubelet[1932]: E0209 09:14:19.763146 1932 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 09:14:19.763690 kubelet[1932]: W0209 09:14:19.763637 1932 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.90.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.763690 kubelet[1932]: E0209 09:14:19.763667 1932 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.90.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.863918 kubelet[1932]: E0209 09:14:19.863795 1932 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 9 09:14:19.924036 kubelet[1932]: I0209 09:14:19.923855 1932 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:19.924533 kubelet[1932]: E0209 09:14:19.924462 1932 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.101:6443/api/v1/nodes\": dial tcp 139.178.90.101:6443: connect: connection refused" node="ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:19.924945 kubelet[1932]: I0209 09:14:19.924892 1932 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 09:14:19.924945 kubelet[1932]: I0209 09:14:19.924936 1932 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 09:14:19.925342 kubelet[1932]: I0209 09:14:19.924984 1932 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 09:14:19.927378 kubelet[1932]: I0209 09:14:19.927333 1932 policy_none.go:49] "None policy: Start"
Feb 9 09:14:19.928641 kubelet[1932]: I0209 09:14:19.928555 1932 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 09:14:19.928855 kubelet[1932]: I0209 09:14:19.928656 1932 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 09:14:19.932147 kubelet[1932]: E0209 09:14:19.932080 1932 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://139.178.90.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-afd9ebe59c?timeout=10s": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:19.936034 kubelet[1932]: I0209 09:14:19.936026 1932 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 09:14:19.936192 kubelet[1932]: I0209 09:14:19.936185 1932 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 09:14:19.936360 kubelet[1932]: E0209 09:14:19.936352 1932 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-afd9ebe59c\" not found"
Feb 9 09:14:20.064675 kubelet[1932]: I0209 09:14:20.064545 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:14:20.068957 kubelet[1932]: I0209 09:14:20.068881 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:14:20.072513 kubelet[1932]: I0209 09:14:20.072429 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:14:20.073134 kubelet[1932]: I0209 09:14:20.073064 1932 status_manager.go:698] "Failed to get status for pod" podUID=8a906a4bc7599f7a5ae3b9f770f1075d pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c" err="Get \"https://139.178.90.101:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-afd9ebe59c\": dial tcp 139.178.90.101:6443: connect: connection refused"
Feb 9 09:14:20.076605 kubelet[1932]: I0209 09:14:20.076524 1932 status_manager.go:698] "Failed to get status for pod" podUID=3087548171ad5cd7255fa3514c645016 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c" err="Get \"https://139.178.90.101:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\": dial tcp 139.178.90.101:6443: connect: connection refused"
Feb 9 09:14:20.077722 kubelet[1932]: I0209 09:14:20.077714 1932 status_manager.go:698] "Failed to get status for pod" podUID=95099a1e583751ff8f1f5ebd88b4ab66 pod="kube-system/kube-scheduler-ci-3510.3.2-a-afd9ebe59c" err="Get \"https://139.178.90.101:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-afd9ebe59c\": dial tcp 139.178.90.101:6443: connect: connection refused"
Feb 9 09:14:20.127055 kubelet[1932]: I0209 09:14:20.126982 1932 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.127355 kubelet[1932]: E0209 09:14:20.127307 1932 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.101:6443/api/v1/nodes\": dial tcp 139.178.90.101:6443: connect: connection refused" node="ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.236140 kubelet[1932]: I0209 09:14:20.235954 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a906a4bc7599f7a5ae3b9f770f1075d-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-afd9ebe59c\" (UID: \"8a906a4bc7599f7a5ae3b9f770f1075d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.236998 kubelet[1932]: I0209 09:14:20.236167 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.236998 kubelet[1932]: I0209 09:14:20.236298 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.236998 kubelet[1932]: I0209 09:14:20.236395 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.236998 kubelet[1932]: I0209 09:14:20.236457 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a906a4bc7599f7a5ae3b9f770f1075d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-afd9ebe59c\" (UID: \"8a906a4bc7599f7a5ae3b9f770f1075d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.236998 kubelet[1932]: I0209 09:14:20.236521 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a906a4bc7599f7a5ae3b9f770f1075d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-afd9ebe59c\" (UID: \"8a906a4bc7599f7a5ae3b9f770f1075d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.237473 kubelet[1932]: I0209 09:14:20.236598 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.237473 kubelet[1932]: I0209 09:14:20.236790 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.237473 kubelet[1932]: I0209 09:14:20.236917 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95099a1e583751ff8f1f5ebd88b4ab66-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-afd9ebe59c\" (UID: \"95099a1e583751ff8f1f5ebd88b4ab66\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.333470 kubelet[1932]: E0209 09:14:20.333348 1932 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://139.178.90.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-afd9ebe59c?timeout=10s": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:20.379209 env[1251]: time="2024-02-09T09:14:20.379087117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-afd9ebe59c,Uid:8a906a4bc7599f7a5ae3b9f770f1075d,Namespace:kube-system,Attempt:0,}"
Feb 9 09:14:20.380069 env[1251]: time="2024-02-09T09:14:20.379540592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-afd9ebe59c,Uid:3087548171ad5cd7255fa3514c645016,Namespace:kube-system,Attempt:0,}"
Feb 9 09:14:20.380438 env[1251]: time="2024-02-09T09:14:20.380317393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-afd9ebe59c,Uid:95099a1e583751ff8f1f5ebd88b4ab66,Namespace:kube-system,Attempt:0,}"
Feb 9 09:14:20.531702 kubelet[1932]: I0209 09:14:20.531515 1932 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.532232 kubelet[1932]: E0209 09:14:20.532165 1932 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.101:6443/api/v1/nodes\": dial tcp 139.178.90.101:6443: connect: connection refused" node="ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:20.719759 kubelet[1932]: W0209 09:14:20.719512 1932 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.90.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:20.719759 kubelet[1932]: E0209 09:14:20.719675 1932 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.90.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:20.725032 kubelet[1932]: W0209 09:14:20.724904 1932 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.90.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:20.725032 kubelet[1932]: E0209 09:14:20.725006 1932 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.90.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused
Feb 9 09:14:20.901758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819344892.mount: Deactivated successfully.
Feb 9 09:14:20.902929 env[1251]: time="2024-02-09T09:14:20.902910870Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.903787 env[1251]: time="2024-02-09T09:14:20.903747874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.904685 env[1251]: time="2024-02-09T09:14:20.904665941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.905952 env[1251]: time="2024-02-09T09:14:20.905937524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.906679 env[1251]: time="2024-02-09T09:14:20.906666638Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.907006 env[1251]: time="2024-02-09T09:14:20.906995070Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.908725 env[1251]: time="2024-02-09T09:14:20.908676473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.910530 env[1251]: time="2024-02-09T09:14:20.910517042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.911348 env[1251]: time="2024-02-09T09:14:20.911335195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.911786 env[1251]: time="2024-02-09T09:14:20.911771826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.912228 env[1251]: time="2024-02-09T09:14:20.912199842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.912606 env[1251]: time="2024-02-09T09:14:20.912572788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:14:20.917517 env[1251]: time="2024-02-09T09:14:20.917485158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:14:20.917517 env[1251]: time="2024-02-09T09:14:20.917507389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:14:20.917517 env[1251]: time="2024-02-09T09:14:20.917514274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:14:20.917634 env[1251]: time="2024-02-09T09:14:20.917600121Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9f29cc4b76dce0ba42e21aa3a9892cce68009cef21411b5d6382f4796156afe pid=2018 runtime=io.containerd.runc.v2
Feb 9 09:14:20.919191 env[1251]: time="2024-02-09T09:14:20.919154804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:14:20.919191 env[1251]: time="2024-02-09T09:14:20.919177347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:14:20.919191 env[1251]: time="2024-02-09T09:14:20.919184295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:14:20.919313 env[1251]: time="2024-02-09T09:14:20.919252567Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f8ae377e44d34ac8aaeb21a8d76e6228507d1e31ef51d64dbd5cfc8751f7b3b pid=2042 runtime=io.containerd.runc.v2
Feb 9 09:14:20.919480 env[1251]: time="2024-02-09T09:14:20.919452974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:14:20.919480 env[1251]: time="2024-02-09T09:14:20.919470338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:14:20.919480 env[1251]: time="2024-02-09T09:14:20.919477237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:14:20.919586 env[1251]: time="2024-02-09T09:14:20.919541054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8a61770bdf1bbbaf4036d65eab1d1f0a2568e857548e871e23189ee1504e5cc pid=2044 runtime=io.containerd.runc.v2
Feb 9 09:14:20.959885 env[1251]: time="2024-02-09T09:14:20.959859744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-afd9ebe59c,Uid:3087548171ad5cd7255fa3514c645016,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8a61770bdf1bbbaf4036d65eab1d1f0a2568e857548e871e23189ee1504e5cc\""
Feb 9 09:14:20.959983 env[1251]: time="2024-02-09T09:14:20.959927141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-afd9ebe59c,Uid:8a906a4bc7599f7a5ae3b9f770f1075d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f8ae377e44d34ac8aaeb21a8d76e6228507d1e31ef51d64dbd5cfc8751f7b3b\""
Feb 9 09:14:20.961508 env[1251]: time="2024-02-09T09:14:20.961494107Z" level=info msg="CreateContainer within sandbox \"9f8ae377e44d34ac8aaeb21a8d76e6228507d1e31ef51d64dbd5cfc8751f7b3b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 9 09:14:20.961552 env[1251]: time="2024-02-09T09:14:20.961507182Z" level=info msg="CreateContainer within sandbox \"b8a61770bdf1bbbaf4036d65eab1d1f0a2568e857548e871e23189ee1504e5cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 9 09:14:20.967745 env[1251]: time="2024-02-09T09:14:20.967699032Z" level=info msg="CreateContainer within sandbox \"b8a61770bdf1bbbaf4036d65eab1d1f0a2568e857548e871e23189ee1504e5cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b1ff6e48dc7816eeff84966bd9f76bea3b31fff7ddc1fde917585011142e89c\""
Feb 9 09:14:20.967963 env[1251]: time="2024-02-09T09:14:20.967922040Z" level=info msg="StartContainer for \"3b1ff6e48dc7816eeff84966bd9f76bea3b31fff7ddc1fde917585011142e89c\""
Feb 9 09:14:20.968166 env[1251]: time="2024-02-09T09:14:20.968122334Z" level=info msg="CreateContainer within sandbox \"9f8ae377e44d34ac8aaeb21a8d76e6228507d1e31ef51d64dbd5cfc8751f7b3b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bc4513b9555f777a54cf70eba6656950d265683fc93a52fe4a43a3ca831100ef\""
Feb 9 09:14:20.968307 env[1251]: time="2024-02-09T09:14:20.968272631Z" level=info msg="StartContainer for \"bc4513b9555f777a54cf70eba6656950d265683fc93a52fe4a43a3ca831100ef\""
Feb 9 09:14:20.970640 env[1251]: time="2024-02-09T09:14:20.970613990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-afd9ebe59c,Uid:95099a1e583751ff8f1f5ebd88b4ab66,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9f29cc4b76dce0ba42e21aa3a9892cce68009cef21411b5d6382f4796156afe\""
Feb 9 09:14:20.971689 env[1251]: time="2024-02-09T09:14:20.971671134Z" level=info msg="CreateContainer within sandbox \"b9f29cc4b76dce0ba42e21aa3a9892cce68009cef21411b5d6382f4796156afe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 9 09:14:20.975706 env[1251]: time="2024-02-09T09:14:20.975657913Z" level=info msg="CreateContainer within sandbox \"b9f29cc4b76dce0ba42e21aa3a9892cce68009cef21411b5d6382f4796156afe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"34a2f8564bc1364f0a9b70200b4865a268eaacba8737e5e83fbf17327cad419b\""
Feb 9 09:14:20.975899 env[1251]: time="2024-02-09T09:14:20.975858791Z" level=info msg="StartContainer for \"34a2f8564bc1364f0a9b70200b4865a268eaacba8737e5e83fbf17327cad419b\""
Feb 9 09:14:21.014342 env[1251]: time="2024-02-09T09:14:21.014315781Z" level=info msg="StartContainer for \"bc4513b9555f777a54cf70eba6656950d265683fc93a52fe4a43a3ca831100ef\" returns successfully"
Feb 9 09:14:21.014342 env[1251]: time="2024-02-09T09:14:21.014329550Z" level=info msg="StartContainer for \"3b1ff6e48dc7816eeff84966bd9f76bea3b31fff7ddc1fde917585011142e89c\" returns successfully"
Feb 9 09:14:21.019638 env[1251]: time="2024-02-09T09:14:21.019615041Z" level=info msg="StartContainer for \"34a2f8564bc1364f0a9b70200b4865a268eaacba8737e5e83fbf17327cad419b\" returns successfully"
Feb 9 09:14:21.334520 kubelet[1932]: I0209 09:14:21.334473 1932 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:22.058506 kubelet[1932]: I0209 09:14:22.058491 1932 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:22.717089 kubelet[1932]: I0209 09:14:22.717018 1932 apiserver.go:52] "Watching apiserver"
Feb 9 09:14:22.732242 kubelet[1932]: I0209 09:14:22.732190 1932 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 09:14:22.753614 kubelet[1932]: I0209 09:14:22.753537 1932 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 09:14:22.925948 kubelet[1932]: E0209 09:14:22.925842 1932 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-afd9ebe59c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-afd9ebe59c"
Feb 9 09:14:23.131768 kubelet[1932]: E0209 09:14:23.131478 1932 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9063625d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 719763417, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 719763417, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 09:14:23.188187 kubelet[1932]: E0209 09:14:23.187992 1932 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9063a32b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 720028852, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 720028852, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 09:14:23.244334 kubelet[1932]: E0209 09:14:23.244121 1932 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9125ded05", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-afd9ebe59c status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923696901, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923696901, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 09:14:23.302471 kubelet[1932]: E0209 09:14:23.302308 1932 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9125e4228", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-afd9ebe59c status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923718696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923718696, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 09:14:23.358870 kubelet[1932]: E0209 09:14:23.358705 1932 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9125e608b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-afd9ebe59c status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923726475, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923726475, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 09:14:23.420616 kubelet[1932]: E0209 09:14:23.420325 1932 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9125ded05", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-afd9ebe59c status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923696901, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923754885, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 09:14:23.482009 kubelet[1932]: E0209 09:14:23.481865 1932 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9125e4228", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-afd9ebe59c status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923718696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923764183, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:14:23.541894 kubelet[1932]: E0209 09:14:23.541658 1932 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9125e608b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-afd9ebe59c status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923726475, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 923776267, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 09:14:23.597831 kubelet[1932]: E0209 09:14:23.597658 1932 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-afd9ebe59c.17b226f9131d9df3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-afd9ebe59c", UID:"ci-3510.3.2-a-afd9ebe59c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-afd9ebe59c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 936259571, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 14, 19, 936259571, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 09:14:23.781497 kubelet[1932]: E0209 09:14:23.781350 1932 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-afd9ebe59c\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.323633 systemd[1]: Reloading. 
Feb 9 09:14:25.382690 /usr/lib/systemd/system-generators/torcx-generator[2303]: time="2024-02-09T09:14:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:14:25.382707 /usr/lib/systemd/system-generators/torcx-generator[2303]: time="2024-02-09T09:14:25Z" level=info msg="torcx already run" Feb 9 09:14:25.438806 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:14:25.438817 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:14:25.452771 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:14:25.507961 kubelet[1932]: I0209 09:14:25.507943 1932 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:14:25.507960 systemd[1]: Stopping kubelet.service... Feb 9 09:14:25.527833 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:14:25.527984 systemd[1]: Stopped kubelet.service. Feb 9 09:14:25.528908 systemd[1]: Started kubelet.service. Feb 9 09:14:25.551784 kubelet[2367]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:14:25.551784 kubelet[2367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:14:25.552005 kubelet[2367]: I0209 09:14:25.551783 2367 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:14:25.552542 kubelet[2367]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:14:25.552542 kubelet[2367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:14:25.554105 kubelet[2367]: I0209 09:14:25.554095 2367 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:14:25.554105 kubelet[2367]: I0209 09:14:25.554106 2367 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:14:25.554233 kubelet[2367]: I0209 09:14:25.554225 2367 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:14:25.554901 kubelet[2367]: I0209 09:14:25.554892 2367 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:14:25.555352 kubelet[2367]: I0209 09:14:25.555344 2367 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:14:25.572278 kubelet[2367]: I0209 09:14:25.572236 2367 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:14:25.572474 kubelet[2367]: I0209 09:14:25.572444 2367 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:14:25.572499 kubelet[2367]: I0209 09:14:25.572483 2367 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:14:25.572499 kubelet[2367]: I0209 09:14:25.572496 2367 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:14:25.572573 kubelet[2367]: I0209 09:14:25.572503 2367 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:14:25.572573 kubelet[2367]: I0209 09:14:25.572523 2367 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:14:25.573991 kubelet[2367]: I0209 09:14:25.573937 2367 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:14:25.573991 kubelet[2367]: I0209 09:14:25.573949 2367 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:14:25.573991 kubelet[2367]: I0209 09:14:25.573960 2367 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:14:25.573991 kubelet[2367]: I0209 09:14:25.573968 2367 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:14:25.574266 kubelet[2367]: I0209 09:14:25.574256 2367 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:14:25.574490 kubelet[2367]: I0209 09:14:25.574483 2367 server.go:1186] "Started kubelet" Feb 9 09:14:25.574607 kubelet[2367]: I0209 09:14:25.574597 2367 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:14:25.574745 kubelet[2367]: E0209 09:14:25.574737 2367 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:14:25.574793 kubelet[2367]: E0209 09:14:25.574751 2367 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:14:25.575085 kubelet[2367]: I0209 09:14:25.575077 2367 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:14:25.575279 kubelet[2367]: I0209 09:14:25.575272 2367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:14:25.575314 kubelet[2367]: I0209 09:14:25.575306 2367 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:14:25.575524 kubelet[2367]: I0209 09:14:25.575513 2367 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:14:25.586396 kubelet[2367]: I0209 09:14:25.586382 2367 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:14:25.592440 kubelet[2367]: I0209 09:14:25.592428 2367 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:14:25.592440 kubelet[2367]: I0209 09:14:25.592439 2367 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:14:25.592547 kubelet[2367]: I0209 09:14:25.592449 2367 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:14:25.592547 kubelet[2367]: E0209 09:14:25.592474 2367 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:14:25.605124 kubelet[2367]: I0209 09:14:25.605109 2367 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:14:25.605124 kubelet[2367]: I0209 09:14:25.605121 2367 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:14:25.605223 kubelet[2367]: I0209 09:14:25.605132 2367 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:14:25.605266 kubelet[2367]: I0209 09:14:25.605240 2367 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:14:25.605266 kubelet[2367]: I0209 09:14:25.605251 2367 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:14:25.605266 kubelet[2367]: I0209 09:14:25.605257 2367 policy_none.go:49] "None policy: Start" Feb 9 09:14:25.605514 kubelet[2367]: I0209 09:14:25.605506 2367 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:14:25.605554 kubelet[2367]: I0209 09:14:25.605516 2367 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:14:25.605613 kubelet[2367]: I0209 09:14:25.605606 2367 state_mem.go:75] "Updated machine memory state" Feb 9 09:14:25.606895 kubelet[2367]: I0209 09:14:25.606885 2367 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:14:25.607032 kubelet[2367]: I0209 09:14:25.607025 2367 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:14:25.680128 kubelet[2367]: I0209 09:14:25.680089 2367 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.690350 kubelet[2367]: I0209 09:14:25.690263 2367 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.690618 kubelet[2367]: I0209 09:14:25.690406 2367 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.693089 kubelet[2367]: I0209 09:14:25.693039 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:14:25.693269 kubelet[2367]: I0209 09:14:25.693203 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:14:25.693455 kubelet[2367]: I0209 09:14:25.693409 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 
09:14:25.695338 sudo[2430]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 09:14:25.695910 sudo[2430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 09:14:25.702042 kubelet[2367]: E0209 09:14:25.701952 2367 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.776960 kubelet[2367]: I0209 09:14:25.776912 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.776960 kubelet[2367]: I0209 09:14:25.776936 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95099a1e583751ff8f1f5ebd88b4ab66-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-afd9ebe59c\" (UID: \"95099a1e583751ff8f1f5ebd88b4ab66\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.776960 kubelet[2367]: I0209 09:14:25.776949 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a906a4bc7599f7a5ae3b9f770f1075d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-afd9ebe59c\" (UID: \"8a906a4bc7599f7a5ae3b9f770f1075d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.777074 kubelet[2367]: I0209 09:14:25.776977 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.777074 kubelet[2367]: I0209 09:14:25.776997 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.777074 kubelet[2367]: I0209 09:14:25.777048 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.777074 kubelet[2367]: I0209 09:14:25.777069 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3087548171ad5cd7255fa3514c645016-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" (UID: \"3087548171ad5cd7255fa3514c645016\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.777152 
kubelet[2367]: I0209 09:14:25.777081 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a906a4bc7599f7a5ae3b9f770f1075d-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-afd9ebe59c\" (UID: \"8a906a4bc7599f7a5ae3b9f770f1075d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.777152 kubelet[2367]: I0209 09:14:25.777094 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a906a4bc7599f7a5ae3b9f770f1075d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-afd9ebe59c\" (UID: \"8a906a4bc7599f7a5ae3b9f770f1075d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:25.981046 kubelet[2367]: E0209 09:14:25.980978 2367 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-afd9ebe59c\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:26.153168 sudo[2430]: pam_unix(sudo:session): session closed for user root Feb 9 09:14:26.575258 kubelet[2367]: I0209 09:14:26.575180 2367 apiserver.go:52] "Watching apiserver" Feb 9 09:14:26.875980 kubelet[2367]: I0209 09:14:26.875782 2367 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:14:26.886027 kubelet[2367]: I0209 09:14:26.885947 2367 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:14:27.178810 kubelet[2367]: E0209 09:14:27.178786 2367 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-afd9ebe59c\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:27.291515 sudo[1384]: pam_unix(sudo:session): session closed for user root Feb 9 09:14:27.293268 sshd[1379]: pam_unix(sshd:session): session closed for user core Feb 9 09:14:27.296740 systemd[1]: sshd@4-139.178.90.101:22-147.75.109.163:42916.service: Deactivated successfully. Feb 9 09:14:27.298468 systemd-logind[1237]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:14:27.298551 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:14:27.299858 systemd-logind[1237]: Removed session 7. Feb 9 09:14:27.382593 kubelet[2367]: E0209 09:14:27.382481 2367 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-afd9ebe59c\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:27.583045 kubelet[2367]: E0209 09:14:27.582882 2367 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-afd9ebe59c\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-afd9ebe59c" Feb 9 09:14:27.798878 kubelet[2367]: I0209 09:14:27.798824 2367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-afd9ebe59c" podStartSLOduration=4.798727643 pod.CreationTimestamp="2024-02-09 09:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:14:27.798123593 +0000 UTC m=+2.267415248" watchObservedRunningTime="2024-02-09 09:14:27.798727643 +0000 UTC m=+2.268019306" Feb 9 09:14:27.984835 update_engine[1239]: I0209 09:14:27.984714 1239 update_attempter.cc:509] Updating boot flags... 
Feb 9 09:14:28.983055 kubelet[2367]: I0209 09:14:28.983004 2367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-afd9ebe59c" podStartSLOduration=5.982984599 pod.CreationTimestamp="2024-02-09 09:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:14:28.586228489 +0000 UTC m=+3.055520149" watchObservedRunningTime="2024-02-09 09:14:28.982984599 +0000 UTC m=+3.452276180" Feb 9 09:14:32.375617 kubelet[2367]: I0209 09:14:32.375599 2367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-afd9ebe59c" podStartSLOduration=7.375555542 pod.CreationTimestamp="2024-02-09 09:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:14:28.982923733 +0000 UTC m=+3.452215326" watchObservedRunningTime="2024-02-09 09:14:32.375555542 +0000 UTC m=+6.844847129" Feb 9 09:14:39.457241 kubelet[2367]: I0209 09:14:39.457201 2367 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:14:39.457991 kubelet[2367]: I0209 09:14:39.457917 2367 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:14:39.458068 env[1251]: time="2024-02-09T09:14:39.457654525Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:14:40.236421 kubelet[2367]: I0209 09:14:40.236398 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:14:40.239054 kubelet[2367]: I0209 09:14:40.239035 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:14:40.266572 kubelet[2367]: I0209 09:14:40.266550 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-config-path\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266572 kubelet[2367]: I0209 09:14:40.266578 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-host-proc-sys-kernel\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266695 kubelet[2367]: I0209 09:14:40.266591 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m999\" (UniqueName: \"kubernetes.io/projected/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-kube-api-access-7m999\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266695 kubelet[2367]: I0209 09:14:40.266617 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-etc-cni-netd\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266695 kubelet[2367]: I0209 09:14:40.266634 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cni-path\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266695 kubelet[2367]: I0209 09:14:40.266657 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-hubble-tls\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266695 kubelet[2367]: I0209 09:14:40.266676 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-lib-modules\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266695 kubelet[2367]: I0209 09:14:40.266691 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7zdk\" (UniqueName: \"kubernetes.io/projected/f6e98b26-49cc-48c1-a43b-51f922717937-kube-api-access-j7zdk\") pod \"kube-proxy-c9gfv\" (UID: \"f6e98b26-49cc-48c1-a43b-51f922717937\") " pod="kube-system/kube-proxy-c9gfv" Feb 9 09:14:40.266813 kubelet[2367]: I0209 09:14:40.266704 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-run\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266813 kubelet[2367]: I0209 09:14:40.266715 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6e98b26-49cc-48c1-a43b-51f922717937-kube-proxy\") pod \"kube-proxy-c9gfv\" (UID: \"f6e98b26-49cc-48c1-a43b-51f922717937\") " pod="kube-system/kube-proxy-c9gfv" Feb 9 09:14:40.266813 kubelet[2367]: I0209 09:14:40.266727 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6e98b26-49cc-48c1-a43b-51f922717937-lib-modules\") pod \"kube-proxy-c9gfv\" (UID: \"f6e98b26-49cc-48c1-a43b-51f922717937\") " pod="kube-system/kube-proxy-c9gfv" Feb 9 09:14:40.266813 kubelet[2367]: I0209 09:14:40.266739 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-host-proc-sys-net\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266813 kubelet[2367]: I0209 09:14:40.266754 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6e98b26-49cc-48c1-a43b-51f922717937-xtables-lock\") pod \"kube-proxy-c9gfv\" (UID: \"f6e98b26-49cc-48c1-a43b-51f922717937\") " pod="kube-system/kube-proxy-c9gfv" Feb 9 09:14:40.266813 kubelet[2367]: I0209 09:14:40.266765 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-xtables-lock\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 
9 09:14:40.266931 kubelet[2367]: I0209 09:14:40.266777 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-bpf-maps\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266931 kubelet[2367]: I0209 09:14:40.266788 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-hostproc\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266931 kubelet[2367]: I0209 09:14:40.266798 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-cgroup\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.266931 kubelet[2367]: I0209 09:14:40.266809 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-clustermesh-secrets\") pod \"cilium-q8zz2\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") " pod="kube-system/cilium-q8zz2" Feb 9 09:14:40.320651 kubelet[2367]: I0209 09:14:40.320613 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:14:40.368039 kubelet[2367]: I0209 09:14:40.367929 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9lb\" (UniqueName: \"kubernetes.io/projected/12aa12f1-94c7-48f3-a090-87adf7e0a891-kube-api-access-4g9lb\") pod \"cilium-operator-f59cbd8c6-sfv7n\" (UID: \"12aa12f1-94c7-48f3-a090-87adf7e0a891\") " pod="kube-system/cilium-operator-f59cbd8c6-sfv7n" Feb 9 09:14:40.368345 kubelet[2367]: I0209 09:14:40.368287 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12aa12f1-94c7-48f3-a090-87adf7e0a891-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-sfv7n\" (UID: \"12aa12f1-94c7-48f3-a090-87adf7e0a891\") " pod="kube-system/cilium-operator-f59cbd8c6-sfv7n" Feb 9 09:14:40.839856 env[1251]: time="2024-02-09T09:14:40.839730555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9gfv,Uid:f6e98b26-49cc-48c1-a43b-51f922717937,Namespace:kube-system,Attempt:0,}" Feb 9 09:14:40.860924 env[1251]: time="2024-02-09T09:14:40.860856419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:14:40.860924 env[1251]: time="2024-02-09T09:14:40.860880810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:14:40.860924 env[1251]: time="2024-02-09T09:14:40.860905819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:14:40.861031 env[1251]: time="2024-02-09T09:14:40.860971444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04a3c6fcc597ac58569e30be7e4dcfef44950c83c2566cc5e41eaab5dfb14787 pid=2564 runtime=io.containerd.runc.v2 Feb 9 09:14:40.902896 env[1251]: time="2024-02-09T09:14:40.902863173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9gfv,Uid:f6e98b26-49cc-48c1-a43b-51f922717937,Namespace:kube-system,Attempt:0,} returns sandbox id \"04a3c6fcc597ac58569e30be7e4dcfef44950c83c2566cc5e41eaab5dfb14787\"" Feb 9 09:14:40.904322 env[1251]: time="2024-02-09T09:14:40.904273456Z" level=info msg="CreateContainer within sandbox \"04a3c6fcc597ac58569e30be7e4dcfef44950c83c2566cc5e41eaab5dfb14787\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:14:40.910060 env[1251]: time="2024-02-09T09:14:40.910036264Z" level=info msg="CreateContainer within sandbox \"04a3c6fcc597ac58569e30be7e4dcfef44950c83c2566cc5e41eaab5dfb14787\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab1b6ca5c37e6375ada9d218eee28be284159dfd93586e1de390dbe638e7f2f8\"" Feb 9 09:14:40.910387 env[1251]: time="2024-02-09T09:14:40.910365566Z" level=info msg="StartContainer for \"ab1b6ca5c37e6375ada9d218eee28be284159dfd93586e1de390dbe638e7f2f8\"" Feb 9 09:14:41.002848 env[1251]: time="2024-02-09T09:14:41.002757444Z" level=info msg="StartContainer for \"ab1b6ca5c37e6375ada9d218eee28be284159dfd93586e1de390dbe638e7f2f8\" returns successfully" Feb 9 09:14:41.143106 env[1251]: time="2024-02-09T09:14:41.142924861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8zz2,Uid:ed5b5afd-8a00-42ba-ba02-d61dce4e997c,Namespace:kube-system,Attempt:0,}" Feb 9 09:14:41.165530 env[1251]: time="2024-02-09T09:14:41.165329853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:14:41.165530 env[1251]: time="2024-02-09T09:14:41.165437381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:14:41.165530 env[1251]: time="2024-02-09T09:14:41.165476553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:14:41.166015 env[1251]: time="2024-02-09T09:14:41.165886607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71 pid=2679 runtime=io.containerd.runc.v2 Feb 9 09:14:41.246445 env[1251]: time="2024-02-09T09:14:41.246407641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8zz2,Uid:ed5b5afd-8a00-42ba-ba02-d61dce4e997c,Namespace:kube-system,Attempt:0,} returns sandbox id \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\"" Feb 9 09:14:41.247608 env[1251]: time="2024-02-09T09:14:41.247583227Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:14:41.525286 env[1251]: time="2024-02-09T09:14:41.525138074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-sfv7n,Uid:12aa12f1-94c7-48f3-a090-87adf7e0a891,Namespace:kube-system,Attempt:0,}" Feb 9 09:14:41.552924 env[1251]: time="2024-02-09T09:14:41.552792101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:14:41.552924 env[1251]: time="2024-02-09T09:14:41.552887585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:14:41.553290 env[1251]: time="2024-02-09T09:14:41.552926756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:14:41.553459 env[1251]: time="2024-02-09T09:14:41.553366144Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5 pid=2799 runtime=io.containerd.runc.v2 Feb 9 09:14:41.652363 env[1251]: time="2024-02-09T09:14:41.652333419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-sfv7n,Uid:12aa12f1-94c7-48f3-a090-87adf7e0a891,Namespace:kube-system,Attempt:0,} returns sandbox id \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\"" Feb 9 09:14:42.244153 kubelet[2367]: I0209 09:14:42.244083 2367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c9gfv" podStartSLOduration=2.24401777 pod.CreationTimestamp="2024-02-09 09:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:14:42.24365072 +0000 UTC m=+16.712942334" watchObservedRunningTime="2024-02-09 09:14:42.24401777 +0000 UTC m=+16.713309402" Feb 9 09:14:44.576710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041940243.mount: Deactivated successfully. 
Feb 9 09:14:46.281551 env[1251]: time="2024-02-09T09:14:46.281507346Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:14:46.282161 env[1251]: time="2024-02-09T09:14:46.282121519Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:14:46.282780 env[1251]: time="2024-02-09T09:14:46.282738118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:14:46.283080 env[1251]: time="2024-02-09T09:14:46.283037739Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 09:14:46.283460 env[1251]: time="2024-02-09T09:14:46.283447218Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:14:46.284357 env[1251]: time="2024-02-09T09:14:46.284321979Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:14:46.288324 env[1251]: time="2024-02-09T09:14:46.288308884Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\"" Feb 9 09:14:46.288535 env[1251]: time="2024-02-09T09:14:46.288523257Z" level=info msg="StartContainer for \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\"" Feb 9 09:14:46.332612 env[1251]: time="2024-02-09T09:14:46.332574476Z" level=info msg="StartContainer for \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\" returns successfully" Feb 9 09:14:47.292218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee-rootfs.mount: Deactivated successfully. Feb 9 09:14:48.475169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254545655.mount: Deactivated successfully. 
Feb 9 09:14:48.489570 env[1251]: time="2024-02-09T09:14:48.489533351Z" level=info msg="shim disconnected" id=12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee Feb 9 09:14:48.489807 env[1251]: time="2024-02-09T09:14:48.489575349Z" level=warning msg="cleaning up after shim disconnected" id=12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee namespace=k8s.io Feb 9 09:14:48.489807 env[1251]: time="2024-02-09T09:14:48.489585210Z" level=info msg="cleaning up dead shim" Feb 9 09:14:48.506523 env[1251]: time="2024-02-09T09:14:48.506497187Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:14:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2881 runtime=io.containerd.runc.v2\n" Feb 9 09:14:48.643299 env[1251]: time="2024-02-09T09:14:48.643275517Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:14:48.647488 env[1251]: time="2024-02-09T09:14:48.647466651Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\"" Feb 9 09:14:48.647758 env[1251]: time="2024-02-09T09:14:48.647736286Z" level=info msg="StartContainer for \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\"" Feb 9 09:14:48.648689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604048163.mount: Deactivated successfully. Feb 9 09:14:48.694477 env[1251]: time="2024-02-09T09:14:48.694420031Z" level=info msg="StartContainer for \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\" returns successfully" Feb 9 09:14:48.699733 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:14:48.699883 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:14:48.699964 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:14:48.700981 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:14:48.704762 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 09:14:48.838594 env[1251]: time="2024-02-09T09:14:48.838519806Z" level=info msg="shim disconnected" id=65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42 Feb 9 09:14:48.838594 env[1251]: time="2024-02-09T09:14:48.838547177Z" level=warning msg="cleaning up after shim disconnected" id=65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42 namespace=k8s.io Feb 9 09:14:48.838594 env[1251]: time="2024-02-09T09:14:48.838552973Z" level=info msg="cleaning up dead shim" Feb 9 09:14:48.854380 env[1251]: time="2024-02-09T09:14:48.854363063Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:14:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2946 runtime=io.containerd.runc.v2\n" Feb 9 09:14:48.918943 env[1251]: time="2024-02-09T09:14:48.918919939Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:14:48.919509 env[1251]: time="2024-02-09T09:14:48.919496168Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:14:48.920142 env[1251]: time="2024-02-09T09:14:48.920117477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:14:48.920753 env[1251]: time="2024-02-09T09:14:48.920736406Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 09:14:48.921695 env[1251]: time="2024-02-09T09:14:48.921678866Z" level=info msg="CreateContainer within sandbox \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:14:48.926167 env[1251]: time="2024-02-09T09:14:48.926125644Z" level=info msg="CreateContainer within sandbox \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b\"" Feb 9 09:14:48.926387 env[1251]: time="2024-02-09T09:14:48.926352823Z" level=info msg="StartContainer for \"c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b\"" Feb 9 09:14:48.947506 env[1251]: time="2024-02-09T09:14:48.947454500Z" level=info msg="StartContainer for \"c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b\" returns successfully" Feb 9 09:14:49.469283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42-rootfs.mount: Deactivated successfully. 
Feb 9 09:14:49.656306 env[1251]: time="2024-02-09T09:14:49.656201941Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:14:49.673859 env[1251]: time="2024-02-09T09:14:49.673725395Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\"" Feb 9 09:14:49.674852 env[1251]: time="2024-02-09T09:14:49.674743196Z" level=info msg="StartContainer for \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\"" Feb 9 09:14:49.700300 kubelet[2367]: I0209 09:14:49.700271 2367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-sfv7n" podStartSLOduration=-9.223372027154547e+09 pod.CreationTimestamp="2024-02-09 09:14:40 +0000 UTC" firstStartedPulling="2024-02-09 09:14:41.652867555 +0000 UTC m=+16.122159139" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:14:49.665778526 +0000 UTC m=+24.135070285" watchObservedRunningTime="2024-02-09 09:14:49.700230009 +0000 UTC m=+24.169521605" Feb 9 09:14:49.751946 env[1251]: time="2024-02-09T09:14:49.751815632Z" level=info msg="StartContainer for \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\" returns successfully" Feb 9 09:14:49.846220 env[1251]: time="2024-02-09T09:14:49.846086009Z" level=info msg="shim disconnected" id=874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68 Feb 9 09:14:49.846220 env[1251]: time="2024-02-09T09:14:49.846187465Z" level=warning msg="cleaning up after shim disconnected" id=874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68 namespace=k8s.io Feb 9 09:14:49.846220 env[1251]: time="2024-02-09T09:14:49.846217133Z" level=info msg="cleaning up dead shim" Feb 9 09:14:49.874977 env[1251]: time="2024-02-09T09:14:49.874876310Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:14:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3050 runtime=io.containerd.runc.v2\n" Feb 9 09:14:50.472651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68-rootfs.mount: Deactivated successfully. 
Feb 9 09:14:50.663343 env[1251]: time="2024-02-09T09:14:50.663238939Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:14:50.678104 env[1251]: time="2024-02-09T09:14:50.678013131Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\"" Feb 9 09:14:50.679002 env[1251]: time="2024-02-09T09:14:50.678917734Z" level=info msg="StartContainer for \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\"" Feb 9 09:14:50.786303 env[1251]: time="2024-02-09T09:14:50.786145434Z" level=info msg="StartContainer for \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\" returns successfully" Feb 9 09:14:50.822649 env[1251]: time="2024-02-09T09:14:50.822522436Z" level=info msg="shim disconnected" id=19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d Feb 9 09:14:50.822649 env[1251]: time="2024-02-09T09:14:50.822624736Z" level=warning msg="cleaning up after shim disconnected" id=19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d namespace=k8s.io Feb 9 09:14:50.822649 env[1251]: time="2024-02-09T09:14:50.822647584Z" level=info msg="cleaning up dead shim" Feb 9 09:14:50.846600 env[1251]: time="2024-02-09T09:14:50.846471945Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:14:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3104 runtime=io.containerd.runc.v2\n" Feb 9 09:14:51.473489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d-rootfs.mount: Deactivated successfully. Feb 9 09:14:51.672397 env[1251]: time="2024-02-09T09:14:51.672303087Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:14:51.683214 env[1251]: time="2024-02-09T09:14:51.683167048Z" level=info msg="CreateContainer within sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\"" Feb 9 09:14:51.683487 env[1251]: time="2024-02-09T09:14:51.683471849Z" level=info msg="StartContainer for \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\"" Feb 9 09:14:51.721980 env[1251]: time="2024-02-09T09:14:51.721942068Z" level=info msg="StartContainer for \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\" returns successfully" Feb 9 09:14:51.788642 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 09:14:51.799134 kubelet[2367]: I0209 09:14:51.799120 2367 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:14:51.812204 kubelet[2367]: I0209 09:14:51.812187 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:14:51.813630 kubelet[2367]: I0209 09:14:51.813616 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:14:51.867009 kubelet[2367]: I0209 09:14:51.866992 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8q8t\" (UniqueName: \"kubernetes.io/projected/4b81d27c-52ef-40ae-be00-27d1356d15a9-kube-api-access-x8q8t\") pod \"coredns-787d4945fb-fn6qc\" (UID: \"4b81d27c-52ef-40ae-be00-27d1356d15a9\") " pod="kube-system/coredns-787d4945fb-fn6qc" Feb 9 09:14:51.867133 kubelet[2367]: I0209 09:14:51.867017 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7jj\" (UniqueName: \"kubernetes.io/projected/3c655601-ca8a-4b49-a6ae-a5dbe597b699-kube-api-access-mj7jj\") pod \"coredns-787d4945fb-jmkqg\" (UID: \"3c655601-ca8a-4b49-a6ae-a5dbe597b699\") " pod="kube-system/coredns-787d4945fb-jmkqg" Feb 9 09:14:51.867133 kubelet[2367]: I0209 09:14:51.867032 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b81d27c-52ef-40ae-be00-27d1356d15a9-config-volume\") pod \"coredns-787d4945fb-fn6qc\" (UID: \"4b81d27c-52ef-40ae-be00-27d1356d15a9\") " pod="kube-system/coredns-787d4945fb-fn6qc" Feb 9 09:14:51.867133 kubelet[2367]: I0209 09:14:51.867044 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c655601-ca8a-4b49-a6ae-a5dbe597b699-config-volume\") pod \"coredns-787d4945fb-jmkqg\" (UID: \"3c655601-ca8a-4b49-a6ae-a5dbe597b699\") " pod="kube-system/coredns-787d4945fb-jmkqg" Feb 9 09:14:51.926661 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 09:14:52.115433 env[1251]: time="2024-02-09T09:14:52.115193433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fn6qc,Uid:4b81d27c-52ef-40ae-be00-27d1356d15a9,Namespace:kube-system,Attempt:0,}" Feb 9 09:14:52.115944 env[1251]: time="2024-02-09T09:14:52.115865003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jmkqg,Uid:3c655601-ca8a-4b49-a6ae-a5dbe597b699,Namespace:kube-system,Attempt:0,}" Feb 9 09:14:52.687227 kubelet[2367]: I0209 09:14:52.687180 2367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-q8zz2" podStartSLOduration=-9.223372024167667e+09 pod.CreationTimestamp="2024-02-09 09:14:40 +0000 UTC" firstStartedPulling="2024-02-09 09:14:41.247192932 +0000 UTC m=+15.716484528" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:14:52.686935354 +0000 UTC m=+27.156227001" watchObservedRunningTime="2024-02-09 09:14:52.68710891 +0000 UTC m=+27.156400537" Feb 9 09:14:53.522291 systemd-networkd[1105]: cilium_host: Link UP Feb 9 09:14:53.522372 systemd-networkd[1105]: cilium_net: Link UP Feb 9 09:14:53.522374 systemd-networkd[1105]: cilium_net: Gained carrier Feb 9 09:14:53.522460 systemd-networkd[1105]: cilium_host: Gained carrier Feb 9 09:14:53.530273 systemd-networkd[1105]: cilium_host: Gained IPv6LL Feb 9 09:14:53.530575 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:14:53.571156 systemd-networkd[1105]: cilium_vxlan: Link UP Feb 9 09:14:53.571159 systemd-networkd[1105]: cilium_vxlan: Gained carrier Feb 9 09:14:53.691817 systemd-networkd[1105]: cilium_net: Gained IPv6LL Feb 9 09:14:53.753586 kernel: NET: Registered PF_ALG protocol family Feb 9 09:14:54.267425 systemd-networkd[1105]: lxc_health: Link UP Feb 9 09:14:54.288539 systemd-networkd[1105]: lxc_health: Gained carrier Feb 9 09:14:54.288651 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:14:54.667668 systemd-networkd[1105]: cilium_vxlan: Gained IPv6LL Feb 9 09:14:54.684032 systemd-networkd[1105]: lxcb1c24d928c9d: Link UP Feb 9 09:14:54.684114 systemd-networkd[1105]: lxcc0b7172cd8b4: Link UP Feb 9 09:14:54.723573 kernel: eth0: renamed from tmp316c4 Feb 9 09:14:54.738626 kernel: eth0: renamed from tmp2ead5 Feb 9 09:14:54.769205 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:14:54.769325 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb1c24d928c9d: link becomes ready Feb 9 09:14:54.769605 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:14:54.783538 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc0b7172cd8b4: link becomes ready Feb 9 09:14:54.783712 systemd-networkd[1105]: lxcb1c24d928c9d: Gained carrier Feb 9 09:14:54.783854 systemd-networkd[1105]: lxcc0b7172cd8b4: Gained carrier Feb 9 09:14:55.691686 systemd-networkd[1105]: lxc_health: Gained IPv6LL Feb 9 09:14:55.947729 systemd-networkd[1105]: lxcc0b7172cd8b4: Gained IPv6LL Feb 9 09:14:56.139669 systemd-networkd[1105]: lxcb1c24d928c9d: Gained IPv6LL Feb 9 09:14:57.136638 env[1251]: time="2024-02-09T09:14:57.136590012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:14:57.136638 env[1251]: time="2024-02-09T09:14:57.136616507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:14:57.136638 env[1251]: time="2024-02-09T09:14:57.136625039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:14:57.136638 env[1251]: time="2024-02-09T09:14:57.136589373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:14:57.136638 env[1251]: time="2024-02-09T09:14:57.136616567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:14:57.136638 env[1251]: time="2024-02-09T09:14:57.136626106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:14:57.137020 env[1251]: time="2024-02-09T09:14:57.136704684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ead5b5859115e3c92edbabf0b447f06838ea2733484c4ae9daabbaf0e35236d pid=3783 runtime=io.containerd.runc.v2 Feb 9 09:14:57.137020 env[1251]: time="2024-02-09T09:14:57.136712253Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/316c45b12021f58bc41db786dbe496c16c6b5456e883067c77d99d82c36ca3e0 pid=3784 runtime=io.containerd.runc.v2 Feb 9 09:14:57.183441 env[1251]: time="2024-02-09T09:14:57.183406743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fn6qc,Uid:4b81d27c-52ef-40ae-be00-27d1356d15a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ead5b5859115e3c92edbabf0b447f06838ea2733484c4ae9daabbaf0e35236d\"" Feb 9 09:14:57.184852 env[1251]: time="2024-02-09T09:14:57.184832478Z" level=info msg="CreateContainer within sandbox \"2ead5b5859115e3c92edbabf0b447f06838ea2733484c4ae9daabbaf0e35236d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:14:57.188920 env[1251]: time="2024-02-09T09:14:57.188868723Z" level=info msg="CreateContainer within sandbox \"2ead5b5859115e3c92edbabf0b447f06838ea2733484c4ae9daabbaf0e35236d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9debc65664e76fb9676117dff304a1f5cbee670f8315ad7bf431a5946d74965f\"" Feb 9 09:14:57.189161 env[1251]: time="2024-02-09T09:14:57.189104491Z" level=info msg="StartContainer for \"9debc65664e76fb9676117dff304a1f5cbee670f8315ad7bf431a5946d74965f\"" Feb 9 09:14:57.196484 env[1251]: time="2024-02-09T09:14:57.196424501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jmkqg,Uid:3c655601-ca8a-4b49-a6ae-a5dbe597b699,Namespace:kube-system,Attempt:0,} returns sandbox id \"316c45b12021f58bc41db786dbe496c16c6b5456e883067c77d99d82c36ca3e0\"" Feb 9 09:14:57.197943 env[1251]: time="2024-02-09T09:14:57.197893863Z" level=info msg="CreateContainer within sandbox \"316c45b12021f58bc41db786dbe496c16c6b5456e883067c77d99d82c36ca3e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:14:57.201928 env[1251]: time="2024-02-09T09:14:57.201876189Z" level=info msg="CreateContainer within sandbox \"316c45b12021f58bc41db786dbe496c16c6b5456e883067c77d99d82c36ca3e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d479c0e344f131128a0898b392deb84cc00b5871681232b21680df661ee2014d\"" Feb 9 09:14:57.202187 env[1251]: time="2024-02-09T09:14:57.202139417Z" level=info msg="StartContainer for \"d479c0e344f131128a0898b392deb84cc00b5871681232b21680df661ee2014d\""
Feb 9 09:14:57.227232 env[1251]: time="2024-02-09T09:14:57.227172519Z" level=info msg="StartContainer for \"9debc65664e76fb9676117dff304a1f5cbee670f8315ad7bf431a5946d74965f\" returns successfully" Feb 9 09:14:57.264206 env[1251]: time="2024-02-09T09:14:57.264145589Z" level=info msg="StartContainer for \"d479c0e344f131128a0898b392deb84cc00b5871681232b21680df661ee2014d\" returns successfully" Feb 9 09:14:57.701931 kubelet[2367]: I0209 09:14:57.701872 2367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-fn6qc" podStartSLOduration=17.701791335 pod.CreationTimestamp="2024-02-09 09:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:14:57.700064349 +0000 UTC m=+32.169356029" watchObservedRunningTime="2024-02-09 09:14:57.701791335 +0000 UTC m=+32.171082981" Feb 9 09:14:57.758091 kubelet[2367]: I0209 09:14:57.758048 2367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-jmkqg" podStartSLOduration=17.757960965 pod.CreationTimestamp="2024-02-09 09:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:14:57.739874345 +0000 UTC m=+32.209166005" watchObservedRunningTime="2024-02-09 09:14:57.757960965 +0000 UTC m=+32.227252577" Feb 9 09:18:13.942125 systemd[1]: Started sshd@5-139.178.90.101:22-103.78.143.130:47782.service. Feb 9 09:18:15.176092 sshd[4020]: Invalid user hyosung from 103.78.143.130 port 47782 Feb 9 09:18:15.182250 sshd[4020]: pam_faillock(sshd:auth): User unknown Feb 9 09:18:15.183327 sshd[4020]: pam_unix(sshd:auth): check pass; user unknown Feb 9 09:18:15.183414 sshd[4020]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.78.143.130 Feb 9 09:18:15.184335 sshd[4020]: pam_faillock(sshd:auth): User unknown Feb 9 09:18:17.237763 sshd[4020]: Failed password for invalid user hyosung from 103.78.143.130 port 47782 ssh2 Feb 9 09:18:18.726957 sshd[4020]: Received disconnect from 103.78.143.130 port 47782:11: Bye Bye [preauth] Feb 9 09:18:18.726957 sshd[4020]: Disconnected from invalid user hyosung 103.78.143.130 port 47782 [preauth] Feb 9 09:18:18.729473 systemd[1]: sshd@5-139.178.90.101:22-103.78.143.130:47782.service: Deactivated successfully. Feb 9 09:20:25.861700 systemd[1]: Started sshd@6-139.178.90.101:22-147.75.109.163:51744.service. Feb 9 09:20:25.894989 sshd[4048]: Accepted publickey for core from 147.75.109.163 port 51744 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:20:25.897943 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:20:25.908857 systemd-logind[1237]: New session 8 of user core. Feb 9 09:20:25.911166 systemd[1]: Started session-8.scope. Feb 9 09:20:26.054355 sshd[4048]: pam_unix(sshd:session): session closed for user core Feb 9 09:20:26.059709 systemd[1]: sshd@6-139.178.90.101:22-147.75.109.163:51744.service: Deactivated successfully. Feb 9 09:20:26.062185 systemd-logind[1237]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:20:26.062216 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:20:26.064547 systemd-logind[1237]: Removed session 8. Feb 9 09:20:31.056314 systemd[1]: Started sshd@7-139.178.90.101:22-147.75.109.163:51760.service.
Feb 9 09:20:31.089957 sshd[4089]: Accepted publickey for core from 147.75.109.163 port 51760 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:20:31.090876 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:20:31.094243 systemd-logind[1237]: New session 9 of user core. Feb 9 09:20:31.094916 systemd[1]: Started session-9.scope. Feb 9 09:20:31.185295 sshd[4089]: pam_unix(sshd:session): session closed for user core Feb 9 09:20:31.186795 systemd[1]: sshd@7-139.178.90.101:22-147.75.109.163:51760.service: Deactivated successfully. Feb 9 09:20:31.187367 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:20:31.187398 systemd-logind[1237]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:20:31.188019 systemd-logind[1237]: Removed session 9. Feb 9 09:20:35.834413 systemd[1]: Started sshd@8-139.178.90.101:22-85.209.11.27:40786.service. Feb 9 09:20:36.193723 systemd[1]: Started sshd@9-139.178.90.101:22-147.75.109.163:38098.service. Feb 9 09:20:36.229343 sshd[4119]: Accepted publickey for core from 147.75.109.163 port 38098 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:20:36.230224 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:20:36.233476 systemd-logind[1237]: New session 10 of user core. Feb 9 09:20:36.234074 systemd[1]: Started session-10.scope. Feb 9 09:20:36.325211 sshd[4119]: pam_unix(sshd:session): session closed for user core Feb 9 09:20:36.326818 systemd[1]: sshd@9-139.178.90.101:22-147.75.109.163:38098.service: Deactivated successfully. Feb 9 09:20:36.327496 systemd-logind[1237]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:20:36.327516 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:20:36.328209 systemd-logind[1237]: Removed session 10. Feb 9 09:20:37.325989 sshd[4117]: Invalid user admin from 85.209.11.27 port 40786 Feb 9 09:20:37.523970 sshd[4117]: pam_faillock(sshd:auth): User unknown Feb 9 09:20:37.524979 sshd[4117]: pam_unix(sshd:auth): check pass; user unknown Feb 9 09:20:37.525068 sshd[4117]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=85.209.11.27 Feb 9 09:20:37.526007 sshd[4117]: pam_faillock(sshd:auth): User unknown Feb 9 09:20:39.739731 sshd[4117]: Failed password for invalid user admin from 85.209.11.27 port 40786 ssh2 Feb 9 09:20:41.332361 systemd[1]: Started sshd@10-139.178.90.101:22-147.75.109.163:38100.service. Feb 9 09:20:41.340671 sshd[4117]: Connection closed by invalid user admin 85.209.11.27 port 40786 [preauth] Feb 9 09:20:41.341148 systemd[1]: sshd@8-139.178.90.101:22-85.209.11.27:40786.service: Deactivated successfully. Feb 9 09:20:41.364815 sshd[4149]: Accepted publickey for core from 147.75.109.163 port 38100 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:20:41.365752 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:20:41.369172 systemd-logind[1237]: New session 11 of user core. Feb 9 09:20:41.369854 systemd[1]: Started session-11.scope. Feb 9 09:20:41.459792 sshd[4149]: pam_unix(sshd:session): session closed for user core Feb 9 09:20:41.461766 systemd[1]: Started sshd@11-139.178.90.101:22-147.75.109.163:38106.service. Feb 9 09:20:41.462175 systemd[1]: sshd@10-139.178.90.101:22-147.75.109.163:38100.service: Deactivated successfully. Feb 9 09:20:41.462828 systemd-logind[1237]: Session 11 logged out. Waiting for processes to exit. 
Feb 9 09:20:41.462877 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:20:41.463530 systemd-logind[1237]: Removed session 11. Feb 9 09:20:41.497581 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 38106 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:20:41.499260 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:20:41.506615 systemd-logind[1237]: New session 12 of user core. Feb 9 09:20:41.508750 systemd[1]: Started session-12.scope. Feb 9 09:20:42.017662 sshd[4177]: pam_unix(sshd:session): session closed for user core Feb 9 09:20:42.019334 systemd[1]: Started sshd@12-139.178.90.101:22-147.75.109.163:38110.service. Feb 9 09:20:42.019835 systemd[1]: sshd@11-139.178.90.101:22-147.75.109.163:38106.service: Deactivated successfully. Feb 9 09:20:42.020371 systemd-logind[1237]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:20:42.020409 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:20:42.020908 systemd-logind[1237]: Removed session 12. Feb 9 09:20:42.052592 sshd[4201]: Accepted publickey for core from 147.75.109.163 port 38110 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:20:42.055643 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:20:42.065334 systemd-logind[1237]: New session 13 of user core. Feb 9 09:20:42.067787 systemd[1]: Started session-13.scope. Feb 9 09:20:42.207910 sshd[4201]: pam_unix(sshd:session): session closed for user core Feb 9 09:20:42.209473 systemd[1]: sshd@12-139.178.90.101:22-147.75.109.163:38110.service: Deactivated successfully. Feb 9 09:20:42.210225 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:20:42.210269 systemd-logind[1237]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:20:42.210934 systemd-logind[1237]: Removed session 13. Feb 9 09:20:47.214744 systemd[1]: Started sshd@13-139.178.90.101:22-147.75.109.163:54844.service. Feb 9 09:20:47.247527 sshd[4231]: Accepted publickey for core from 147.75.109.163 port 54844 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:20:47.248498 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:20:47.252017 systemd-logind[1237]: New session 14 of user core. Feb 9 09:20:47.252768 systemd[1]: Started session-14.scope. Feb 9 09:20:47.390354 sshd[4231]: pam_unix(sshd:session): session closed for user core Feb 9 09:20:47.391813 systemd[1]: sshd@13-139.178.90.101:22-147.75.109.163:54844.service: Deactivated successfully. Feb 9 09:20:47.392378 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:20:47.392417 systemd-logind[1237]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:20:47.392990 systemd-logind[1237]: Removed session 14. Feb 9 09:20:52.397135 systemd[1]: Started sshd@14-139.178.90.101:22-147.75.109.163:54848.service. Feb 9 09:20:52.430631 sshd[4257]: Accepted publickey for core from 147.75.109.163 port 54848 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:20:52.433848 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:20:52.444809 systemd-logind[1237]: New session 15 of user core. Feb 9 09:20:52.447177 systemd[1]: Started session-15.scope. Feb 9 09:20:52.564158 sshd[4257]: pam_unix(sshd:session): session closed for user core Feb 9 09:20:52.565621 systemd[1]: sshd@14-139.178.90.101:22-147.75.109.163:54848.service: Deactivated successfully. 
Feb 9 09:20:52.566247 systemd-logind[1237]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:20:52.566250 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:20:52.566676 systemd-logind[1237]: Removed session 15. Feb 9 09:20:57.570483 systemd[1]: Started sshd@15-139.178.90.101:22-147.75.109.163:38788.service. Feb 9 09:20:57.602981 sshd[4284]: Accepted publickey for core from 147.75.109.163 port 38788 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:20:57.603922 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:20:57.607522 systemd-logind[1237]: New session 16 of user core. Feb 9 09:20:57.608331 systemd[1]: Started session-16.scope. Feb 9 09:20:57.695097 sshd[4284]: pam_unix(sshd:session): session closed for user core Feb 9 09:20:57.696587 systemd[1]: sshd@15-139.178.90.101:22-147.75.109.163:38788.service: Deactivated successfully. Feb 9 09:20:57.697255 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:20:57.697286 systemd-logind[1237]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:20:57.697900 systemd-logind[1237]: Removed session 16. Feb 9 09:21:02.702247 systemd[1]: Started sshd@16-139.178.90.101:22-147.75.109.163:38794.service. Feb 9 09:21:02.734295 sshd[4308]: Accepted publickey for core from 147.75.109.163 port 38794 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:02.735246 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:02.738577 systemd-logind[1237]: New session 17 of user core. Feb 9 09:21:02.739310 systemd[1]: Started session-17.scope. Feb 9 09:21:02.829918 sshd[4308]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:02.831278 systemd[1]: sshd@16-139.178.90.101:22-147.75.109.163:38794.service: Deactivated successfully. Feb 9 09:21:02.831897 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:21:02.831941 systemd-logind[1237]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:21:02.832411 systemd-logind[1237]: Removed session 17. Feb 9 09:21:07.837233 systemd[1]: Started sshd@17-139.178.90.101:22-147.75.109.163:51432.service. Feb 9 09:21:07.869807 sshd[4334]: Accepted publickey for core from 147.75.109.163 port 51432 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:07.870547 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:07.873273 systemd-logind[1237]: New session 18 of user core. Feb 9 09:21:07.873725 systemd[1]: Started session-18.scope. Feb 9 09:21:07.962998 sshd[4334]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:07.964345 systemd[1]: sshd@17-139.178.90.101:22-147.75.109.163:51432.service: Deactivated successfully. Feb 9 09:21:07.965019 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:21:07.965084 systemd-logind[1237]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:21:07.965523 systemd-logind[1237]: Removed session 18. Feb 9 09:21:12.967231 systemd[1]: Started sshd@18-139.178.90.101:22-147.75.109.163:51446.service. Feb 9 09:21:12.999791 sshd[4362]: Accepted publickey for core from 147.75.109.163 port 51446 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:13.000826 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:13.004344 systemd-logind[1237]: New session 19 of user core. Feb 9 09:21:13.005126 systemd[1]: Started session-19.scope. 
Feb 9 09:21:13.094955 sshd[4362]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:13.096297 systemd[1]: sshd@18-139.178.90.101:22-147.75.109.163:51446.service: Deactivated successfully. Feb 9 09:21:13.096953 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:21:13.096976 systemd-logind[1237]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:21:13.097414 systemd-logind[1237]: Removed session 19. Feb 9 09:21:17.988888 update_engine[1239]: I0209 09:21:17.988780 1239 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 09:21:17.988888 update_engine[1239]: I0209 09:21:17.988857 1239 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 09:21:17.990869 update_engine[1239]: I0209 09:21:17.990797 1239 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 09:21:17.991786 update_engine[1239]: I0209 09:21:17.991714 1239 omaha_request_params.cc:62] Current group set to lts Feb 9 09:21:17.992092 update_engine[1239]: I0209 09:21:17.992012 1239 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 09:21:17.992092 update_engine[1239]: I0209 09:21:17.992030 1239 update_attempter.cc:643] Scheduling an action processor start. Feb 9 09:21:17.992092 update_engine[1239]: I0209 09:21:17.992063 1239 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 09:21:17.992611 update_engine[1239]: I0209 09:21:17.992123 1239 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 09:21:17.992611 update_engine[1239]: I0209 09:21:17.992259 1239 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 09:21:17.992611 update_engine[1239]: I0209 09:21:17.992275 1239 omaha_request_action.cc:271] Request: Feb 9 09:21:17.992611 update_engine[1239]: [request XML body not preserved in this capture] Feb 9 09:21:17.992611 update_engine[1239]: I0209 09:21:17.992286 1239 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:21:17.993748 locksmithd[1273]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 09:21:17.995520 update_engine[1239]: I0209 09:21:17.995446 1239 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:21:17.995756 update_engine[1239]: E0209 09:21:17.995709 1239 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:21:17.995911 update_engine[1239]: I0209 09:21:17.995867 1239 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 09:21:18.100393 systemd[1]: Started sshd@19-139.178.90.101:22-147.75.109.163:44032.service. Feb 9 09:21:18.136611 sshd[4386]: Accepted publickey for core from 147.75.109.163 port 44032 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:18.137587 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:18.140639 systemd-logind[1237]: New session 20 of user core. Feb 9 09:21:18.141271 systemd[1]: Started session-20.scope.
Feb 9 09:21:18.229905 sshd[4386]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:18.231312 systemd[1]: sshd@19-139.178.90.101:22-147.75.109.163:44032.service: Deactivated successfully. Feb 9 09:21:18.231911 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:21:18.231953 systemd-logind[1237]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:21:18.232429 systemd-logind[1237]: Removed session 20. Feb 9 09:21:23.236191 systemd[1]: Started sshd@20-139.178.90.101:22-147.75.109.163:44046.service. Feb 9 09:21:23.268614 sshd[4410]: Accepted publickey for core from 147.75.109.163 port 44046 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:23.271802 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:23.282497 systemd-logind[1237]: New session 21 of user core. Feb 9 09:21:23.285226 systemd[1]: Started session-21.scope. Feb 9 09:21:23.375222 sshd[4410]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:23.376547 systemd[1]: sshd@20-139.178.90.101:22-147.75.109.163:44046.service: Deactivated successfully. Feb 9 09:21:23.377197 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:21:23.377241 systemd-logind[1237]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:21:23.377735 systemd-logind[1237]: Removed session 21. Feb 9 09:21:27.988865 update_engine[1239]: I0209 09:21:27.988744 1239 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:21:27.989879 update_engine[1239]: I0209 09:21:27.989251 1239 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:21:27.989879 update_engine[1239]: E0209 09:21:27.989454 1239 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:21:27.989879 update_engine[1239]: I0209 09:21:27.989657 1239 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 09:21:28.382851 systemd[1]: Started sshd@21-139.178.90.101:22-147.75.109.163:55398.service. Feb 9 09:21:28.418512 sshd[4439]: Accepted publickey for core from 147.75.109.163 port 55398 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:28.419555 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:28.422780 systemd-logind[1237]: New session 22 of user core. Feb 9 09:21:28.423455 systemd[1]: Started session-22.scope. Feb 9 09:21:28.511655 sshd[4439]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:28.513314 systemd[1]: sshd@21-139.178.90.101:22-147.75.109.163:55398.service: Deactivated successfully. Feb 9 09:21:28.514002 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 09:21:28.514044 systemd-logind[1237]: Session 22 logged out. Waiting for processes to exit. Feb 9 09:21:28.514573 systemd-logind[1237]: Removed session 22. Feb 9 09:21:33.519032 systemd[1]: Started sshd@22-139.178.90.101:22-147.75.109.163:55400.service. Feb 9 09:21:33.552081 sshd[4466]: Accepted publickey for core from 147.75.109.163 port 55400 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:33.555270 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:33.566001 systemd-logind[1237]: New session 23 of user core. Feb 9 09:21:33.569699 systemd[1]: Started session-23.scope. 
Feb 9 09:21:33.673257 sshd[4466]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:33.674637 systemd[1]: sshd@22-139.178.90.101:22-147.75.109.163:55400.service: Deactivated successfully. Feb 9 09:21:33.675246 systemd-logind[1237]: Session 23 logged out. Waiting for processes to exit. Feb 9 09:21:33.675249 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 09:21:33.675874 systemd-logind[1237]: Removed session 23. Feb 9 09:21:37.988508 update_engine[1239]: I0209 09:21:37.988394 1239 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:21:37.989422 update_engine[1239]: I0209 09:21:37.988894 1239 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:21:37.989422 update_engine[1239]: E0209 09:21:37.989094 1239 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:21:37.989422 update_engine[1239]: I0209 09:21:37.989260 1239 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 09:21:38.680130 systemd[1]: Started sshd@23-139.178.90.101:22-147.75.109.163:50324.service. Feb 9 09:21:38.712363 sshd[4492]: Accepted publickey for core from 147.75.109.163 port 50324 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:38.713266 sshd[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:38.716521 systemd-logind[1237]: New session 24 of user core. Feb 9 09:21:38.717452 systemd[1]: Started session-24.scope. Feb 9 09:21:38.814242 sshd[4492]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:38.819754 systemd[1]: sshd@23-139.178.90.101:22-147.75.109.163:50324.service: Deactivated successfully. Feb 9 09:21:38.822510 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 09:21:38.822517 systemd-logind[1237]: Session 24 logged out. Waiting for processes to exit. Feb 9 09:21:38.824907 systemd-logind[1237]: Removed session 24. Feb 9 09:21:43.820229 systemd[1]: Started sshd@24-139.178.90.101:22-147.75.109.163:50338.service. Feb 9 09:21:43.852759 sshd[4520]: Accepted publickey for core from 147.75.109.163 port 50338 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:43.853776 sshd[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:43.857325 systemd-logind[1237]: New session 25 of user core. Feb 9 09:21:43.858175 systemd[1]: Started session-25.scope. Feb 9 09:21:43.948108 sshd[4520]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:43.949696 systemd[1]: sshd@24-139.178.90.101:22-147.75.109.163:50338.service: Deactivated successfully. Feb 9 09:21:43.950368 systemd-logind[1237]: Session 25 logged out. Waiting for processes to exit. Feb 9 09:21:43.950382 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 09:21:43.950946 systemd-logind[1237]: Removed session 25. 
Feb 9 09:21:47.988222 update_engine[1239]: I0209 09:21:47.988104 1239 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:21:47.989224 update_engine[1239]: I0209 09:21:47.988660 1239 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:21:47.989224 update_engine[1239]: E0209 09:21:47.988869 1239 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:21:47.989224 update_engine[1239]: I0209 09:21:47.989033 1239 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 09:21:47.989224 update_engine[1239]: I0209 09:21:47.989049 1239 omaha_request_action.cc:621] Omaha request response: Feb 9 09:21:47.989224 update_engine[1239]: E0209 09:21:47.989194 1239 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 09:21:47.989224 update_engine[1239]: I0209 09:21:47.989223 1239 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 09:21:47.989224 update_engine[1239]: I0209 09:21:47.989233 1239 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:21:47.990034 update_engine[1239]: I0209 09:21:47.989243 1239 update_attempter.cc:306] Processing Done. Feb 9 09:21:47.990034 update_engine[1239]: E0209 09:21:47.989269 1239 update_attempter.cc:619] Update failed. Feb 9 09:21:47.990034 update_engine[1239]: I0209 09:21:47.989279 1239 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 09:21:47.990034 update_engine[1239]: I0209 09:21:47.989287 1239 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 09:21:47.990034 update_engine[1239]: I0209 09:21:47.989296 1239 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 9 09:21:47.990034 update_engine[1239]: I0209 09:21:47.989451 1239 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 09:21:47.990034 update_engine[1239]: I0209 09:21:47.989503 1239 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 09:21:47.990034 update_engine[1239]: I0209 09:21:47.989513 1239 omaha_request_action.cc:271] Request: Feb 9 09:21:47.990034 update_engine[1239]: [request XML body not preserved in this capture] Feb 9 09:21:47.990034 update_engine[1239]: I0209 09:21:47.989524 1239 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:21:47.990034 update_engine[1239]: I0209 09:21:47.989861 1239 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:21:47.990034 update_engine[1239]: E0209 09:21:47.990022 1239 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:21:47.991559 update_engine[1239]: I0209 09:21:47.990154 1239 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 09:21:47.991559 update_engine[1239]: I0209 09:21:47.990170 1239 omaha_request_action.cc:621] Omaha request response: Feb 9 09:21:47.991559 update_engine[1239]: I0209 09:21:47.990180 1239 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:21:47.991559 update_engine[1239]: I0209 09:21:47.990187 1239 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:21:47.991559 update_engine[1239]: I0209 09:21:47.990195 1239 update_attempter.cc:306] Processing Done. Feb 9 09:21:47.991559 update_engine[1239]: I0209 09:21:47.990203 1239 update_attempter.cc:310] Error event sent. Feb 9 09:21:47.991559 update_engine[1239]: I0209 09:21:47.990223 1239 update_check_scheduler.cc:74] Next update check in 40m37s Feb 9 09:21:47.992231 locksmithd[1273]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 09:21:47.992231 locksmithd[1273]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 09:21:48.954543 systemd[1]: Started sshd@25-139.178.90.101:22-147.75.109.163:43586.service. Feb 9 09:21:48.987052 sshd[4547]: Accepted publickey for core from 147.75.109.163 port 43586 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:48.988118 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:48.991666 systemd-logind[1237]: New session 26 of user core. Feb 9 09:21:48.992642 systemd[1]: Started session-26.scope. Feb 9 09:21:49.076479 sshd[4547]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:49.077916 systemd[1]: sshd@25-139.178.90.101:22-147.75.109.163:43586.service: Deactivated successfully. Feb 9 09:21:49.078512 systemd-logind[1237]: Session 26 logged out. Waiting for processes to exit. Feb 9 09:21:49.078521 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 09:21:49.079150 systemd-logind[1237]: Removed session 26. Feb 9 09:21:54.079890 systemd[1]: Started sshd@26-139.178.90.101:22-147.75.109.163:43588.service.
Feb 9 09:21:54.114223 sshd[4574]: Accepted publickey for core from 147.75.109.163 port 43588 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:54.115028 sshd[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:54.117979 systemd-logind[1237]: New session 27 of user core. Feb 9 09:21:54.118819 systemd[1]: Started session-27.scope. Feb 9 09:21:54.214292 sshd[4574]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:54.219424 systemd[1]: sshd@26-139.178.90.101:22-147.75.109.163:43588.service: Deactivated successfully. Feb 9 09:21:54.221827 systemd-logind[1237]: Session 27 logged out. Waiting for processes to exit. Feb 9 09:21:54.221953 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 09:21:54.224340 systemd-logind[1237]: Removed session 27. Feb 9 09:21:59.220139 systemd[1]: Started sshd@27-139.178.90.101:22-147.75.109.163:50826.service. Feb 9 09:21:59.252645 sshd[4599]: Accepted publickey for core from 147.75.109.163 port 50826 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:21:59.253531 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:21:59.256930 systemd-logind[1237]: New session 28 of user core. Feb 9 09:21:59.257673 systemd[1]: Started session-28.scope. Feb 9 09:21:59.347895 sshd[4599]: pam_unix(sshd:session): session closed for user core Feb 9 09:21:59.349349 systemd[1]: sshd@27-139.178.90.101:22-147.75.109.163:50826.service: Deactivated successfully. Feb 9 09:21:59.349962 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 09:21:59.349970 systemd-logind[1237]: Session 28 logged out. Waiting for processes to exit. Feb 9 09:21:59.350412 systemd-logind[1237]: Removed session 28. Feb 9 09:22:04.354546 systemd[1]: Started sshd@28-139.178.90.101:22-147.75.109.163:50834.service. Feb 9 09:22:04.387624 sshd[4625]: Accepted publickey for core from 147.75.109.163 port 50834 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:04.390804 sshd[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:04.401627 systemd-logind[1237]: New session 29 of user core. Feb 9 09:22:04.404046 systemd[1]: Started session-29.scope. Feb 9 09:22:04.522785 sshd[4625]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:04.524281 systemd[1]: sshd@28-139.178.90.101:22-147.75.109.163:50834.service: Deactivated successfully. Feb 9 09:22:04.524962 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 09:22:04.524979 systemd-logind[1237]: Session 29 logged out. Waiting for processes to exit. Feb 9 09:22:04.525469 systemd-logind[1237]: Removed session 29. Feb 9 09:22:09.529513 systemd[1]: Started sshd@29-139.178.90.101:22-147.75.109.163:54944.service. Feb 9 09:22:09.561927 sshd[4651]: Accepted publickey for core from 147.75.109.163 port 54944 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:09.562854 sshd[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:09.566236 systemd-logind[1237]: New session 30 of user core. Feb 9 09:22:09.567106 systemd[1]: Started session-30.scope. Feb 9 09:22:09.655481 sshd[4651]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:09.656962 systemd[1]: sshd@29-139.178.90.101:22-147.75.109.163:54944.service: Deactivated successfully. Feb 9 09:22:09.657569 systemd-logind[1237]: Session 30 logged out. Waiting for processes to exit. 
Feb 9 09:22:09.657576 systemd[1]: session-30.scope: Deactivated successfully. Feb 9 09:22:09.658271 systemd-logind[1237]: Removed session 30. Feb 9 09:22:14.662513 systemd[1]: Started sshd@30-139.178.90.101:22-147.75.109.163:47902.service. Feb 9 09:22:14.694828 sshd[4679]: Accepted publickey for core from 147.75.109.163 port 47902 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:14.695831 sshd[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:14.699285 systemd-logind[1237]: New session 31 of user core. Feb 9 09:22:14.699964 systemd[1]: Started session-31.scope. Feb 9 09:22:14.787964 sshd[4679]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:14.789400 systemd[1]: sshd@30-139.178.90.101:22-147.75.109.163:47902.service: Deactivated successfully. Feb 9 09:22:14.790002 systemd-logind[1237]: Session 31 logged out. Waiting for processes to exit. Feb 9 09:22:14.790011 systemd[1]: session-31.scope: Deactivated successfully. Feb 9 09:22:14.790473 systemd-logind[1237]: Removed session 31. Feb 9 09:22:14.923014 systemd[1]: Started sshd@31-139.178.90.101:22-2.144.235.103:43210.service. Feb 9 09:22:14.928157 sshd[4705]: kex_exchange_identification: Connection closed by remote host Feb 9 09:22:14.928157 sshd[4705]: Connection closed by 2.144.235.103 port 43210 Feb 9 09:22:14.928364 systemd[1]: sshd@31-139.178.90.101:22-2.144.235.103:43210.service: Deactivated successfully. Feb 9 09:22:19.794734 systemd[1]: Started sshd@32-139.178.90.101:22-147.75.109.163:47908.service. Feb 9 09:22:19.826955 sshd[4709]: Accepted publickey for core from 147.75.109.163 port 47908 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:19.827791 sshd[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:19.830885 systemd-logind[1237]: New session 32 of user core. Feb 9 09:22:19.831521 systemd[1]: Started session-32.scope. Feb 9 09:22:19.921195 sshd[4709]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:19.922617 systemd[1]: sshd@32-139.178.90.101:22-147.75.109.163:47908.service: Deactivated successfully. Feb 9 09:22:19.923209 systemd[1]: session-32.scope: Deactivated successfully. Feb 9 09:22:19.923239 systemd-logind[1237]: Session 32 logged out. Waiting for processes to exit. Feb 9 09:22:19.923709 systemd-logind[1237]: Removed session 32. Feb 9 09:22:24.928087 systemd[1]: Started sshd@33-139.178.90.101:22-147.75.109.163:60260.service. Feb 9 09:22:24.960567 sshd[4736]: Accepted publickey for core from 147.75.109.163 port 60260 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:24.961396 sshd[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:24.964555 systemd-logind[1237]: New session 33 of user core. Feb 9 09:22:24.965232 systemd[1]: Started session-33.scope. Feb 9 09:22:25.056654 sshd[4736]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:25.058155 systemd[1]: sshd@33-139.178.90.101:22-147.75.109.163:60260.service: Deactivated successfully. Feb 9 09:22:25.058779 systemd-logind[1237]: Session 33 logged out. Waiting for processes to exit. Feb 9 09:22:25.058789 systemd[1]: session-33.scope: Deactivated successfully. Feb 9 09:22:25.059308 systemd-logind[1237]: Removed session 33. Feb 9 09:22:30.058895 systemd[1]: Started sshd@34-139.178.90.101:22-147.75.109.163:60264.service. 
Feb 9 09:22:30.092437 sshd[4765]: Accepted publickey for core from 147.75.109.163 port 60264 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:30.093289 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:30.096343 systemd-logind[1237]: New session 34 of user core. Feb 9 09:22:30.097135 systemd[1]: Started session-34.scope. Feb 9 09:22:30.186868 sshd[4765]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:30.188315 systemd[1]: sshd@34-139.178.90.101:22-147.75.109.163:60264.service: Deactivated successfully. Feb 9 09:22:30.188970 systemd-logind[1237]: Session 34 logged out. Waiting for processes to exit. Feb 9 09:22:30.189023 systemd[1]: session-34.scope: Deactivated successfully. Feb 9 09:22:30.189480 systemd-logind[1237]: Removed session 34. Feb 9 09:22:35.194401 systemd[1]: Started sshd@35-139.178.90.101:22-147.75.109.163:32814.service. Feb 9 09:22:35.226458 sshd[4792]: Accepted publickey for core from 147.75.109.163 port 32814 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:35.227421 sshd[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:35.230953 systemd-logind[1237]: New session 35 of user core. Feb 9 09:22:35.231961 systemd[1]: Started session-35.scope. Feb 9 09:22:35.317480 sshd[4792]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:35.318994 systemd[1]: sshd@35-139.178.90.101:22-147.75.109.163:32814.service: Deactivated successfully. Feb 9 09:22:35.319571 systemd-logind[1237]: Session 35 logged out. Waiting for processes to exit. Feb 9 09:22:35.319619 systemd[1]: session-35.scope: Deactivated successfully. Feb 9 09:22:35.320289 systemd-logind[1237]: Removed session 35. Feb 9 09:22:40.324235 systemd[1]: Started sshd@36-139.178.90.101:22-147.75.109.163:32826.service. Feb 9 09:22:40.356880 sshd[4818]: Accepted publickey for core from 147.75.109.163 port 32826 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:40.357898 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:40.361835 systemd-logind[1237]: New session 36 of user core. Feb 9 09:22:40.362892 systemd[1]: Started session-36.scope. Feb 9 09:22:40.453208 sshd[4818]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:40.454608 systemd[1]: sshd@36-139.178.90.101:22-147.75.109.163:32826.service: Deactivated successfully. Feb 9 09:22:40.455231 systemd-logind[1237]: Session 36 logged out. Waiting for processes to exit. Feb 9 09:22:40.455242 systemd[1]: session-36.scope: Deactivated successfully. Feb 9 09:22:40.455641 systemd-logind[1237]: Removed session 36. Feb 9 09:22:45.460134 systemd[1]: Started sshd@37-139.178.90.101:22-147.75.109.163:47622.service. Feb 9 09:22:45.492746 sshd[4845]: Accepted publickey for core from 147.75.109.163 port 47622 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:45.493738 sshd[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:45.497316 systemd-logind[1237]: New session 37 of user core. Feb 9 09:22:45.498024 systemd[1]: Started session-37.scope. Feb 9 09:22:45.588548 sshd[4845]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:45.590098 systemd[1]: sshd@37-139.178.90.101:22-147.75.109.163:47622.service: Deactivated successfully. Feb 9 09:22:45.590732 systemd[1]: session-37.scope: Deactivated successfully. Feb 9 09:22:45.590746 systemd-logind[1237]: Session 37 logged out. Waiting for processes to exit.
Feb 9 09:22:45.591300 systemd-logind[1237]: Removed session 37. Feb 9 09:22:50.595170 systemd[1]: Started sshd@38-139.178.90.101:22-147.75.109.163:47630.service. Feb 9 09:22:50.627997 sshd[4869]: Accepted publickey for core from 147.75.109.163 port 47630 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:50.628908 sshd[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:50.632572 systemd-logind[1237]: New session 38 of user core. Feb 9 09:22:50.633306 systemd[1]: Started session-38.scope. Feb 9 09:22:50.725422 sshd[4869]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:50.727544 systemd[1]: sshd@38-139.178.90.101:22-147.75.109.163:47630.service: Deactivated successfully. Feb 9 09:22:50.728454 systemd-logind[1237]: Session 38 logged out. Waiting for processes to exit. Feb 9 09:22:50.728495 systemd[1]: session-38.scope: Deactivated successfully. Feb 9 09:22:50.729332 systemd-logind[1237]: Removed session 38. Feb 9 09:22:55.731822 systemd[1]: Started sshd@39-139.178.90.101:22-147.75.109.163:49658.service. Feb 9 09:22:55.763981 sshd[4898]: Accepted publickey for core from 147.75.109.163 port 49658 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:22:55.765004 sshd[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:22:55.768862 systemd-logind[1237]: New session 39 of user core. Feb 9 09:22:55.769653 systemd[1]: Started session-39.scope. Feb 9 09:22:55.860845 sshd[4898]: pam_unix(sshd:session): session closed for user core Feb 9 09:22:55.862270 systemd[1]: sshd@39-139.178.90.101:22-147.75.109.163:49658.service: Deactivated successfully. Feb 9 09:22:55.862917 systemd[1]: session-39.scope: Deactivated successfully. Feb 9 09:22:55.862962 systemd-logind[1237]: Session 39 logged out. Waiting for processes to exit. Feb 9 09:22:55.863461 systemd-logind[1237]: Removed session 39. Feb 9 09:23:00.866893 systemd[1]: Started sshd@40-139.178.90.101:22-147.75.109.163:49672.service. Feb 9 09:23:00.899977 sshd[4925]: Accepted publickey for core from 147.75.109.163 port 49672 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:00.900993 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:00.904857 systemd-logind[1237]: New session 40 of user core. Feb 9 09:23:00.905632 systemd[1]: Started session-40.scope. Feb 9 09:23:01.004163 sshd[4925]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:01.009578 systemd[1]: sshd@40-139.178.90.101:22-147.75.109.163:49672.service: Deactivated successfully. Feb 9 09:23:01.012047 systemd[1]: session-40.scope: Deactivated successfully. Feb 9 09:23:01.012049 systemd-logind[1237]: Session 40 logged out. Waiting for processes to exit. Feb 9 09:23:01.014340 systemd-logind[1237]: Removed session 40. Feb 9 09:23:06.010202 systemd[1]: Started sshd@41-139.178.90.101:22-147.75.109.163:58662.service. Feb 9 09:23:06.042695 sshd[4952]: Accepted publickey for core from 147.75.109.163 port 58662 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:06.043696 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:06.047351 systemd-logind[1237]: New session 41 of user core. Feb 9 09:23:06.048142 systemd[1]: Started session-41.scope.
Feb 9 09:23:06.138730 sshd[4952]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:06.140372 systemd[1]: sshd@41-139.178.90.101:22-147.75.109.163:58662.service: Deactivated successfully. Feb 9 09:23:06.141075 systemd[1]: session-41.scope: Deactivated successfully. Feb 9 09:23:06.141086 systemd-logind[1237]: Session 41 logged out. Waiting for processes to exit. Feb 9 09:23:06.141560 systemd-logind[1237]: Removed session 41. Feb 9 09:23:11.145991 systemd[1]: Started sshd@42-139.178.90.101:22-147.75.109.163:58678.service. Feb 9 09:23:11.178935 sshd[4981]: Accepted publickey for core from 147.75.109.163 port 58678 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:11.179958 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:11.183714 systemd-logind[1237]: New session 42 of user core. Feb 9 09:23:11.184510 systemd[1]: Started session-42.scope. Feb 9 09:23:11.274497 sshd[4981]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:11.276011 systemd[1]: sshd@42-139.178.90.101:22-147.75.109.163:58678.service: Deactivated successfully. Feb 9 09:23:11.276565 systemd-logind[1237]: Session 42 logged out. Waiting for processes to exit. Feb 9 09:23:11.276572 systemd[1]: session-42.scope: Deactivated successfully. Feb 9 09:23:11.277146 systemd-logind[1237]: Removed session 42. Feb 9 09:23:16.280901 systemd[1]: Started sshd@43-139.178.90.101:22-147.75.109.163:33646.service. Feb 9 09:23:16.313477 sshd[5007]: Accepted publickey for core from 147.75.109.163 port 33646 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:16.314435 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:16.317805 systemd-logind[1237]: New session 43 of user core. Feb 9 09:23:16.318639 systemd[1]: Started session-43.scope. Feb 9 09:23:16.409680 sshd[5007]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:16.411170 systemd[1]: sshd@43-139.178.90.101:22-147.75.109.163:33646.service: Deactivated successfully. Feb 9 09:23:16.411817 systemd[1]: session-43.scope: Deactivated successfully. Feb 9 09:23:16.411831 systemd-logind[1237]: Session 43 logged out. Waiting for processes to exit. Feb 9 09:23:16.412411 systemd-logind[1237]: Removed session 43. Feb 9 09:23:21.417331 systemd[1]: Started sshd@44-139.178.90.101:22-147.75.109.163:33660.service. Feb 9 09:23:21.452838 sshd[5033]: Accepted publickey for core from 147.75.109.163 port 33660 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:21.453707 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:21.456845 systemd-logind[1237]: New session 44 of user core. Feb 9 09:23:21.457497 systemd[1]: Started session-44.scope. Feb 9 09:23:21.547090 sshd[5033]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:21.548505 systemd[1]: sshd@44-139.178.90.101:22-147.75.109.163:33660.service: Deactivated successfully. Feb 9 09:23:21.549176 systemd[1]: session-44.scope: Deactivated successfully. Feb 9 09:23:21.549218 systemd-logind[1237]: Session 44 logged out. Waiting for processes to exit. Feb 9 09:23:21.549773 systemd-logind[1237]: Removed session 44. Feb 9 09:23:26.553571 systemd[1]: Started sshd@45-139.178.90.101:22-147.75.109.163:41402.service. 
Feb 9 09:23:26.585948 sshd[5063]: Accepted publickey for core from 147.75.109.163 port 41402 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:26.587067 sshd[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:26.590658 systemd-logind[1237]: New session 45 of user core. Feb 9 09:23:26.591523 systemd[1]: Started session-45.scope. Feb 9 09:23:26.678998 sshd[5063]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:26.680319 systemd[1]: sshd@45-139.178.90.101:22-147.75.109.163:41402.service: Deactivated successfully. Feb 9 09:23:26.680995 systemd[1]: session-45.scope: Deactivated successfully. Feb 9 09:23:26.681037 systemd-logind[1237]: Session 45 logged out. Waiting for processes to exit. Feb 9 09:23:26.681477 systemd-logind[1237]: Removed session 45. Feb 9 09:23:31.686103 systemd[1]: Started sshd@46-139.178.90.101:22-147.75.109.163:41418.service. Feb 9 09:23:31.719036 sshd[5089]: Accepted publickey for core from 147.75.109.163 port 41418 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:31.720057 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:31.723530 systemd-logind[1237]: New session 46 of user core. Feb 9 09:23:31.724372 systemd[1]: Started session-46.scope. Feb 9 09:23:31.815727 sshd[5089]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:31.817208 systemd[1]: sshd@46-139.178.90.101:22-147.75.109.163:41418.service: Deactivated successfully. Feb 9 09:23:31.817830 systemd[1]: session-46.scope: Deactivated successfully. Feb 9 09:23:31.817872 systemd-logind[1237]: Session 46 logged out. Waiting for processes to exit. Feb 9 09:23:31.818478 systemd-logind[1237]: Removed session 46. Feb 9 09:23:36.821826 systemd[1]: Started sshd@47-139.178.90.101:22-147.75.109.163:34078.service. Feb 9 09:23:36.855159 sshd[5114]: Accepted publickey for core from 147.75.109.163 port 34078 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:36.858424 sshd[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:36.869268 systemd-logind[1237]: New session 47 of user core. Feb 9 09:23:36.871670 systemd[1]: Started session-47.scope. Feb 9 09:23:36.963186 sshd[5114]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:36.964662 systemd[1]: sshd@47-139.178.90.101:22-147.75.109.163:34078.service: Deactivated successfully. Feb 9 09:23:36.965290 systemd[1]: session-47.scope: Deactivated successfully. Feb 9 09:23:36.965322 systemd-logind[1237]: Session 47 logged out. Waiting for processes to exit. Feb 9 09:23:36.965813 systemd-logind[1237]: Removed session 47. Feb 9 09:23:41.969870 systemd[1]: Started sshd@48-139.178.90.101:22-147.75.109.163:34090.service. Feb 9 09:23:42.002286 sshd[5142]: Accepted publickey for core from 147.75.109.163 port 34090 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:42.003127 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:42.006266 systemd-logind[1237]: New session 48 of user core. Feb 9 09:23:42.006861 systemd[1]: Started session-48.scope. Feb 9 09:23:42.097947 sshd[5142]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:42.099715 systemd[1]: Started sshd@49-139.178.90.101:22-147.75.109.163:34092.service. Feb 9 09:23:42.100017 systemd[1]: sshd@48-139.178.90.101:22-147.75.109.163:34090.service: Deactivated successfully. 
Feb 9 09:23:42.100487 systemd-logind[1237]: Session 48 logged out. Waiting for processes to exit. Feb 9 09:23:42.100535 systemd[1]: session-48.scope: Deactivated successfully. Feb 9 09:23:42.101106 systemd-logind[1237]: Removed session 48. Feb 9 09:23:42.131768 sshd[5167]: Accepted publickey for core from 147.75.109.163 port 34092 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:42.132767 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:42.136255 systemd-logind[1237]: New session 49 of user core. Feb 9 09:23:42.137035 systemd[1]: Started session-49.scope. Feb 9 09:23:43.159189 sshd[5167]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:43.161053 systemd[1]: Started sshd@50-139.178.90.101:22-147.75.109.163:34098.service. Feb 9 09:23:43.161307 systemd[1]: sshd@49-139.178.90.101:22-147.75.109.163:34092.service: Deactivated successfully. Feb 9 09:23:43.162018 systemd-logind[1237]: Session 49 logged out. Waiting for processes to exit. Feb 9 09:23:43.162035 systemd[1]: session-49.scope: Deactivated successfully. Feb 9 09:23:43.162513 systemd-logind[1237]: Removed session 49. Feb 9 09:23:43.193894 sshd[5192]: Accepted publickey for core from 147.75.109.163 port 34098 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:43.194988 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:43.198611 systemd-logind[1237]: New session 50 of user core. Feb 9 09:23:43.199587 systemd[1]: Started session-50.scope. Feb 9 09:23:43.980614 sshd[5192]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:43.982641 systemd[1]: Started sshd@51-139.178.90.101:22-147.75.109.163:34100.service. Feb 9 09:23:43.983053 systemd[1]: sshd@50-139.178.90.101:22-147.75.109.163:34098.service: Deactivated successfully. Feb 9 09:23:43.983791 systemd-logind[1237]: Session 50 logged out. Waiting for processes to exit. Feb 9 09:23:43.983813 systemd[1]: session-50.scope: Deactivated successfully. Feb 9 09:23:43.984584 systemd-logind[1237]: Removed session 50. Feb 9 09:23:44.018508 sshd[5238]: Accepted publickey for core from 147.75.109.163 port 34100 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:44.019548 sshd[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:44.023165 systemd-logind[1237]: New session 51 of user core. Feb 9 09:23:44.023916 systemd[1]: Started session-51.scope. Feb 9 09:23:44.207172 sshd[5238]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:44.208805 systemd[1]: Started sshd@52-139.178.90.101:22-147.75.109.163:34116.service. Feb 9 09:23:44.209083 systemd[1]: sshd@51-139.178.90.101:22-147.75.109.163:34100.service: Deactivated successfully. Feb 9 09:23:44.209597 systemd-logind[1237]: Session 51 logged out. Waiting for processes to exit. Feb 9 09:23:44.209642 systemd[1]: session-51.scope: Deactivated successfully. Feb 9 09:23:44.210083 systemd-logind[1237]: Removed session 51. Feb 9 09:23:44.241694 sshd[5296]: Accepted publickey for core from 147.75.109.163 port 34116 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:44.244558 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:44.254032 systemd-logind[1237]: New session 52 of user core. Feb 9 09:23:44.256358 systemd[1]: Started session-52.scope. 
Feb 9 09:23:44.396065 sshd[5296]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:44.397570 systemd[1]: sshd@52-139.178.90.101:22-147.75.109.163:34116.service: Deactivated successfully. Feb 9 09:23:44.398286 systemd-logind[1237]: Session 52 logged out. Waiting for processes to exit. Feb 9 09:23:44.398296 systemd[1]: session-52.scope: Deactivated successfully. Feb 9 09:23:44.398864 systemd-logind[1237]: Removed session 52. Feb 9 09:23:49.402728 systemd[1]: Started sshd@53-139.178.90.101:22-147.75.109.163:42474.service. Feb 9 09:23:49.435973 sshd[5323]: Accepted publickey for core from 147.75.109.163 port 42474 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:49.439184 sshd[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:49.450068 systemd-logind[1237]: New session 53 of user core. Feb 9 09:23:49.452521 systemd[1]: Started session-53.scope. Feb 9 09:23:49.540795 sshd[5323]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:49.542236 systemd[1]: sshd@53-139.178.90.101:22-147.75.109.163:42474.service: Deactivated successfully. Feb 9 09:23:49.542875 systemd[1]: session-53.scope: Deactivated successfully. Feb 9 09:23:49.542915 systemd-logind[1237]: Session 53 logged out. Waiting for processes to exit. Feb 9 09:23:49.543397 systemd-logind[1237]: Removed session 53. Feb 9 09:23:54.548202 systemd[1]: Started sshd@54-139.178.90.101:22-147.75.109.163:36066.service. Feb 9 09:23:54.580727 sshd[5349]: Accepted publickey for core from 147.75.109.163 port 36066 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:54.581766 sshd[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:54.585500 systemd-logind[1237]: New session 54 of user core. Feb 9 09:23:54.586549 systemd[1]: Started session-54.scope. Feb 9 09:23:54.681612 sshd[5349]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:54.687053 systemd[1]: sshd@54-139.178.90.101:22-147.75.109.163:36066.service: Deactivated successfully. Feb 9 09:23:54.689618 systemd-logind[1237]: Session 54 logged out. Waiting for processes to exit. Feb 9 09:23:54.689707 systemd[1]: session-54.scope: Deactivated successfully. Feb 9 09:23:54.691998 systemd-logind[1237]: Removed session 54. Feb 9 09:23:59.688376 systemd[1]: Started sshd@55-139.178.90.101:22-147.75.109.163:36070.service. Feb 9 09:23:59.721600 sshd[5376]: Accepted publickey for core from 147.75.109.163 port 36070 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:23:59.724795 sshd[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:23:59.735653 systemd-logind[1237]: New session 55 of user core. Feb 9 09:23:59.738097 systemd[1]: Started session-55.scope. Feb 9 09:23:59.828723 sshd[5376]: pam_unix(sshd:session): session closed for user core Feb 9 09:23:59.830272 systemd[1]: sshd@55-139.178.90.101:22-147.75.109.163:36070.service: Deactivated successfully. Feb 9 09:23:59.830964 systemd[1]: session-55.scope: Deactivated successfully. Feb 9 09:23:59.830978 systemd-logind[1237]: Session 55 logged out. Waiting for processes to exit. Feb 9 09:23:59.831499 systemd-logind[1237]: Removed session 55. Feb 9 09:24:04.835200 systemd[1]: Started sshd@56-139.178.90.101:22-147.75.109.163:55280.service. 
Feb 9 09:24:04.867620 sshd[5402]: Accepted publickey for core from 147.75.109.163 port 55280 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:04.868588 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:04.871936 systemd-logind[1237]: New session 56 of user core.
Feb 9 09:24:04.872857 systemd[1]: Started session-56.scope.
Feb 9 09:24:04.958973 sshd[5402]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:04.960470 systemd[1]: sshd@56-139.178.90.101:22-147.75.109.163:55280.service: Deactivated successfully.
Feb 9 09:24:04.961175 systemd[1]: session-56.scope: Deactivated successfully.
Feb 9 09:24:04.961182 systemd-logind[1237]: Session 56 logged out. Waiting for processes to exit.
Feb 9 09:24:04.961587 systemd-logind[1237]: Removed session 56.
Feb 9 09:24:09.965512 systemd[1]: Started sshd@57-139.178.90.101:22-147.75.109.163:55288.service.
Feb 9 09:24:09.997945 sshd[5429]: Accepted publickey for core from 147.75.109.163 port 55288 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:09.998842 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:10.002115 systemd-logind[1237]: New session 57 of user core.
Feb 9 09:24:10.002798 systemd[1]: Started session-57.scope.
Feb 9 09:24:10.086503 sshd[5429]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:10.088040 systemd[1]: sshd@57-139.178.90.101:22-147.75.109.163:55288.service: Deactivated successfully.
Feb 9 09:24:10.088573 systemd-logind[1237]: Session 57 logged out. Waiting for processes to exit.
Feb 9 09:24:10.088641 systemd[1]: session-57.scope: Deactivated successfully.
Feb 9 09:24:10.089226 systemd-logind[1237]: Removed session 57.
Feb 9 09:24:15.089868 systemd[1]: Started sshd@58-139.178.90.101:22-147.75.109.163:52032.service.
Feb 9 09:24:15.125055 sshd[5458]: Accepted publickey for core from 147.75.109.163 port 52032 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:15.125967 sshd[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:15.129302 systemd-logind[1237]: New session 58 of user core.
Feb 9 09:24:15.129927 systemd[1]: Started session-58.scope.
Feb 9 09:24:15.217175 sshd[5458]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:15.218728 systemd[1]: sshd@58-139.178.90.101:22-147.75.109.163:52032.service: Deactivated successfully.
Feb 9 09:24:15.219389 systemd-logind[1237]: Session 58 logged out. Waiting for processes to exit.
Feb 9 09:24:15.219391 systemd[1]: session-58.scope: Deactivated successfully.
Feb 9 09:24:15.220115 systemd-logind[1237]: Removed session 58.
Feb 9 09:24:20.224107 systemd[1]: Started sshd@59-139.178.90.101:22-147.75.109.163:52044.service.
Feb 9 09:24:20.256400 sshd[5484]: Accepted publickey for core from 147.75.109.163 port 52044 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:20.257406 sshd[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:20.260935 systemd-logind[1237]: New session 59 of user core.
Feb 9 09:24:20.261638 systemd[1]: Started session-59.scope.
Feb 9 09:24:20.353911 sshd[5484]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:20.355507 systemd[1]: sshd@59-139.178.90.101:22-147.75.109.163:52044.service: Deactivated successfully.
Feb 9 09:24:20.356280 systemd[1]: session-59.scope: Deactivated successfully.
Feb 9 09:24:20.356285 systemd-logind[1237]: Session 59 logged out. Waiting for processes to exit.
Feb 9 09:24:20.356853 systemd-logind[1237]: Removed session 59.
Feb 9 09:24:25.360335 systemd[1]: Started sshd@60-139.178.90.101:22-147.75.109.163:50546.service.
Feb 9 09:24:25.392515 sshd[5508]: Accepted publickey for core from 147.75.109.163 port 50546 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:25.393511 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:25.396860 systemd-logind[1237]: New session 60 of user core.
Feb 9 09:24:25.397606 systemd[1]: Started session-60.scope.
Feb 9 09:24:25.484913 sshd[5508]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:25.486271 systemd[1]: sshd@60-139.178.90.101:22-147.75.109.163:50546.service: Deactivated successfully.
Feb 9 09:24:25.486875 systemd[1]: session-60.scope: Deactivated successfully.
Feb 9 09:24:25.486887 systemd-logind[1237]: Session 60 logged out. Waiting for processes to exit.
Feb 9 09:24:25.487383 systemd-logind[1237]: Removed session 60.
Feb 9 09:24:30.491270 systemd[1]: Started sshd@61-139.178.90.101:22-147.75.109.163:50554.service.
Feb 9 09:24:30.523973 sshd[5535]: Accepted publickey for core from 147.75.109.163 port 50554 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:30.524970 sshd[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:30.528430 systemd-logind[1237]: New session 61 of user core.
Feb 9 09:24:30.529135 systemd[1]: Started session-61.scope.
Feb 9 09:24:30.615505 sshd[5535]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:30.617035 systemd[1]: sshd@61-139.178.90.101:22-147.75.109.163:50554.service: Deactivated successfully.
Feb 9 09:24:30.617606 systemd-logind[1237]: Session 61 logged out. Waiting for processes to exit.
Feb 9 09:24:30.617619 systemd[1]: session-61.scope: Deactivated successfully.
Feb 9 09:24:30.618247 systemd-logind[1237]: Removed session 61.
Feb 9 09:24:35.620858 systemd[1]: Started sshd@62-139.178.90.101:22-147.75.109.163:53588.service.
Feb 9 09:24:35.653324 sshd[5561]: Accepted publickey for core from 147.75.109.163 port 53588 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:35.654208 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:35.657380 systemd-logind[1237]: New session 62 of user core.
Feb 9 09:24:35.658080 systemd[1]: Started session-62.scope.
Feb 9 09:24:35.744463 sshd[5561]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:35.746226 systemd[1]: sshd@62-139.178.90.101:22-147.75.109.163:53588.service: Deactivated successfully.
Feb 9 09:24:35.746983 systemd[1]: session-62.scope: Deactivated successfully.
Feb 9 09:24:35.747038 systemd-logind[1237]: Session 62 logged out. Waiting for processes to exit.
Feb 9 09:24:35.747566 systemd-logind[1237]: Removed session 62.
Feb 9 09:24:40.750788 systemd[1]: Started sshd@63-139.178.90.101:22-147.75.109.163:53596.service.
Feb 9 09:24:40.783355 sshd[5586]: Accepted publickey for core from 147.75.109.163 port 53596 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:40.784260 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:40.787533 systemd-logind[1237]: New session 63 of user core.
Feb 9 09:24:40.788286 systemd[1]: Started session-63.scope.
Feb 9 09:24:40.873589 sshd[5586]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:40.875143 systemd[1]: sshd@63-139.178.90.101:22-147.75.109.163:53596.service: Deactivated successfully.
Feb 9 09:24:40.875829 systemd[1]: session-63.scope: Deactivated successfully.
Feb 9 09:24:40.875873 systemd-logind[1237]: Session 63 logged out. Waiting for processes to exit.
Feb 9 09:24:40.876421 systemd-logind[1237]: Removed session 63.
Feb 9 09:24:45.880194 systemd[1]: Started sshd@64-139.178.90.101:22-147.75.109.163:51070.service.
Feb 9 09:24:45.912697 sshd[5613]: Accepted publickey for core from 147.75.109.163 port 51070 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:45.913677 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:45.917307 systemd-logind[1237]: New session 64 of user core.
Feb 9 09:24:45.918011 systemd[1]: Started session-64.scope.
Feb 9 09:24:46.007308 sshd[5613]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:46.008982 systemd[1]: sshd@64-139.178.90.101:22-147.75.109.163:51070.service: Deactivated successfully.
Feb 9 09:24:46.009755 systemd[1]: session-64.scope: Deactivated successfully.
Feb 9 09:24:46.009800 systemd-logind[1237]: Session 64 logged out. Waiting for processes to exit.
Feb 9 09:24:46.010508 systemd-logind[1237]: Removed session 64.
Feb 9 09:24:51.014374 systemd[1]: Started sshd@65-139.178.90.101:22-147.75.109.163:51078.service.
Feb 9 09:24:51.046739 sshd[5638]: Accepted publickey for core from 147.75.109.163 port 51078 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:51.047776 sshd[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:51.051270 systemd-logind[1237]: New session 65 of user core.
Feb 9 09:24:51.052057 systemd[1]: Started session-65.scope.
Feb 9 09:24:51.140208 sshd[5638]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:51.141731 systemd[1]: sshd@65-139.178.90.101:22-147.75.109.163:51078.service: Deactivated successfully.
Feb 9 09:24:51.142426 systemd[1]: session-65.scope: Deactivated successfully.
Feb 9 09:24:51.142461 systemd-logind[1237]: Session 65 logged out. Waiting for processes to exit.
Feb 9 09:24:51.143177 systemd-logind[1237]: Removed session 65.
Feb 9 09:24:56.147707 systemd[1]: Started sshd@66-139.178.90.101:22-147.75.109.163:54378.service.
Feb 9 09:24:56.183376 sshd[5664]: Accepted publickey for core from 147.75.109.163 port 54378 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:24:56.184235 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:24:56.187458 systemd-logind[1237]: New session 66 of user core.
Feb 9 09:24:56.188065 systemd[1]: Started session-66.scope.
Feb 9 09:24:56.272661 sshd[5664]: pam_unix(sshd:session): session closed for user core
Feb 9 09:24:56.274279 systemd[1]: sshd@66-139.178.90.101:22-147.75.109.163:54378.service: Deactivated successfully.
Feb 9 09:24:56.274946 systemd-logind[1237]: Session 66 logged out. Waiting for processes to exit.
Feb 9 09:24:56.274959 systemd[1]: session-66.scope: Deactivated successfully.
Feb 9 09:24:56.275461 systemd-logind[1237]: Removed session 66.
Feb 9 09:25:01.279884 systemd[1]: Started sshd@67-139.178.90.101:22-147.75.109.163:54380.service.
Feb 9 09:25:01.311894 sshd[5687]: Accepted publickey for core from 147.75.109.163 port 54380 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:01.312875 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:01.316553 systemd-logind[1237]: New session 67 of user core.
Feb 9 09:25:01.317322 systemd[1]: Started session-67.scope.
Feb 9 09:25:01.407408 sshd[5687]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:01.408927 systemd[1]: sshd@67-139.178.90.101:22-147.75.109.163:54380.service: Deactivated successfully.
Feb 9 09:25:01.409535 systemd-logind[1237]: Session 67 logged out. Waiting for processes to exit.
Feb 9 09:25:01.409579 systemd[1]: session-67.scope: Deactivated successfully.
Feb 9 09:25:01.410184 systemd-logind[1237]: Removed session 67.
Feb 9 09:25:06.413833 systemd[1]: Started sshd@68-139.178.90.101:22-147.75.109.163:47290.service.
Feb 9 09:25:06.446105 sshd[5713]: Accepted publickey for core from 147.75.109.163 port 47290 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:06.446798 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:06.449077 systemd-logind[1237]: New session 68 of user core.
Feb 9 09:25:06.449653 systemd[1]: Started session-68.scope.
Feb 9 09:25:06.535737 sshd[5713]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:06.537324 systemd[1]: sshd@68-139.178.90.101:22-147.75.109.163:47290.service: Deactivated successfully.
Feb 9 09:25:06.538107 systemd[1]: session-68.scope: Deactivated successfully.
Feb 9 09:25:06.538121 systemd-logind[1237]: Session 68 logged out. Waiting for processes to exit.
Feb 9 09:25:06.538803 systemd-logind[1237]: Removed session 68.
Feb 9 09:25:11.542175 systemd[1]: Started sshd@69-139.178.90.101:22-147.75.109.163:47294.service.
Feb 9 09:25:11.574692 sshd[5741]: Accepted publickey for core from 147.75.109.163 port 47294 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:11.575411 sshd[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:11.578005 systemd-logind[1237]: New session 69 of user core.
Feb 9 09:25:11.578449 systemd[1]: Started session-69.scope.
Feb 9 09:25:11.661334 sshd[5741]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:11.662946 systemd[1]: sshd@69-139.178.90.101:22-147.75.109.163:47294.service: Deactivated successfully.
Feb 9 09:25:11.663573 systemd-logind[1237]: Session 69 logged out. Waiting for processes to exit.
Feb 9 09:25:11.663613 systemd[1]: session-69.scope: Deactivated successfully.
Feb 9 09:25:11.664352 systemd-logind[1237]: Removed session 69.
Feb 9 09:25:16.668165 systemd[1]: Started sshd@70-139.178.90.101:22-147.75.109.163:51294.service.
Feb 9 09:25:16.700301 sshd[5766]: Accepted publickey for core from 147.75.109.163 port 51294 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:16.701260 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:16.704505 systemd-logind[1237]: New session 70 of user core.
Feb 9 09:25:16.705289 systemd[1]: Started session-70.scope.
Feb 9 09:25:16.803298 sshd[5766]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:16.804792 systemd[1]: sshd@70-139.178.90.101:22-147.75.109.163:51294.service: Deactivated successfully.
Feb 9 09:25:16.805423 systemd-logind[1237]: Session 70 logged out. Waiting for processes to exit.
Feb 9 09:25:16.805423 systemd[1]: session-70.scope: Deactivated successfully.
Feb 9 09:25:16.805957 systemd-logind[1237]: Removed session 70.
Feb 9 09:25:21.810094 systemd[1]: Started sshd@71-139.178.90.101:22-147.75.109.163:51298.service.
Feb 9 09:25:21.843061 sshd[5796]: Accepted publickey for core from 147.75.109.163 port 51298 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:21.844307 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:21.848279 systemd-logind[1237]: New session 71 of user core.
Feb 9 09:25:21.849100 systemd[1]: Started session-71.scope.
Feb 9 09:25:21.939623 sshd[5796]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:21.941114 systemd[1]: sshd@71-139.178.90.101:22-147.75.109.163:51298.service: Deactivated successfully.
Feb 9 09:25:21.941715 systemd-logind[1237]: Session 71 logged out. Waiting for processes to exit.
Feb 9 09:25:21.941766 systemd[1]: session-71.scope: Deactivated successfully.
Feb 9 09:25:21.942310 systemd-logind[1237]: Removed session 71.
Feb 9 09:25:26.946783 systemd[1]: Started sshd@72-139.178.90.101:22-147.75.109.163:58334.service.
Feb 9 09:25:26.980266 sshd[5825]: Accepted publickey for core from 147.75.109.163 port 58334 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:26.983328 sshd[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:26.993742 systemd-logind[1237]: New session 72 of user core.
Feb 9 09:25:26.996591 systemd[1]: Started session-72.scope.
Feb 9 09:25:27.085864 sshd[5825]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:27.087263 systemd[1]: sshd@72-139.178.90.101:22-147.75.109.163:58334.service: Deactivated successfully.
Feb 9 09:25:27.087883 systemd[1]: session-72.scope: Deactivated successfully.
Feb 9 09:25:27.087899 systemd-logind[1237]: Session 72 logged out. Waiting for processes to exit.
Feb 9 09:25:27.088396 systemd-logind[1237]: Removed session 72.
Feb 9 09:25:32.092707 systemd[1]: Started sshd@73-139.178.90.101:22-147.75.109.163:58348.service.
Feb 9 09:25:32.126032 sshd[5849]: Accepted publickey for core from 147.75.109.163 port 58348 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:32.128990 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:32.138863 systemd-logind[1237]: New session 73 of user core.
Feb 9 09:25:32.141117 systemd[1]: Started session-73.scope.
Feb 9 09:25:32.272206 sshd[5849]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:32.274016 systemd[1]: sshd@73-139.178.90.101:22-147.75.109.163:58348.service: Deactivated successfully.
Feb 9 09:25:32.274854 systemd[1]: session-73.scope: Deactivated successfully.
Feb 9 09:25:32.274858 systemd-logind[1237]: Session 73 logged out. Waiting for processes to exit.
Feb 9 09:25:32.275482 systemd-logind[1237]: Removed session 73.
Feb 9 09:25:37.279804 systemd[1]: Started sshd@74-139.178.90.101:22-147.75.109.163:42532.service.
Feb 9 09:25:37.312316 sshd[5875]: Accepted publickey for core from 147.75.109.163 port 42532 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:37.313349 sshd[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:37.316577 systemd-logind[1237]: New session 74 of user core.
Feb 9 09:25:37.317591 systemd[1]: Started session-74.scope.
Feb 9 09:25:37.407463 sshd[5875]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:37.409038 systemd[1]: sshd@74-139.178.90.101:22-147.75.109.163:42532.service: Deactivated successfully.
Feb 9 09:25:37.409624 systemd[1]: session-74.scope: Deactivated successfully.
Feb 9 09:25:37.409651 systemd-logind[1237]: Session 74 logged out. Waiting for processes to exit.
Feb 9 09:25:37.410284 systemd-logind[1237]: Removed session 74.
Feb 9 09:25:42.414841 systemd[1]: Started sshd@75-139.178.90.101:22-147.75.109.163:42544.service.
Feb 9 09:25:42.446954 sshd[5903]: Accepted publickey for core from 147.75.109.163 port 42544 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:42.447869 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:42.451518 systemd-logind[1237]: New session 75 of user core.
Feb 9 09:25:42.452240 systemd[1]: Started session-75.scope.
Feb 9 09:25:42.539592 sshd[5903]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:42.541162 systemd[1]: sshd@75-139.178.90.101:22-147.75.109.163:42544.service: Deactivated successfully.
Feb 9 09:25:42.541836 systemd[1]: session-75.scope: Deactivated successfully.
Feb 9 09:25:42.541850 systemd-logind[1237]: Session 75 logged out. Waiting for processes to exit.
Feb 9 09:25:42.542367 systemd-logind[1237]: Removed session 75.
Feb 9 09:25:47.546643 systemd[1]: Started sshd@76-139.178.90.101:22-147.75.109.163:32820.service.
Feb 9 09:25:47.579028 sshd[5929]: Accepted publickey for core from 147.75.109.163 port 32820 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:47.579940 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:47.583345 systemd-logind[1237]: New session 76 of user core.
Feb 9 09:25:47.584257 systemd[1]: Started session-76.scope.
Feb 9 09:25:47.670263 sshd[5929]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:47.671745 systemd[1]: sshd@76-139.178.90.101:22-147.75.109.163:32820.service: Deactivated successfully.
Feb 9 09:25:47.672390 systemd-logind[1237]: Session 76 logged out. Waiting for processes to exit.
Feb 9 09:25:47.672460 systemd[1]: session-76.scope: Deactivated successfully.
Feb 9 09:25:47.673228 systemd-logind[1237]: Removed session 76.
Feb 9 09:25:52.674351 systemd[1]: Started sshd@77-139.178.90.101:22-147.75.109.163:32822.service.
Feb 9 09:25:52.709971 sshd[5955]: Accepted publickey for core from 147.75.109.163 port 32822 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:52.710814 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:52.714086 systemd-logind[1237]: New session 77 of user core.
Feb 9 09:25:52.714873 systemd[1]: Started session-77.scope.
Feb 9 09:25:52.799901 sshd[5955]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:52.801403 systemd[1]: sshd@77-139.178.90.101:22-147.75.109.163:32822.service: Deactivated successfully.
Feb 9 09:25:52.802096 systemd[1]: session-77.scope: Deactivated successfully.
Feb 9 09:25:52.802117 systemd-logind[1237]: Session 77 logged out. Waiting for processes to exit.
Feb 9 09:25:52.802656 systemd-logind[1237]: Removed session 77.
Feb 9 09:25:53.579223 systemd[1]: Started sshd@78-139.178.90.101:22-218.92.0.43:19322.service.
Feb 9 09:25:53.749634 sshd[5981]: Unable to negotiate with 218.92.0.43 port 19322: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Feb 9 09:25:53.751424 systemd[1]: sshd@78-139.178.90.101:22-218.92.0.43:19322.service: Deactivated successfully.
Feb 9 09:25:56.527393 systemd[1]: Started sshd@79-139.178.90.101:22-103.78.143.130:50768.service.
Feb 9 09:25:57.806880 systemd[1]: Started sshd@80-139.178.90.101:22-147.75.109.163:57482.service.
Feb 9 09:25:57.839717 sshd[5987]: Accepted publickey for core from 147.75.109.163 port 57482 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:25:57.842927 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:25:57.853695 systemd-logind[1237]: New session 78 of user core.
Feb 9 09:25:57.856747 systemd[1]: Started session-78.scope.
Feb 9 09:25:57.862697 sshd[5985]: Invalid user github from 103.78.143.130 port 50768
Feb 9 09:25:57.864343 sshd[5985]: pam_faillock(sshd:auth): User unknown
Feb 9 09:25:57.864538 sshd[5985]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 09:25:57.864553 sshd[5985]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.78.143.130
Feb 9 09:25:57.864822 sshd[5985]: pam_faillock(sshd:auth): User unknown
Feb 9 09:25:57.950518 sshd[5987]: pam_unix(sshd:session): session closed for user core
Feb 9 09:25:57.951942 systemd[1]: sshd@80-139.178.90.101:22-147.75.109.163:57482.service: Deactivated successfully.
Feb 9 09:25:57.952516 systemd-logind[1237]: Session 78 logged out. Waiting for processes to exit.
Feb 9 09:25:57.952521 systemd[1]: session-78.scope: Deactivated successfully.
Feb 9 09:25:57.953088 systemd-logind[1237]: Removed session 78.
Feb 9 09:26:00.007857 sshd[5985]: Failed password for invalid user github from 103.78.143.130 port 50768 ssh2
Feb 9 09:26:01.951203 sshd[5985]: Received disconnect from 103.78.143.130 port 50768:11: Bye Bye [preauth]
Feb 9 09:26:01.951203 sshd[5985]: Disconnected from invalid user github 103.78.143.130 port 50768 [preauth]
Feb 9 09:26:01.953651 systemd[1]: sshd@79-139.178.90.101:22-103.78.143.130:50768.service: Deactivated successfully.
Feb 9 09:26:02.958364 systemd[1]: Started sshd@81-139.178.90.101:22-147.75.109.163:57494.service.
Feb 9 09:26:02.996067 sshd[6016]: Accepted publickey for core from 147.75.109.163 port 57494 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:02.996957 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:03.000382 systemd-logind[1237]: New session 79 of user core.
Feb 9 09:26:03.001018 systemd[1]: Started session-79.scope.
Feb 9 09:26:03.086634 sshd[6016]: pam_unix(sshd:session): session closed for user core
Feb 9 09:26:03.088355 systemd[1]: sshd@81-139.178.90.101:22-147.75.109.163:57494.service: Deactivated successfully.
Feb 9 09:26:03.089130 systemd[1]: session-79.scope: Deactivated successfully.
Feb 9 09:26:03.089173 systemd-logind[1237]: Session 79 logged out. Waiting for processes to exit.
Feb 9 09:26:03.089796 systemd-logind[1237]: Removed session 79.
Feb 9 09:26:08.090742 systemd[1]: Started sshd@82-139.178.90.101:22-147.75.109.163:39698.service.
Feb 9 09:26:08.126962 sshd[6042]: Accepted publickey for core from 147.75.109.163 port 39698 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:08.127810 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:08.130968 systemd-logind[1237]: New session 80 of user core.
Feb 9 09:26:08.131592 systemd[1]: Started session-80.scope.
Feb 9 09:26:08.216538 sshd[6042]: pam_unix(sshd:session): session closed for user core
Feb 9 09:26:08.218084 systemd[1]: sshd@82-139.178.90.101:22-147.75.109.163:39698.service: Deactivated successfully.
Feb 9 09:26:08.218680 systemd-logind[1237]: Session 80 logged out. Waiting for processes to exit.
Feb 9 09:26:08.218691 systemd[1]: session-80.scope: Deactivated successfully.
Feb 9 09:26:08.219353 systemd-logind[1237]: Removed session 80.
Feb 9 09:26:13.223836 systemd[1]: Started sshd@83-139.178.90.101:22-147.75.109.163:39704.service.
Feb 9 09:26:13.256137 sshd[6071]: Accepted publickey for core from 147.75.109.163 port 39704 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:13.257140 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:13.260572 systemd-logind[1237]: New session 81 of user core.
Feb 9 09:26:13.261358 systemd[1]: Started session-81.scope.
Feb 9 09:26:13.350628 sshd[6071]: pam_unix(sshd:session): session closed for user core
Feb 9 09:26:13.352151 systemd[1]: sshd@83-139.178.90.101:22-147.75.109.163:39704.service: Deactivated successfully.
Feb 9 09:26:13.352838 systemd[1]: session-81.scope: Deactivated successfully.
Feb 9 09:26:13.352845 systemd-logind[1237]: Session 81 logged out. Waiting for processes to exit.
Feb 9 09:26:13.353371 systemd-logind[1237]: Removed session 81.
Feb 9 09:26:18.357358 systemd[1]: Started sshd@84-139.178.90.101:22-147.75.109.163:37768.service.
Feb 9 09:26:18.389937 sshd[6096]: Accepted publickey for core from 147.75.109.163 port 37768 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:18.390772 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:18.393961 systemd-logind[1237]: New session 82 of user core.
Feb 9 09:26:18.394573 systemd[1]: Started session-82.scope.
Feb 9 09:26:18.479728 sshd[6096]: pam_unix(sshd:session): session closed for user core
Feb 9 09:26:18.481267 systemd[1]: sshd@84-139.178.90.101:22-147.75.109.163:37768.service: Deactivated successfully.
Feb 9 09:26:18.481931 systemd[1]: session-82.scope: Deactivated successfully.
Feb 9 09:26:18.481939 systemd-logind[1237]: Session 82 logged out. Waiting for processes to exit.
Feb 9 09:26:18.482472 systemd-logind[1237]: Removed session 82.
Feb 9 09:26:23.486358 systemd[1]: Started sshd@85-139.178.90.101:22-147.75.109.163:37774.service.
Feb 9 09:26:23.518418 sshd[6122]: Accepted publickey for core from 147.75.109.163 port 37774 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:23.519374 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:23.522776 systemd-logind[1237]: New session 83 of user core.
Feb 9 09:26:23.523546 systemd[1]: Started session-83.scope.
Feb 9 09:26:23.611148 sshd[6122]: pam_unix(sshd:session): session closed for user core
Feb 9 09:26:23.612543 systemd[1]: sshd@85-139.178.90.101:22-147.75.109.163:37774.service: Deactivated successfully.
Feb 9 09:26:23.613221 systemd[1]: session-83.scope: Deactivated successfully.
Feb 9 09:26:23.613255 systemd-logind[1237]: Session 83 logged out. Waiting for processes to exit.
Feb 9 09:26:23.613852 systemd-logind[1237]: Removed session 83.
Feb 9 09:26:28.618361 systemd[1]: Started sshd@86-139.178.90.101:22-147.75.109.163:39656.service.
Feb 9 09:26:28.651544 sshd[6150]: Accepted publickey for core from 147.75.109.163 port 39656 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:28.652615 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:28.656426 systemd-logind[1237]: New session 84 of user core.
Feb 9 09:26:28.657311 systemd[1]: Started session-84.scope.
Feb 9 09:26:28.746082 sshd[6150]: pam_unix(sshd:session): session closed for user core
Feb 9 09:26:28.747886 systemd[1]: sshd@86-139.178.90.101:22-147.75.109.163:39656.service: Deactivated successfully.
Feb 9 09:26:28.748629 systemd-logind[1237]: Session 84 logged out. Waiting for processes to exit.
Feb 9 09:26:28.748642 systemd[1]: session-84.scope: Deactivated successfully.
Feb 9 09:26:28.749363 systemd-logind[1237]: Removed session 84.
Feb 9 09:26:33.753487 systemd[1]: Started sshd@87-139.178.90.101:22-147.75.109.163:39662.service.
Feb 9 09:26:33.785953 sshd[6177]: Accepted publickey for core from 147.75.109.163 port 39662 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:33.786847 sshd[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:33.790264 systemd-logind[1237]: New session 85 of user core.
Feb 9 09:26:33.790944 systemd[1]: Started session-85.scope.
Feb 9 09:26:33.876908 sshd[6177]: pam_unix(sshd:session): session closed for user core
Feb 9 09:26:33.878397 systemd[1]: sshd@87-139.178.90.101:22-147.75.109.163:39662.service: Deactivated successfully.
Feb 9 09:26:33.879119 systemd[1]: session-85.scope: Deactivated successfully.
Feb 9 09:26:33.879157 systemd-logind[1237]: Session 85 logged out. Waiting for processes to exit.
Feb 9 09:26:33.879662 systemd-logind[1237]: Removed session 85.
Feb 9 09:26:38.883134 systemd[1]: Started sshd@88-139.178.90.101:22-147.75.109.163:38988.service.
Feb 9 09:26:38.915273 sshd[6203]: Accepted publickey for core from 147.75.109.163 port 38988 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:38.916182 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:38.919508 systemd-logind[1237]: New session 86 of user core.
Feb 9 09:26:38.920191 systemd[1]: Started session-86.scope.
Feb 9 09:26:39.008543 sshd[6203]: pam_unix(sshd:session): session closed for user core
Feb 9 09:26:39.010018 systemd[1]: sshd@88-139.178.90.101:22-147.75.109.163:38988.service: Deactivated successfully.
Feb 9 09:26:39.010578 systemd-logind[1237]: Session 86 logged out. Waiting for processes to exit.
Feb 9 09:26:39.010618 systemd[1]: session-86.scope: Deactivated successfully.
Feb 9 09:26:39.011274 systemd-logind[1237]: Removed session 86.
Feb 9 09:26:44.017002 systemd[1]: Started sshd@89-139.178.90.101:22-147.75.109.163:38990.service.
Feb 9 09:26:44.052898 sshd[6231]: Accepted publickey for core from 147.75.109.163 port 38990 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:44.053843 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:44.057041 systemd-logind[1237]: New session 87 of user core.
Feb 9 09:26:44.057717 systemd[1]: Started session-87.scope.
Feb 9 09:26:44.145751 sshd[6231]: pam_unix(sshd:session): session closed for user core
Feb 9 09:26:44.147836 systemd[1]: Started sshd@90-139.178.90.101:22-147.75.109.163:39000.service.
Feb 9 09:26:44.148276 systemd[1]: sshd@89-139.178.90.101:22-147.75.109.163:38990.service: Deactivated successfully.
Feb 9 09:26:44.149061 systemd-logind[1237]: Session 87 logged out. Waiting for processes to exit.
Feb 9 09:26:44.149062 systemd[1]: session-87.scope: Deactivated successfully.
Feb 9 09:26:44.149820 systemd-logind[1237]: Removed session 87.
Feb 9 09:26:44.185214 sshd[6255]: Accepted publickey for core from 147.75.109.163 port 39000 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 09:26:44.188118 sshd[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:26:44.198176 systemd-logind[1237]: New session 88 of user core.
Feb 9 09:26:44.200361 systemd[1]: Started session-88.scope.
Feb 9 09:26:45.579654 env[1251]: time="2024-02-09T09:26:45.579628315Z" level=info msg="StopContainer for \"c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b\" with timeout 30 (s)"
Feb 9 09:26:45.579902 env[1251]: time="2024-02-09T09:26:45.579820854Z" level=info msg="Stop container \"c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b\" with signal terminated"
Feb 9 09:26:45.602689 env[1251]: time="2024-02-09T09:26:45.602626545Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 09:26:45.605543 env[1251]: time="2024-02-09T09:26:45.605529045Z" level=info msg="StopContainer for \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\" with timeout 1 (s)"
Feb 9 09:26:45.605684 env[1251]: time="2024-02-09T09:26:45.605643549Z" level=info msg="Stop container \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\" with signal terminated"
Feb 9 09:26:45.608810 systemd-networkd[1105]: lxc_health: Link DOWN
Feb 9 09:26:45.608813 systemd-networkd[1105]: lxc_health: Lost carrier
Feb 9 09:26:45.629782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b-rootfs.mount: Deactivated successfully.
Feb 9 09:26:45.632360 env[1251]: time="2024-02-09T09:26:45.632299828Z" level=info msg="shim disconnected" id=c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b
Feb 9 09:26:45.632360 env[1251]: time="2024-02-09T09:26:45.632333659Z" level=warning msg="cleaning up after shim disconnected" id=c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b namespace=k8s.io
Feb 9 09:26:45.632360 env[1251]: time="2024-02-09T09:26:45.632343186Z" level=info msg="cleaning up dead shim"
Feb 9 09:26:45.637582 env[1251]: time="2024-02-09T09:26:45.637528231Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6322 runtime=io.containerd.runc.v2\n"
Feb 9 09:26:45.638523 env[1251]: time="2024-02-09T09:26:45.638478729Z" level=info msg="StopContainer for \"c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b\" returns successfully"
Feb 9 09:26:45.638902 env[1251]: time="2024-02-09T09:26:45.638875423Z" level=info msg="StopPodSandbox for \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\""
Feb 9 09:26:45.638966 env[1251]: time="2024-02-09T09:26:45.638924502Z" level=info msg="Container to stop \"c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:26:45.640726 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5-shm.mount: Deactivated successfully.
Feb 9 09:26:45.714188 env[1251]: time="2024-02-09T09:26:45.714081623Z" level=info msg="shim disconnected" id=959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5
Feb 9 09:26:45.714630 env[1251]: time="2024-02-09T09:26:45.714198245Z" level=warning msg="cleaning up after shim disconnected" id=959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5 namespace=k8s.io
Feb 9 09:26:45.714630 env[1251]: time="2024-02-09T09:26:45.714230286Z" level=info msg="cleaning up dead shim"
Feb 9 09:26:45.716387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5-rootfs.mount: Deactivated successfully.
Feb 9 09:26:45.727964 env[1251]: time="2024-02-09T09:26:45.727853899Z" level=info msg="shim disconnected" id=f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f
Feb 9 09:26:45.728418 env[1251]: time="2024-02-09T09:26:45.727968697Z" level=warning msg="cleaning up after shim disconnected" id=f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f namespace=k8s.io
Feb 9 09:26:45.728418 env[1251]: time="2024-02-09T09:26:45.728007259Z" level=info msg="cleaning up dead shim"
Feb 9 09:26:45.730098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f-rootfs.mount: Deactivated successfully.
Feb 9 09:26:45.732714 env[1251]: time="2024-02-09T09:26:45.732643944Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6370 runtime=io.containerd.runc.v2\n"
Feb 9 09:26:45.733328 env[1251]: time="2024-02-09T09:26:45.733259593Z" level=info msg="TearDown network for sandbox \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\" successfully"
Feb 9 09:26:45.733328 env[1251]: time="2024-02-09T09:26:45.733317082Z" level=info msg="StopPodSandbox for \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\" returns successfully"
Feb 9 09:26:45.756813 env[1251]: time="2024-02-09T09:26:45.756696706Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6382 runtime=io.containerd.runc.v2\n"
Feb 9 09:26:45.758831 env[1251]: time="2024-02-09T09:26:45.758727193Z" level=info msg="StopContainer for \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\" returns successfully"
Feb 9 09:26:45.759602 env[1251]: time="2024-02-09T09:26:45.759515426Z" level=info msg="StopPodSandbox for \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\""
Feb 9 09:26:45.759936 env[1251]: time="2024-02-09T09:26:45.759866391Z" level=info msg="Container to stop \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:26:45.760159 env[1251]: time="2024-02-09T09:26:45.759932941Z" level=info msg="Container to stop \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:26:45.760159 env[1251]: time="2024-02-09T09:26:45.759988642Z" level=info msg="Container to stop \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:26:45.760159 env[1251]: time="2024-02-09T09:26:45.760043797Z" level=info msg="Container to stop \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:26:45.760159 env[1251]: time="2024-02-09T09:26:45.760095985Z" level=info msg="Container to stop \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:26:45.799336 env[1251]: time="2024-02-09T09:26:45.799283940Z" level=info msg="shim disconnected" id=79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71
Feb 9 09:26:45.799336 env[1251]: time="2024-02-09T09:26:45.799332811Z" level=warning msg="cleaning up after shim disconnected" id=79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71 namespace=k8s.io
Feb 9 09:26:45.799622 env[1251]: time="2024-02-09T09:26:45.799350365Z" level=info msg="cleaning up dead shim"
Feb 9 09:26:45.803114 kubelet[2367]: I0209 09:26:45.803063 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12aa12f1-94c7-48f3-a090-87adf7e0a891-cilium-config-path\") pod \"12aa12f1-94c7-48f3-a090-87adf7e0a891\" (UID: \"12aa12f1-94c7-48f3-a090-87adf7e0a891\") "
Feb 9 09:26:45.803114 kubelet[2367]: I0209 09:26:45.803113 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g9lb\" (UniqueName: \"kubernetes.io/projected/12aa12f1-94c7-48f3-a090-87adf7e0a891-kube-api-access-4g9lb\") pod \"12aa12f1-94c7-48f3-a090-87adf7e0a891\" (UID: \"12aa12f1-94c7-48f3-a090-87adf7e0a891\") "
Feb 9 09:26:45.803509 kubelet[2367]: W0209 09:26:45.803347 2367 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/12aa12f1-94c7-48f3-a090-87adf7e0a891/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 09:26:45.805994 kubelet[2367]: I0209 09:26:45.805938 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12aa12f1-94c7-48f3-a090-87adf7e0a891-kube-api-access-4g9lb" (OuterVolumeSpecName: "kube-api-access-4g9lb") pod "12aa12f1-94c7-48f3-a090-87adf7e0a891" (UID: "12aa12f1-94c7-48f3-a090-87adf7e0a891"). InnerVolumeSpecName "kube-api-access-4g9lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:26:45.806546 kubelet[2367]: I0209 09:26:45.806483 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12aa12f1-94c7-48f3-a090-87adf7e0a891-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "12aa12f1-94c7-48f3-a090-87adf7e0a891" (UID: "12aa12f1-94c7-48f3-a090-87adf7e0a891"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 09:26:45.807607 env[1251]: time="2024-02-09T09:26:45.807561136Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6416 runtime=io.containerd.runc.v2\n"
Feb 9 09:26:45.807909 env[1251]: time="2024-02-09T09:26:45.807879903Z" level=info msg="TearDown network for sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" successfully"
Feb 9 09:26:45.808004 env[1251]: time="2024-02-09T09:26:45.807907027Z" level=info msg="StopPodSandbox for \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" returns successfully"
Feb 9 09:26:45.826227 kubelet[2367]: E0209 09:26:45.826144 2367 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 09:26:45.904305 kubelet[2367]: I0209 09:26:45.904074 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-hubble-tls\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.904305 kubelet[2367]: I0209 09:26:45.904175 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-cgroup\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.904305 kubelet[2367]: I0209 09:26:45.904257 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-hostproc\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.904914 kubelet[2367]: I0209 09:26:45.904296 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.904914 kubelet[2367]: I0209 09:26:45.904349 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-host-proc-sys-kernel\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.904914 kubelet[2367]: I0209 09:26:45.904420 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cni-path\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.904914 kubelet[2367]: I0209 09:26:45.904406 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-hostproc" (OuterVolumeSpecName: "hostproc") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.904914 kubelet[2367]: I0209 09:26:45.904422 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.905700 kubelet[2367]: I0209 09:26:45.904489 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-xtables-lock\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.905700 kubelet[2367]: I0209 09:26:45.904504 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cni-path" (OuterVolumeSpecName: "cni-path") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.905700 kubelet[2367]: I0209 09:26:45.904591 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-config-path\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.905700 kubelet[2367]: I0209 09:26:45.904556 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.905700 kubelet[2367]: I0209 09:26:45.904660 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-host-proc-sys-net\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.906465 kubelet[2367]: I0209 09:26:45.904733 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-clustermesh-secrets\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.906465 kubelet[2367]: I0209 09:26:45.904729 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.906465 kubelet[2367]: I0209 09:26:45.904799 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-bpf-maps\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.906465 kubelet[2367]: I0209 09:26:45.904882 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m999\" (UniqueName: \"kubernetes.io/projected/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-kube-api-access-7m999\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.906465 kubelet[2367]: I0209 09:26:45.904919 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.906465 kubelet[2367]: W0209 09:26:45.904922 2367 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ed5b5afd-8a00-42ba-ba02-d61dce4e997c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 09:26:45.907437 kubelet[2367]: I0209 09:26:45.904953 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-lib-modules\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.907437 kubelet[2367]: I0209 09:26:45.905000 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.907437 kubelet[2367]: I0209 09:26:45.905089 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-run\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.907437 kubelet[2367]: I0209 09:26:45.905157 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-etc-cni-netd\") pod \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\" (UID: \"ed5b5afd-8a00-42ba-ba02-d61dce4e997c\") "
Feb 9 09:26:45.907437 kubelet[2367]: I0209 09:26:45.905179 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.907437 kubelet[2367]: I0209 09:26:45.905282 2367 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-run\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.908434 kubelet[2367]: I0209 09:26:45.905238 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:26:45.908434 kubelet[2367]: I0209 09:26:45.905337 2367 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-cgroup\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.908434 kubelet[2367]: I0209 09:26:45.905372 2367 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-hostproc\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.908434 kubelet[2367]: I0209 09:26:45.905445 2367 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12aa12f1-94c7-48f3-a090-87adf7e0a891-cilium-config-path\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.908434 kubelet[2367]: I0209 09:26:45.905504 2367 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.908434 kubelet[2367]: I0209 09:26:45.905539 2367 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cni-path\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.908434 kubelet[2367]: I0209 09:26:45.905600 2367 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-xtables-lock\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.909581 kubelet[2367]: I0209 09:26:45.905638 2367 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-host-proc-sys-net\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.909581 kubelet[2367]: I0209 09:26:45.905681 2367 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-bpf-maps\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.909581 kubelet[2367]: I0209 09:26:45.905717 2367 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-4g9lb\" (UniqueName: \"kubernetes.io/projected/12aa12f1-94c7-48f3-a090-87adf7e0a891-kube-api-access-4g9lb\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.909581 kubelet[2367]: I0209 09:26:45.905758 2367 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-lib-modules\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:45.910266 kubelet[2367]: I0209 09:26:45.909943 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 09:26:45.911067 kubelet[2367]: I0209 09:26:45.911000 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:26:45.911421 kubelet[2367]: I0209 09:26:45.911352 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-kube-api-access-7m999" (OuterVolumeSpecName: "kube-api-access-7m999") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "kube-api-access-7m999". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:26:45.911596 kubelet[2367]: I0209 09:26:45.911471 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ed5b5afd-8a00-42ba-ba02-d61dce4e997c" (UID: "ed5b5afd-8a00-42ba-ba02-d61dce4e997c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 09:26:46.006220 kubelet[2367]: I0209 09:26:46.006108 2367 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-clustermesh-secrets\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:46.006220 kubelet[2367]: I0209 09:26:46.006178 2367 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-7m999\" (UniqueName: \"kubernetes.io/projected/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-kube-api-access-7m999\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:46.006220 kubelet[2367]: I0209 09:26:46.006214 2367 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-etc-cni-netd\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:46.006220 kubelet[2367]: I0209 09:26:46.006245 2367 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-hubble-tls\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:46.006899 kubelet[2367]: I0209 09:26:46.006276 2367 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed5b5afd-8a00-42ba-ba02-d61dce4e997c-cilium-config-path\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\""
Feb 9 09:26:46.599733 systemd[1]: var-lib-kubelet-pods-12aa12f1\x2d94c7\x2d48f3\x2da090\x2d87adf7e0a891-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4g9lb.mount: Deactivated successfully.
Feb 9 09:26:46.599819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71-rootfs.mount: Deactivated successfully.
Feb 9 09:26:46.599874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71-shm.mount: Deactivated successfully.
Feb 9 09:26:46.599921 systemd[1]: var-lib-kubelet-pods-ed5b5afd\x2d8a00\x2d42ba\x2dba02\x2dd61dce4e997c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7m999.mount: Deactivated successfully. Feb 9 09:26:46.599972 systemd[1]: var-lib-kubelet-pods-ed5b5afd\x2d8a00\x2d42ba\x2dba02\x2dd61dce4e997c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:26:46.600024 systemd[1]: var-lib-kubelet-pods-ed5b5afd\x2d8a00\x2d42ba\x2dba02\x2dd61dce4e997c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:26:46.684023 kubelet[2367]: I0209 09:26:46.683955 2367 scope.go:115] "RemoveContainer" containerID="f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f" Feb 9 09:26:46.686794 env[1251]: time="2024-02-09T09:26:46.686679272Z" level=info msg="RemoveContainer for \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\"" Feb 9 09:26:46.690089 env[1251]: time="2024-02-09T09:26:46.690072968Z" level=info msg="RemoveContainer for \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\" returns successfully" Feb 9 09:26:46.690171 kubelet[2367]: I0209 09:26:46.690161 2367 scope.go:115] "RemoveContainer" containerID="19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d" Feb 9 09:26:46.690573 env[1251]: time="2024-02-09T09:26:46.690556470Z" level=info msg="RemoveContainer for \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\"" Feb 9 09:26:46.692509 env[1251]: time="2024-02-09T09:26:46.692494882Z" level=info msg="RemoveContainer for \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\" returns successfully" Feb 9 09:26:46.692593 kubelet[2367]: I0209 09:26:46.692585 2367 scope.go:115] "RemoveContainer" containerID="874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68" Feb 9 09:26:46.693033 env[1251]: time="2024-02-09T09:26:46.693019641Z" level=info msg="RemoveContainer for \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\"" Feb 9 09:26:46.694026 env[1251]: time="2024-02-09T09:26:46.694012188Z" level=info msg="RemoveContainer for \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\" returns successfully" Feb 9 09:26:46.694076 kubelet[2367]: I0209 09:26:46.694062 2367 scope.go:115] "RemoveContainer" containerID="65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42" Feb 9 09:26:46.694427 env[1251]: time="2024-02-09T09:26:46.694415216Z" level=info msg="RemoveContainer for \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\"" Feb 9 09:26:46.695428 env[1251]: time="2024-02-09T09:26:46.695385692Z" level=info msg="RemoveContainer for \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\" returns successfully" Feb 9 09:26:46.695479 kubelet[2367]: I0209 09:26:46.695472 2367 scope.go:115] "RemoveContainer" containerID="12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee" Feb 9 09:26:46.695902 env[1251]: time="2024-02-09T09:26:46.695869860Z" level=info msg="RemoveContainer for \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\"" Feb 9 09:26:46.696883 env[1251]: time="2024-02-09T09:26:46.696870883Z" level=info msg="RemoveContainer for \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\" returns successfully" Feb 9 09:26:46.696944 kubelet[2367]: I0209 09:26:46.696936 2367 scope.go:115] "RemoveContainer" containerID="f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f" Feb 9 09:26:46.697084 env[1251]: 
time="2024-02-09T09:26:46.697044042Z" level=error msg="ContainerStatus for \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\": not found" Feb 9 09:26:46.697138 kubelet[2367]: E0209 09:26:46.697131 2367 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\": not found" containerID="f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f" Feb 9 09:26:46.697171 kubelet[2367]: I0209 09:26:46.697150 2367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f} err="failed to get container status \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3224c70c1af574a35d94077eb659c935284cfaf2f994f5566848978cbf2753f\": not found" Feb 9 09:26:46.697171 kubelet[2367]: I0209 09:26:46.697156 2367 scope.go:115] "RemoveContainer" containerID="19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d" Feb 9 09:26:46.697246 env[1251]: time="2024-02-09T09:26:46.697219362Z" level=error msg="ContainerStatus for \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\": not found" Feb 9 09:26:46.697296 kubelet[2367]: E0209 09:26:46.697289 2367 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\": not found" containerID="19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d" Feb 9 09:26:46.697318 kubelet[2367]: I0209 09:26:46.697306 2367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d} err="failed to get container status \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"19b021dabdbe774e75e6d4fc5a6eaf5bf7b5733a1d5a0b16e0f3bb4602b93b0d\": not found" Feb 9 09:26:46.697318 kubelet[2367]: I0209 09:26:46.697312 2367 scope.go:115] "RemoveContainer" containerID="874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68" Feb 9 09:26:46.697408 env[1251]: time="2024-02-09T09:26:46.697383031Z" level=error msg="ContainerStatus for \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\": not found" Feb 9 09:26:46.697451 kubelet[2367]: E0209 09:26:46.697446 2367 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\": not found" containerID="874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68" Feb 9 09:26:46.697476 kubelet[2367]: I0209 09:26:46.697462 2367 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68} err="failed to get container status \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\": rpc error: code = NotFound desc = an error occurred when try to find container \"874478afc5e771f057cee765fe574f89891527e5ffbe100438b4561739102b68\": not found" Feb 9 09:26:46.697476 kubelet[2367]: I0209 09:26:46.697467 2367 scope.go:115] "RemoveContainer" containerID="65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42" Feb 9 09:26:46.697570 env[1251]: time="2024-02-09T09:26:46.697540734Z" level=error msg="ContainerStatus for \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\": not found" Feb 9 09:26:46.697610 kubelet[2367]: E0209 09:26:46.697605 2367 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\": not found" containerID="65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42" Feb 9 09:26:46.697633 kubelet[2367]: I0209 09:26:46.697616 2367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42} err="failed to get container status \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\": rpc error: code = NotFound desc = an error occurred when try to find container \"65cbd593385a3608a5460f8093d12c723fa8a6f7670f74b01ed11573002f0f42\": not found" Feb 9 09:26:46.697633 kubelet[2367]: I0209 09:26:46.697621 2367 scope.go:115] "RemoveContainer" containerID="12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee" Feb 9 09:26:46.697729 env[1251]: time="2024-02-09T09:26:46.697705608Z" level=error msg="ContainerStatus for \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\": not found" Feb 9 09:26:46.697782 kubelet[2367]: E0209 09:26:46.697776 2367 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\": not found" containerID="12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee" Feb 9 09:26:46.697808 kubelet[2367]: I0209 09:26:46.697791 2367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee} err="failed to get container status \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\": rpc error: code = NotFound desc = an error occurred when try to find container \"12c29ae47502759ad1adfff08c726c632f3ca50b9e14b6916ea848e8da23eeee\": not found" Feb 9 09:26:46.697808 kubelet[2367]: I0209 09:26:46.697796 2367 scope.go:115] "RemoveContainer" containerID="c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b" Feb 9 09:26:46.698223 env[1251]: time="2024-02-09T09:26:46.698212219Z" level=info msg="RemoveContainer for 
\"c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b\"" Feb 9 09:26:46.699260 env[1251]: time="2024-02-09T09:26:46.699243478Z" level=info msg="RemoveContainer for \"c66fce7f6067428c7c5383f4cff5e85241085e0b66070bf5f8874b6fa32fb40b\" returns successfully" Feb 9 09:26:47.533325 sshd[6255]: pam_unix(sshd:session): session closed for user core Feb 9 09:26:47.538042 systemd[1]: Started sshd@91-139.178.90.101:22-147.75.109.163:37426.service. Feb 9 09:26:47.538407 systemd[1]: sshd@90-139.178.90.101:22-147.75.109.163:39000.service: Deactivated successfully. Feb 9 09:26:47.539164 systemd-logind[1237]: Session 88 logged out. Waiting for processes to exit. Feb 9 09:26:47.539203 systemd[1]: session-88.scope: Deactivated successfully. Feb 9 09:26:47.539629 systemd-logind[1237]: Removed session 88. Feb 9 09:26:47.571390 sshd[6433]: Accepted publickey for core from 147.75.109.163 port 37426 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:26:47.574297 sshd[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:26:47.584857 systemd-logind[1237]: New session 89 of user core. Feb 9 09:26:47.587313 systemd[1]: Started session-89.scope. Feb 9 09:26:47.594996 kubelet[2367]: I0209 09:26:47.594984 2367 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=12aa12f1-94c7-48f3-a090-87adf7e0a891 path="/var/lib/kubelet/pods/12aa12f1-94c7-48f3-a090-87adf7e0a891/volumes" Feb 9 09:26:47.595192 kubelet[2367]: I0209 09:26:47.595187 2367 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ed5b5afd-8a00-42ba-ba02-d61dce4e997c path="/var/lib/kubelet/pods/ed5b5afd-8a00-42ba-ba02-d61dce4e997c/volumes" Feb 9 09:26:47.933084 sshd[6433]: pam_unix(sshd:session): session closed for user core Feb 9 09:26:47.934815 systemd[1]: Started sshd@92-139.178.90.101:22-147.75.109.163:37430.service. Feb 9 09:26:47.935153 systemd[1]: sshd@91-139.178.90.101:22-147.75.109.163:37426.service: Deactivated successfully. Feb 9 09:26:47.935754 systemd-logind[1237]: Session 89 logged out. Waiting for processes to exit. Feb 9 09:26:47.935764 systemd[1]: session-89.scope: Deactivated successfully. Feb 9 09:26:47.936292 systemd-logind[1237]: Removed session 89. 
Feb 9 09:26:47.939466 kubelet[2367]: I0209 09:26:47.939448 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:26:47.939549 kubelet[2367]: E0209 09:26:47.939480 2367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed5b5afd-8a00-42ba-ba02-d61dce4e997c" containerName="clean-cilium-state" Feb 9 09:26:47.939549 kubelet[2367]: E0209 09:26:47.939486 2367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed5b5afd-8a00-42ba-ba02-d61dce4e997c" containerName="cilium-agent" Feb 9 09:26:47.939549 kubelet[2367]: E0209 09:26:47.939490 2367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed5b5afd-8a00-42ba-ba02-d61dce4e997c" containerName="mount-bpf-fs" Feb 9 09:26:47.939549 kubelet[2367]: E0209 09:26:47.939496 2367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed5b5afd-8a00-42ba-ba02-d61dce4e997c" containerName="mount-cgroup" Feb 9 09:26:47.939549 kubelet[2367]: E0209 09:26:47.939502 2367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed5b5afd-8a00-42ba-ba02-d61dce4e997c" containerName="apply-sysctl-overwrites" Feb 9 09:26:47.939549 kubelet[2367]: E0209 09:26:47.939507 2367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12aa12f1-94c7-48f3-a090-87adf7e0a891" containerName="cilium-operator" Feb 9 09:26:47.939549 kubelet[2367]: I0209 09:26:47.939521 2367 memory_manager.go:346] "RemoveStaleState removing state" podUID="12aa12f1-94c7-48f3-a090-87adf7e0a891" containerName="cilium-operator" Feb 9 09:26:47.939549 kubelet[2367]: I0209 09:26:47.939524 2367 memory_manager.go:346] "RemoveStaleState removing state" podUID="ed5b5afd-8a00-42ba-ba02-d61dce4e997c" containerName="cilium-agent" Feb 9 09:26:47.968110 sshd[6458]: Accepted publickey for core from 147.75.109.163 port 37430 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:26:47.968931 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:26:47.971459 systemd-logind[1237]: New session 90 of user core. Feb 9 09:26:47.972057 systemd[1]: Started session-90.scope. 
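The 09:26:47.939 burst is the Topology Admit Handler purging per-container CPU and memory manager state left over from the deleted pods before admitting the replacement cilium pod. A hedged illustration of that bookkeeping, with assumed data structures rather than kubelet's own:

```go
package main

import "fmt"

// key identifies per-container resource-manager state, matching the
// podUID/containerName pairs logged by cpu_manager and memory_manager.
type key struct{ podUID, container string }

// removeStaleState drops entries whose pod no longer exists, so a newly
// admitted pod cannot inherit stale CPU or memory assignments.
func removeStaleState(state map[key]string, activePods map[string]bool) {
	for k := range state {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"ed5b5afd-8a00-42ba-ba02-d61dce4e997c", "cilium-agent"}:    "cpuset=2-3",
		{"12aa12f1-94c7-48f3-a090-87adf7e0a891", "cilium-operator"}: "cpuset=0-1",
	}
	removeStaleState(state, map[string]bool{}) // neither pod is active any more
}
```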
Feb 9 09:26:48.020044 kubelet[2367]: I0209 09:26:48.019988 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cni-path\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020044 kubelet[2367]: I0209 09:26:48.020029 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-lib-modules\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020044 kubelet[2367]: I0209 09:26:48.020056 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-etc-cni-netd\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020374 kubelet[2367]: I0209 09:26:48.020079 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-config-path\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020374 kubelet[2367]: I0209 09:26:48.020139 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-host-proc-sys-kernel\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020374 kubelet[2367]: I0209 09:26:48.020215 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-bpf-maps\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020374 kubelet[2367]: I0209 09:26:48.020291 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c40e2d1-0177-42e6-97a7-65b0f1e98003-clustermesh-secrets\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020374 kubelet[2367]: I0209 09:26:48.020364 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-run\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020714 kubelet[2367]: I0209 09:26:48.020398 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-host-proc-sys-net\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020714 kubelet[2367]: I0209 09:26:48.020422 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/1c40e2d1-0177-42e6-97a7-65b0f1e98003-hubble-tls\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020714 kubelet[2367]: I0209 09:26:48.020508 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-hostproc\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020714 kubelet[2367]: I0209 09:26:48.020558 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-ipsec-secrets\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020714 kubelet[2367]: I0209 09:26:48.020622 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-xtables-lock\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020714 kubelet[2367]: I0209 09:26:48.020666 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4mt\" (UniqueName: \"kubernetes.io/projected/1c40e2d1-0177-42e6-97a7-65b0f1e98003-kube-api-access-cm4mt\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.020989 kubelet[2367]: I0209 09:26:48.020743 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-cgroup\") pod \"cilium-mqsw4\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " pod="kube-system/cilium-mqsw4" Feb 9 09:26:48.095598 sshd[6458]: pam_unix(sshd:session): session closed for user core Feb 9 09:26:48.097935 systemd[1]: Started sshd@93-139.178.90.101:22-147.75.109.163:37434.service. Feb 9 09:26:48.098626 systemd[1]: sshd@92-139.178.90.101:22-147.75.109.163:37430.service: Deactivated successfully. Feb 9 09:26:48.099488 systemd-logind[1237]: Session 90 logged out. Waiting for processes to exit. Feb 9 09:26:48.099534 systemd[1]: session-90.scope: Deactivated successfully. Feb 9 09:26:48.100232 systemd-logind[1237]: Removed session 90. Feb 9 09:26:48.134779 sshd[6484]: Accepted publickey for core from 147.75.109.163 port 37434 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 09:26:48.135898 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:26:48.139276 systemd-logind[1237]: New session 91 of user core. Feb 9 09:26:48.140073 systemd[1]: Started session-91.scope. Feb 9 09:26:48.242087 env[1251]: time="2024-02-09T09:26:48.242006238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqsw4,Uid:1c40e2d1-0177-42e6-97a7-65b0f1e98003,Namespace:kube-system,Attempt:0,}" Feb 9 09:26:48.249820 env[1251]: time="2024-02-09T09:26:48.249777547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:26:48.249820 env[1251]: time="2024-02-09T09:26:48.249806974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:26:48.249820 env[1251]: time="2024-02-09T09:26:48.249816968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:26:48.249977 env[1251]: time="2024-02-09T09:26:48.249905058Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f pid=6516 runtime=io.containerd.runc.v2 Feb 9 09:26:48.290526 env[1251]: time="2024-02-09T09:26:48.290477165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqsw4,Uid:1c40e2d1-0177-42e6-97a7-65b0f1e98003,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\"" Feb 9 09:26:48.291637 env[1251]: time="2024-02-09T09:26:48.291620916Z" level=info msg="CreateContainer within sandbox \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:26:48.296134 env[1251]: time="2024-02-09T09:26:48.296091072Z" level=info msg="CreateContainer within sandbox \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2889c0799f6beabb89b2312258e7cbf7127e9a4d5a47b292dc1c261dfeb5b2d\"" Feb 9 09:26:48.296309 env[1251]: time="2024-02-09T09:26:48.296291992Z" level=info msg="StartContainer for \"c2889c0799f6beabb89b2312258e7cbf7127e9a4d5a47b292dc1c261dfeb5b2d\"" Feb 9 09:26:48.341968 env[1251]: time="2024-02-09T09:26:48.341915332Z" level=info msg="StartContainer for \"c2889c0799f6beabb89b2312258e7cbf7127e9a4d5a47b292dc1c261dfeb5b2d\" returns successfully" Feb 9 09:26:48.379100 env[1251]: time="2024-02-09T09:26:48.378998480Z" level=info msg="shim disconnected" id=c2889c0799f6beabb89b2312258e7cbf7127e9a4d5a47b292dc1c261dfeb5b2d Feb 9 09:26:48.379100 env[1251]: time="2024-02-09T09:26:48.379071857Z" level=warning msg="cleaning up after shim disconnected" id=c2889c0799f6beabb89b2312258e7cbf7127e9a4d5a47b292dc1c261dfeb5b2d namespace=k8s.io Feb 9 09:26:48.379100 env[1251]: time="2024-02-09T09:26:48.379091612Z" level=info msg="cleaning up dead shim" Feb 9 09:26:48.402079 env[1251]: time="2024-02-09T09:26:48.402013374Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:26:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6599 runtime=io.containerd.runc.v2\n" Feb 9 09:26:48.694217 env[1251]: time="2024-02-09T09:26:48.694077108Z" level=info msg="StopPodSandbox for \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\"" Feb 9 09:26:48.694501 env[1251]: time="2024-02-09T09:26:48.694234690Z" level=info msg="Container to stop \"c2889c0799f6beabb89b2312258e7cbf7127e9a4d5a47b292dc1c261dfeb5b2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:26:48.752376 env[1251]: time="2024-02-09T09:26:48.752341560Z" level=info msg="shim disconnected" id=ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f Feb 9 09:26:48.752376 env[1251]: time="2024-02-09T09:26:48.752375522Z" level=warning msg="cleaning up after shim disconnected" id=ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f namespace=k8s.io Feb 9 09:26:48.752500 env[1251]: time="2024-02-09T09:26:48.752382510Z" level=info msg="cleaning up dead shim" Feb 9 09:26:48.756467 env[1251]: time="2024-02-09T09:26:48.756426521Z" level=warning 
msg="cleanup warnings time=\"2024-02-09T09:26:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6631 runtime=io.containerd.runc.v2\n" Feb 9 09:26:48.756677 env[1251]: time="2024-02-09T09:26:48.756600461Z" level=info msg="TearDown network for sandbox \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\" successfully" Feb 9 09:26:48.756677 env[1251]: time="2024-02-09T09:26:48.756613384Z" level=info msg="StopPodSandbox for \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\" returns successfully" Feb 9 09:26:48.826618 kubelet[2367]: I0209 09:26:48.826501 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-xtables-lock\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.826618 kubelet[2367]: I0209 09:26:48.826623 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-cgroup\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.827699 kubelet[2367]: I0209 09:26:48.826685 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.827699 kubelet[2367]: I0209 09:26:48.826719 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-config-path\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.827699 kubelet[2367]: I0209 09:26:48.826791 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.827699 kubelet[2367]: I0209 09:26:48.826851 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-run\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.827699 kubelet[2367]: I0209 09:26:48.826937 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-host-proc-sys-net\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.828399 kubelet[2367]: I0209 09:26:48.826954 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.828399 kubelet[2367]: I0209 09:26:48.827006 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cni-path\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.828399 kubelet[2367]: I0209 09:26:48.827027 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.828399 kubelet[2367]: I0209 09:26:48.827074 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-etc-cni-netd\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.828399 kubelet[2367]: I0209 09:26:48.827077 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cni-path" (OuterVolumeSpecName: "cni-path") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.828994 kubelet[2367]: I0209 09:26:48.827144 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-host-proc-sys-kernel\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.828994 kubelet[2367]: I0209 09:26:48.827158 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.828994 kubelet[2367]: I0209 09:26:48.827205 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-hostproc\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.828994 kubelet[2367]: I0209 09:26:48.827207 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.828994 kubelet[2367]: W0209 09:26:48.827181 2367 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1c40e2d1-0177-42e6-97a7-65b0f1e98003/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:26:48.828994 kubelet[2367]: I0209 09:26:48.827274 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-lib-modules\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.829727 kubelet[2367]: I0209 09:26:48.827258 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-hostproc" (OuterVolumeSpecName: "hostproc") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.829727 kubelet[2367]: I0209 09:26:48.827359 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c40e2d1-0177-42e6-97a7-65b0f1e98003-clustermesh-secrets\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.829727 kubelet[2367]: I0209 09:26:48.827358 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.829727 kubelet[2367]: I0209 09:26:48.827441 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c40e2d1-0177-42e6-97a7-65b0f1e98003-hubble-tls\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.829727 kubelet[2367]: I0209 09:26:48.827522 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-ipsec-secrets\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.830253 kubelet[2367]: I0209 09:26:48.827622 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm4mt\" (UniqueName: \"kubernetes.io/projected/1c40e2d1-0177-42e6-97a7-65b0f1e98003-kube-api-access-cm4mt\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.830253 kubelet[2367]: I0209 09:26:48.827702 2367 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-bpf-maps\") pod \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\" (UID: \"1c40e2d1-0177-42e6-97a7-65b0f1e98003\") " Feb 9 09:26:48.830253 kubelet[2367]: I0209 09:26:48.827806 2367 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-xtables-lock\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.830253 kubelet[2367]: I0209 09:26:48.827855 2367 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-run\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.830253 kubelet[2367]: I0209 09:26:48.827889 2367 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-cgroup\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.830253 kubelet[2367]: I0209 09:26:48.827933 2367 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-host-proc-sys-net\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.830253 kubelet[2367]: I0209 09:26:48.827925 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:26:48.830977 kubelet[2367]: I0209 09:26:48.827966 2367 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cni-path\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.830977 kubelet[2367]: I0209 09:26:48.828022 2367 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-etc-cni-netd\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.830977 kubelet[2367]: I0209 09:26:48.828056 2367 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.830977 kubelet[2367]: I0209 09:26:48.828098 2367 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-hostproc\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.830977 kubelet[2367]: I0209 09:26:48.828133 2367 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-lib-modules\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.832751 kubelet[2367]: I0209 09:26:48.832656 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:26:48.834267 kubelet[2367]: I0209 09:26:48.834167 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c40e2d1-0177-42e6-97a7-65b0f1e98003-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:26:48.834489 kubelet[2367]: I0209 09:26:48.834288 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c40e2d1-0177-42e6-97a7-65b0f1e98003-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:26:48.835015 kubelet[2367]: I0209 09:26:48.834916 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:26:48.835247 kubelet[2367]: I0209 09:26:48.835018 2367 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c40e2d1-0177-42e6-97a7-65b0f1e98003-kube-api-access-cm4mt" (OuterVolumeSpecName: "kube-api-access-cm4mt") pod "1c40e2d1-0177-42e6-97a7-65b0f1e98003" (UID: "1c40e2d1-0177-42e6-97a7-65b0f1e98003"). InnerVolumeSpecName "kube-api-access-cm4mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:26:48.928761 kubelet[2367]: I0209 09:26:48.928667 2367 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c40e2d1-0177-42e6-97a7-65b0f1e98003-clustermesh-secrets\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.928761 kubelet[2367]: I0209 09:26:48.928736 2367 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c40e2d1-0177-42e6-97a7-65b0f1e98003-hubble-tls\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.928761 kubelet[2367]: I0209 09:26:48.928780 2367 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.929267 kubelet[2367]: I0209 09:26:48.928815 2367 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-cm4mt\" (UniqueName: \"kubernetes.io/projected/1c40e2d1-0177-42e6-97a7-65b0f1e98003-kube-api-access-cm4mt\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.929267 kubelet[2367]: I0209 09:26:48.928846 2367 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c40e2d1-0177-42e6-97a7-65b0f1e98003-bpf-maps\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:48.929267 kubelet[2367]: I0209 09:26:48.928880 2367 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c40e2d1-0177-42e6-97a7-65b0f1e98003-cilium-config-path\") on node \"ci-3510.3.2-a-afd9ebe59c\" DevicePath \"\"" Feb 9 09:26:49.127485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f-rootfs.mount: Deactivated successfully. Feb 9 09:26:49.127590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f-shm.mount: Deactivated successfully. Feb 9 09:26:49.127646 systemd[1]: var-lib-kubelet-pods-1c40e2d1\x2d0177\x2d42e6\x2d97a7\x2d65b0f1e98003-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcm4mt.mount: Deactivated successfully. Feb 9 09:26:49.127693 systemd[1]: var-lib-kubelet-pods-1c40e2d1\x2d0177\x2d42e6\x2d97a7\x2d65b0f1e98003-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:26:49.127737 systemd[1]: var-lib-kubelet-pods-1c40e2d1\x2d0177\x2d42e6\x2d97a7\x2d65b0f1e98003-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:26:49.127781 systemd[1]: var-lib-kubelet-pods-1c40e2d1\x2d0177\x2d42e6\x2d97a7\x2d65b0f1e98003-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 09:26:49.699127 kubelet[2367]: I0209 09:26:49.699072 2367 scope.go:115] "RemoveContainer" containerID="c2889c0799f6beabb89b2312258e7cbf7127e9a4d5a47b292dc1c261dfeb5b2d" Feb 9 09:26:49.702053 env[1251]: time="2024-02-09T09:26:49.701801542Z" level=info msg="RemoveContainer for \"c2889c0799f6beabb89b2312258e7cbf7127e9a4d5a47b292dc1c261dfeb5b2d\"" Feb 9 09:26:49.706030 env[1251]: time="2024-02-09T09:26:49.705963254Z" level=info msg="RemoveContainer for \"c2889c0799f6beabb89b2312258e7cbf7127e9a4d5a47b292dc1c261dfeb5b2d\" returns successfully" Feb 9 09:26:49.741666 kubelet[2367]: I0209 09:26:49.741588 2367 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:26:49.742065 kubelet[2367]: E0209 09:26:49.741785 2367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c40e2d1-0177-42e6-97a7-65b0f1e98003" containerName="mount-cgroup" Feb 9 09:26:49.742065 kubelet[2367]: I0209 09:26:49.741908 2367 memory_manager.go:346] "RemoveStaleState removing state" podUID="1c40e2d1-0177-42e6-97a7-65b0f1e98003" containerName="mount-cgroup" Feb 9 09:26:49.834371 kubelet[2367]: I0209 09:26:49.834271 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-cilium-run\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.834371 kubelet[2367]: I0209 09:26:49.834375 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-etc-cni-netd\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.835482 kubelet[2367]: I0209 09:26:49.834442 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-xtables-lock\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.835482 kubelet[2367]: I0209 09:26:49.834504 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-hubble-tls\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.835482 kubelet[2367]: I0209 09:26:49.834688 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-cilium-cgroup\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.835482 kubelet[2367]: I0209 09:26:49.834777 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-bpf-maps\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.835482 kubelet[2367]: I0209 09:26:49.834842 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-cilium-ipsec-secrets\") pod 
\"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.835482 kubelet[2367]: I0209 09:26:49.835001 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-host-proc-sys-net\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.836197 kubelet[2367]: I0209 09:26:49.835156 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-clustermesh-secrets\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.836197 kubelet[2367]: I0209 09:26:49.835276 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfdpz\" (UniqueName: \"kubernetes.io/projected/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-kube-api-access-wfdpz\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.836197 kubelet[2367]: I0209 09:26:49.835338 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-hostproc\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.836197 kubelet[2367]: I0209 09:26:49.835467 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-cni-path\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.836197 kubelet[2367]: I0209 09:26:49.835624 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-lib-modules\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.836197 kubelet[2367]: I0209 09:26:49.835718 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-cilium-config-path\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:49.836874 kubelet[2367]: I0209 09:26:49.835819 2367 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a2e3f73-4725-47d1-bc4a-f6ea6a87351b-host-proc-sys-kernel\") pod \"cilium-bhs47\" (UID: \"3a2e3f73-4725-47d1-bc4a-f6ea6a87351b\") " pod="kube-system/cilium-bhs47" Feb 9 09:26:50.357604 env[1251]: time="2024-02-09T09:26:50.357463444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bhs47,Uid:3a2e3f73-4725-47d1-bc4a-f6ea6a87351b,Namespace:kube-system,Attempt:0,}" Feb 9 09:26:50.371921 env[1251]: time="2024-02-09T09:26:50.371831902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:26:50.371921 env[1251]: time="2024-02-09T09:26:50.371856355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:26:50.371921 env[1251]: time="2024-02-09T09:26:50.371864345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:26:50.372070 env[1251]: time="2024-02-09T09:26:50.372013216Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200 pid=6659 runtime=io.containerd.runc.v2 Feb 9 09:26:50.417174 env[1251]: time="2024-02-09T09:26:50.417092869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bhs47,Uid:3a2e3f73-4725-47d1-bc4a-f6ea6a87351b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\"" Feb 9 09:26:50.419673 env[1251]: time="2024-02-09T09:26:50.419626803Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:26:50.427436 env[1251]: time="2024-02-09T09:26:50.427336976Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86a773230f154c6b27e7d4dfca059b68316aa6aa17c8fb40d88cbcff9bc5c5e2\"" Feb 9 09:26:50.428016 env[1251]: time="2024-02-09T09:26:50.427947763Z" level=info msg="StartContainer for \"86a773230f154c6b27e7d4dfca059b68316aa6aa17c8fb40d88cbcff9bc5c5e2\"" Feb 9 09:26:50.478835 kubelet[2367]: I0209 09:26:50.478783 2367 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-afd9ebe59c" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 09:26:50.478683733 +0000 UTC m=+744.947975363 LastTransitionTime:2024-02-09 09:26:50.478683733 +0000 UTC m=+744.947975363 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 09:26:50.537542 env[1251]: time="2024-02-09T09:26:50.537447512Z" level=info msg="StartContainer for \"86a773230f154c6b27e7d4dfca059b68316aa6aa17c8fb40d88cbcff9bc5c5e2\" returns successfully" Feb 9 09:26:50.604333 env[1251]: time="2024-02-09T09:26:50.604206961Z" level=info msg="shim disconnected" id=86a773230f154c6b27e7d4dfca059b68316aa6aa17c8fb40d88cbcff9bc5c5e2 Feb 9 09:26:50.604333 env[1251]: time="2024-02-09T09:26:50.604321199Z" level=warning msg="cleaning up after shim disconnected" id=86a773230f154c6b27e7d4dfca059b68316aa6aa17c8fb40d88cbcff9bc5c5e2 namespace=k8s.io Feb 9 09:26:50.604832 env[1251]: time="2024-02-09T09:26:50.604353337Z" level=info msg="cleaning up dead shim" Feb 9 09:26:50.632957 env[1251]: time="2024-02-09T09:26:50.632724419Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:26:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6742 runtime=io.containerd.runc.v2\n" Feb 9 09:26:50.712466 env[1251]: time="2024-02-09T09:26:50.712379807Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:26:50.724819 env[1251]: 
time="2024-02-09T09:26:50.724687043Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4cbb84e71c3d93de85894bea2ddf587f81aad47008f22dce6cca9f07c72e0a91\"" Feb 9 09:26:50.725598 env[1251]: time="2024-02-09T09:26:50.725499691Z" level=info msg="StartContainer for \"4cbb84e71c3d93de85894bea2ddf587f81aad47008f22dce6cca9f07c72e0a91\"" Feb 9 09:26:50.827027 env[1251]: time="2024-02-09T09:26:50.826916480Z" level=info msg="StartContainer for \"4cbb84e71c3d93de85894bea2ddf587f81aad47008f22dce6cca9f07c72e0a91\" returns successfully" Feb 9 09:26:50.827668 kubelet[2367]: E0209 09:26:50.827622 2367 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:26:50.874046 env[1251]: time="2024-02-09T09:26:50.873949776Z" level=info msg="shim disconnected" id=4cbb84e71c3d93de85894bea2ddf587f81aad47008f22dce6cca9f07c72e0a91 Feb 9 09:26:50.874046 env[1251]: time="2024-02-09T09:26:50.874039580Z" level=warning msg="cleaning up after shim disconnected" id=4cbb84e71c3d93de85894bea2ddf587f81aad47008f22dce6cca9f07c72e0a91 namespace=k8s.io Feb 9 09:26:50.874604 env[1251]: time="2024-02-09T09:26:50.874068512Z" level=info msg="cleaning up dead shim" Feb 9 09:26:50.890378 env[1251]: time="2024-02-09T09:26:50.890190273Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:26:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6803 runtime=io.containerd.runc.v2\n" Feb 9 09:26:51.369814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3434152537.mount: Deactivated successfully. 
Feb 9 09:26:51.600691 kubelet[2367]: I0209 09:26:51.600602 2367 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1c40e2d1-0177-42e6-97a7-65b0f1e98003 path="/var/lib/kubelet/pods/1c40e2d1-0177-42e6-97a7-65b0f1e98003/volumes" Feb 9 09:26:51.719393 env[1251]: time="2024-02-09T09:26:51.719264128Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:26:51.729741 env[1251]: time="2024-02-09T09:26:51.729641408Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f334eb385d286d27768758184dbdbd3f61f1e21803468d1f38975696a105a846\"" Feb 9 09:26:51.730176 env[1251]: time="2024-02-09T09:26:51.730119810Z" level=info msg="StartContainer for \"f334eb385d286d27768758184dbdbd3f61f1e21803468d1f38975696a105a846\"" Feb 9 09:26:51.754338 env[1251]: time="2024-02-09T09:26:51.754274076Z" level=info msg="StartContainer for \"f334eb385d286d27768758184dbdbd3f61f1e21803468d1f38975696a105a846\" returns successfully" Feb 9 09:26:51.797356 env[1251]: time="2024-02-09T09:26:51.797259071Z" level=info msg="shim disconnected" id=f334eb385d286d27768758184dbdbd3f61f1e21803468d1f38975696a105a846 Feb 9 09:26:51.797745 env[1251]: time="2024-02-09T09:26:51.797360308Z" level=warning msg="cleaning up after shim disconnected" id=f334eb385d286d27768758184dbdbd3f61f1e21803468d1f38975696a105a846 namespace=k8s.io Feb 9 09:26:51.797745 env[1251]: time="2024-02-09T09:26:51.797389691Z" level=info msg="cleaning up dead shim" Feb 9 09:26:51.826830 env[1251]: time="2024-02-09T09:26:51.826717591Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:26:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6859 runtime=io.containerd.runc.v2\n" Feb 9 09:26:52.366712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f334eb385d286d27768758184dbdbd3f61f1e21803468d1f38975696a105a846-rootfs.mount: Deactivated successfully. Feb 9 09:26:52.726497 env[1251]: time="2024-02-09T09:26:52.726364206Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:26:52.739093 env[1251]: time="2024-02-09T09:26:52.738967630Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c5836e7076aa93f06c8d9d26e863d442db41a60f0dfd38576323039da8671803\"" Feb 9 09:26:52.739915 env[1251]: time="2024-02-09T09:26:52.739810465Z" level=info msg="StartContainer for \"c5836e7076aa93f06c8d9d26e863d442db41a60f0dfd38576323039da8671803\"" Feb 9 09:26:52.749076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3838194378.mount: Deactivated successfully. 
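Each init container in this sequence (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) runs once and exits, after which containerd logs "shim disconnected" and reaps the dead shim. Observing such an exit from the containerd Go client might look like the following sketch; the container ID is the clean-cilium-state one started just above:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	c, err := client.LoadContainer(ctx, "c5836e7076aa93f06c8d9d26e863d442db41a60f0dfd38576323039da8671803")
	if err != nil {
		panic(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		panic(err)
	}
	exitCh, err := task.Wait(ctx) // resolves when the shim reports task exit
	if err != nil {
		panic(err)
	}
	st := <-exitCh
	code, exitedAt, err := st.Result()
	if err != nil {
		panic(err)
	}
	fmt.Printf("exit code %d at %s\n", code, exitedAt)
}
```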
Feb 9 09:26:52.838913 env[1251]: time="2024-02-09T09:26:52.838807909Z" level=info msg="StartContainer for \"c5836e7076aa93f06c8d9d26e863d442db41a60f0dfd38576323039da8671803\" returns successfully"
Feb 9 09:26:52.894263 env[1251]: time="2024-02-09T09:26:52.894127715Z" level=info msg="shim disconnected" id=c5836e7076aa93f06c8d9d26e863d442db41a60f0dfd38576323039da8671803
Feb 9 09:26:52.894263 env[1251]: time="2024-02-09T09:26:52.894226817Z" level=warning msg="cleaning up after shim disconnected" id=c5836e7076aa93f06c8d9d26e863d442db41a60f0dfd38576323039da8671803 namespace=k8s.io
Feb 9 09:26:52.894263 env[1251]: time="2024-02-09T09:26:52.894254366Z" level=info msg="cleaning up dead shim"
Feb 9 09:26:52.921833 env[1251]: time="2024-02-09T09:26:52.921749804Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:26:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6913 runtime=io.containerd.runc.v2\n"
Feb 9 09:26:53.370131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5836e7076aa93f06c8d9d26e863d442db41a60f0dfd38576323039da8671803-rootfs.mount: Deactivated successfully.
Feb 9 09:26:53.736180 env[1251]: time="2024-02-09T09:26:53.736072224Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 09:26:53.753658 env[1251]: time="2024-02-09T09:26:53.753524296Z" level=info msg="CreateContainer within sandbox \"2d32de81927795b04ecd0ef4ef30b4a0cf4bcf4b65d859394281bf9aafcd1200\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1daa306a0c8ac12ead95685c55f7dbea62f90dd44881f06d4044975982d2e813\""
Feb 9 09:26:53.754664 env[1251]: time="2024-02-09T09:26:53.754557874Z" level=info msg="StartContainer for \"1daa306a0c8ac12ead95685c55f7dbea62f90dd44881f06d4044975982d2e813\""
Feb 9 09:26:53.763439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935286912.mount: Deactivated successfully.
Feb 9 09:26:53.784407 env[1251]: time="2024-02-09T09:26:53.784383630Z" level=info msg="StartContainer for \"1daa306a0c8ac12ead95685c55f7dbea62f90dd44881f06d4044975982d2e813\" returns successfully"
Feb 9 09:26:53.927574 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 09:26:54.758468 kubelet[2367]: I0209 09:26:54.758451 2367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bhs47" podStartSLOduration=5.758431553 pod.CreationTimestamp="2024-02-09 09:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:26:54.758336394 +0000 UTC m=+749.227627979" watchObservedRunningTime="2024-02-09 09:26:54.758431553 +0000 UTC m=+749.227723133"
Feb 9 09:26:56.741061 systemd-networkd[1105]: lxc_health: Link UP
Feb 9 09:26:56.764866 systemd-networkd[1105]: lxc_health: Gained carrier
Feb 9 09:26:56.765587 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 09:26:58.380682 systemd-networkd[1105]: lxc_health: Gained IPv6LL
Feb 9 09:27:18.750713 systemd[1]: Started sshd@94-139.178.90.101:22-103.78.143.130:41294.service.
Feb 9 09:27:20.069978 sshd[7658]: Invalid user sreejith from 103.78.143.130 port 41294
Feb 9 09:27:20.071600 sshd[7658]: pam_faillock(sshd:auth): User unknown
Feb 9 09:27:20.071876 sshd[7658]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 09:27:20.071901 sshd[7658]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.78.143.130
Feb 9 09:27:20.072166 sshd[7658]: pam_faillock(sshd:auth): User unknown
Feb 9 09:27:21.944817 sshd[7658]: Failed password for invalid user sreejith from 103.78.143.130 port 41294 ssh2
Feb 9 09:27:22.470068 sshd[7658]: Received disconnect from 103.78.143.130 port 41294:11: Bye Bye [preauth]
Feb 9 09:27:22.470068 sshd[7658]: Disconnected from invalid user sreejith 103.78.143.130 port 41294 [preauth]
Feb 9 09:27:22.472592 systemd[1]: sshd@94-139.178.90.101:22-103.78.143.130:41294.service: Deactivated successfully.
Feb 9 09:27:25.601365 env[1251]: time="2024-02-09T09:27:25.601337358Z" level=info msg="StopPodSandbox for \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\""
Feb 9 09:27:25.601638 env[1251]: time="2024-02-09T09:27:25.601393467Z" level=info msg="TearDown network for sandbox \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\" successfully"
Feb 9 09:27:25.601638 env[1251]: time="2024-02-09T09:27:25.601418263Z" level=info msg="StopPodSandbox for \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\" returns successfully"
Feb 9 09:27:25.601638 env[1251]: time="2024-02-09T09:27:25.601612301Z" level=info msg="RemovePodSandbox for \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\""
Feb 9 09:27:25.601722 env[1251]: time="2024-02-09T09:27:25.601630364Z" level=info msg="Forcibly stopping sandbox \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\""
Feb 9 09:27:25.601722 env[1251]: time="2024-02-09T09:27:25.601672348Z" level=info msg="TearDown network for sandbox \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\" successfully"
Feb 9 09:27:25.603288 env[1251]: time="2024-02-09T09:27:25.603272141Z" level=info msg="RemovePodSandbox \"ebbae542adaf4c1b29ceeb6d0915bd531e7cf707e1966b5e4fd7fada760c631f\" returns successfully"
Feb 9 09:27:25.603502 env[1251]: time="2024-02-09T09:27:25.603485982Z" level=info msg="StopPodSandbox for \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\""
Feb 9 09:27:25.603558 env[1251]: time="2024-02-09T09:27:25.603535084Z" level=info msg="TearDown network for sandbox \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\" successfully"
Feb 9 09:27:25.603598 env[1251]: time="2024-02-09T09:27:25.603558049Z" level=info msg="StopPodSandbox for \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\" returns successfully"
Feb 9 09:27:25.603758 env[1251]: time="2024-02-09T09:27:25.603743943Z" level=info msg="RemovePodSandbox for \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\""
Feb 9 09:27:25.603792 env[1251]: time="2024-02-09T09:27:25.603762269Z" level=info msg="Forcibly stopping sandbox \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\""
Feb 9 09:27:25.603825 env[1251]: time="2024-02-09T09:27:25.603806297Z" level=info msg="TearDown network for sandbox \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\" successfully"
Feb 9 09:27:25.605024 env[1251]: time="2024-02-09T09:27:25.605010022Z" level=info msg="RemovePodSandbox \"959afc25e944dbb001c48403c3bebd05a129096a23ffb299c318dc2749bdf2f5\" returns successfully"
Feb 9 09:27:25.605192 env[1251]: time="2024-02-09T09:27:25.605176521Z" level=info msg="StopPodSandbox for \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\""
Feb 9 09:27:25.605246 env[1251]: time="2024-02-09T09:27:25.605223278Z" level=info msg="TearDown network for sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" successfully"
Feb 9 09:27:25.605282 env[1251]: time="2024-02-09T09:27:25.605245577Z" level=info msg="StopPodSandbox for \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" returns successfully"
Feb 9 09:27:25.605388 env[1251]: time="2024-02-09T09:27:25.605372940Z" level=info msg="RemovePodSandbox for \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\""
Feb 9 09:27:25.605424 env[1251]: time="2024-02-09T09:27:25.605391530Z" level=info msg="Forcibly stopping sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\""
Feb 9 09:27:25.605456 env[1251]: time="2024-02-09T09:27:25.605433580Z" level=info msg="TearDown network for sandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" successfully"
Feb 9 09:27:25.606658 env[1251]: time="2024-02-09T09:27:25.606643254Z" level=info msg="RemovePodSandbox \"79f2f26834fbbf35f4e73b48b03764bd59a93d48e0d9e965b3609c99cf745b71\" returns successfully"
Feb 9 09:28:00.437353 sshd[6484]: pam_unix(sshd:session): session closed for user core
Feb 9 09:28:00.439049 systemd[1]: sshd@93-139.178.90.101:22-147.75.109.163:37434.service: Deactivated successfully.
Feb 9 09:28:00.439805 systemd[1]: session-91.scope: Deactivated successfully.
Feb 9 09:28:00.439843 systemd-logind[1237]: Session 91 logged out. Waiting for processes to exit.
Feb 9 09:28:00.440377 systemd-logind[1237]: Removed session 91.