Feb 13 03:49:42.552574 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 13 03:49:42.552587 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 03:49:42.552594 kernel: BIOS-provided physical RAM map: Feb 13 03:49:42.552597 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Feb 13 03:49:42.552601 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Feb 13 03:49:42.552604 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Feb 13 03:49:42.552609 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Feb 13 03:49:42.552613 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Feb 13 03:49:42.552616 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819cffff] usable Feb 13 03:49:42.552620 kernel: BIOS-e820: [mem 0x00000000819d0000-0x00000000819d0fff] ACPI NVS Feb 13 03:49:42.552625 kernel: BIOS-e820: [mem 0x00000000819d1000-0x00000000819d1fff] reserved Feb 13 03:49:42.552628 kernel: BIOS-e820: [mem 0x00000000819d2000-0x000000008afccfff] usable Feb 13 03:49:42.552632 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Feb 13 03:49:42.552636 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Feb 13 03:49:42.552641 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Feb 13 03:49:42.552645 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Feb 13 03:49:42.552649 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Feb 13 03:49:42.552653 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Feb 13 03:49:42.552657 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 03:49:42.552661 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Feb 13 03:49:42.552665 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Feb 13 03:49:42.552669 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 13 03:49:42.552673 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Feb 13 03:49:42.552677 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Feb 13 03:49:42.552681 kernel: NX (Execute Disable) protection: active Feb 13 03:49:42.552685 kernel: SMBIOS 3.2.1 present. 
Feb 13 03:49:42.552690 kernel: DMI: Supermicro Super Server/X11SCM-F, BIOS 1.9 09/16/2022 Feb 13 03:49:42.552694 kernel: tsc: Detected 3400.000 MHz processor Feb 13 03:49:42.552698 kernel: tsc: Detected 3399.906 MHz TSC Feb 13 03:49:42.552703 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 03:49:42.552707 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 03:49:42.552711 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Feb 13 03:49:42.552715 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 03:49:42.552720 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Feb 13 03:49:42.552724 kernel: Using GB pages for direct mapping Feb 13 03:49:42.552728 kernel: ACPI: Early table checksum verification disabled Feb 13 03:49:42.552733 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Feb 13 03:49:42.552737 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 13 03:49:42.552741 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Feb 13 03:49:42.552745 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 13 03:49:42.552751 kernel: ACPI: FACS 0x000000008C66CF80 000040 Feb 13 03:49:42.552756 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Feb 13 03:49:42.552761 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Feb 13 03:49:42.552766 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 13 03:49:42.552770 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 13 03:49:42.552775 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Feb 13 03:49:42.552779 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 13 03:49:42.552784 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 13 03:49:42.552788 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 13 03:49:42.552793 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 03:49:42.552798 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 13 03:49:42.552802 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 13 03:49:42.552807 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 03:49:42.552811 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 03:49:42.552816 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 13 03:49:42.552820 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 13 03:49:42.552825 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 03:49:42.552829 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 13 03:49:42.552834 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 13 03:49:42.552839 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Feb 13 03:49:42.552843 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 13 03:49:42.552848 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 13 
03:49:42.552852 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 13 03:49:42.552857 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Feb 13 03:49:42.552861 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 13 03:49:42.552866 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 13 03:49:42.552870 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 13 03:49:42.552875 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Feb 13 03:49:42.552880 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 13 03:49:42.552885 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Feb 13 03:49:42.552889 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Feb 13 03:49:42.552893 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Feb 13 03:49:42.552898 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Feb 13 03:49:42.552902 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Feb 13 03:49:42.552907 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Feb 13 03:49:42.552912 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Feb 13 03:49:42.552917 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Feb 13 03:49:42.552921 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Feb 13 03:49:42.552926 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Feb 13 03:49:42.552930 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Feb 13 03:49:42.552934 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Feb 13 03:49:42.552939 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Feb 13 03:49:42.552943 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Feb 13 03:49:42.552948 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Feb 13 03:49:42.552953 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Feb 13 03:49:42.552957 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Feb 13 03:49:42.552962 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Feb 13 03:49:42.552966 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Feb 13 03:49:42.552971 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Feb 13 03:49:42.552975 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Feb 13 03:49:42.552980 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Feb 13 03:49:42.552984 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Feb 13 03:49:42.552989 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Feb 13 03:49:42.552994 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Feb 13 03:49:42.552998 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Feb 13 03:49:42.553003 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Feb 13 03:49:42.553007 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Feb 13 03:49:42.553012 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Feb 13 
03:49:42.553016 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Feb 13 03:49:42.553021 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Feb 13 03:49:42.553025 kernel: No NUMA configuration found Feb 13 03:49:42.553030 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Feb 13 03:49:42.553035 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Feb 13 03:49:42.553039 kernel: Zone ranges: Feb 13 03:49:42.553044 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 03:49:42.553048 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 03:49:42.553053 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Feb 13 03:49:42.553057 kernel: Movable zone start for each node Feb 13 03:49:42.553062 kernel: Early memory node ranges Feb 13 03:49:42.553066 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 13 03:49:42.553071 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 13 03:49:42.553075 kernel: node 0: [mem 0x0000000040400000-0x00000000819cffff] Feb 13 03:49:42.553080 kernel: node 0: [mem 0x00000000819d2000-0x000000008afccfff] Feb 13 03:49:42.553085 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Feb 13 03:49:42.553089 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Feb 13 03:49:42.553094 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Feb 13 03:49:42.553098 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Feb 13 03:49:42.553103 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 03:49:42.553111 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 13 03:49:42.553116 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 13 03:49:42.553121 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 13 03:49:42.553126 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Feb 13 03:49:42.553131 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Feb 13 03:49:42.553136 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Feb 13 03:49:42.553141 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Feb 13 03:49:42.553146 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 13 03:49:42.553151 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 03:49:42.553155 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 03:49:42.553160 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 03:49:42.553166 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 03:49:42.553171 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 13 03:49:42.553175 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 03:49:42.553180 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 03:49:42.553185 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 03:49:42.553190 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 03:49:42.553194 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 03:49:42.553199 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 03:49:42.553204 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 03:49:42.553209 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 03:49:42.553214 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 03:49:42.553219 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 03:49:42.553223 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x10] high edge lint[0x1]) Feb 13 03:49:42.553228 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 13 03:49:42.553233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 03:49:42.553238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 03:49:42.553243 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 03:49:42.553247 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 03:49:42.553253 kernel: TSC deadline timer available Feb 13 03:49:42.553258 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 13 03:49:42.553263 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Feb 13 03:49:42.553267 kernel: Booting paravirtualized kernel on bare hardware Feb 13 03:49:42.553272 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 03:49:42.553277 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Feb 13 03:49:42.553282 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Feb 13 03:49:42.553287 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 13 03:49:42.553291 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 03:49:42.553297 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Feb 13 03:49:42.553302 kernel: Policy zone: Normal Feb 13 03:49:42.553307 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 03:49:42.553312 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 03:49:42.553317 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 13 03:49:42.553322 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 13 03:49:42.553327 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 03:49:42.553332 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved) Feb 13 03:49:42.553337 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 03:49:42.553342 kernel: ftrace: allocating 34475 entries in 135 pages Feb 13 03:49:42.553347 kernel: ftrace: allocated 135 pages with 4 groups Feb 13 03:49:42.553352 kernel: rcu: Hierarchical RCU implementation. Feb 13 03:49:42.553357 kernel: rcu: RCU event tracing is enabled. Feb 13 03:49:42.553362 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 03:49:42.553367 kernel: Rude variant of Tasks RCU enabled. Feb 13 03:49:42.553371 kernel: Tracing variant of Tasks RCU enabled. Feb 13 03:49:42.553377 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 03:49:42.553382 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 03:49:42.553387 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 13 03:49:42.553392 kernel: random: crng init done Feb 13 03:49:42.553396 kernel: Console: colour dummy device 80x25 Feb 13 03:49:42.553401 kernel: printk: console [tty0] enabled Feb 13 03:49:42.553406 kernel: printk: console [ttyS1] enabled Feb 13 03:49:42.553411 kernel: ACPI: Core revision 20210730 Feb 13 03:49:42.553416 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Feb 13 03:49:42.553420 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 03:49:42.553426 kernel: DMAR: Host address width 39 Feb 13 03:49:42.553431 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 13 03:49:42.553438 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 13 03:49:42.553461 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Feb 13 03:49:42.553466 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Feb 13 03:49:42.553471 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 13 03:49:42.553476 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 13 03:49:42.553494 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 13 03:49:42.553499 kernel: x2apic enabled Feb 13 03:49:42.553504 kernel: Switched APIC routing to cluster x2apic. Feb 13 03:49:42.553509 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 13 03:49:42.553514 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 13 03:49:42.553519 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 13 03:49:42.553524 kernel: process: using mwait in idle threads Feb 13 03:49:42.553529 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 03:49:42.553533 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 03:49:42.553538 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 03:49:42.553543 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 13 03:49:42.553548 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 13 03:49:42.553553 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 03:49:42.553558 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 03:49:42.553563 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 03:49:42.553567 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 03:49:42.553572 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 13 03:49:42.553577 kernel: TAA: Mitigation: TSX disabled Feb 13 03:49:42.553581 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 13 03:49:42.553586 kernel: SRBDS: Mitigation: Microcode Feb 13 03:49:42.553591 kernel: GDS: Vulnerable: No microcode Feb 13 03:49:42.553596 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 03:49:42.553601 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 03:49:42.553606 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 03:49:42.553611 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 03:49:42.553616 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 03:49:42.553621 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 03:49:42.553626 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 03:49:42.553630 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 03:49:42.553635 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 13 03:49:42.553640 kernel: Freeing SMP alternatives memory: 32K Feb 13 03:49:42.553644 kernel: pid_max: default: 32768 minimum: 301 Feb 13 03:49:42.553649 kernel: LSM: Security Framework initializing Feb 13 03:49:42.553655 kernel: SELinux: Initializing. Feb 13 03:49:42.553659 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 03:49:42.553664 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 03:49:42.553669 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 13 03:49:42.553674 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 03:49:42.553679 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 13 03:49:42.553683 kernel: ... version: 4 Feb 13 03:49:42.553688 kernel: ... bit width: 48 Feb 13 03:49:42.553693 kernel: ... generic registers: 4 Feb 13 03:49:42.553698 kernel: ... value mask: 0000ffffffffffff Feb 13 03:49:42.553703 kernel: ... max period: 00007fffffffffff Feb 13 03:49:42.553708 kernel: ... fixed-purpose events: 3 Feb 13 03:49:42.553713 kernel: ... event mask: 000000070000000f Feb 13 03:49:42.553718 kernel: signal: max sigframe size: 2032 Feb 13 03:49:42.553723 kernel: rcu: Hierarchical SRCU implementation. Feb 13 03:49:42.553727 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 13 03:49:42.553732 kernel: smp: Bringing up secondary CPUs ... Feb 13 03:49:42.553737 kernel: x86: Booting SMP configuration: Feb 13 03:49:42.553742 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Feb 13 03:49:42.553747 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 03:49:42.553752 kernel: #9 #10 #11 #12 #13 #14 #15 Feb 13 03:49:42.553757 kernel: smp: Brought up 1 node, 16 CPUs Feb 13 03:49:42.553762 kernel: smpboot: Max logical packages: 1 Feb 13 03:49:42.553767 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 13 03:49:42.553772 kernel: devtmpfs: initialized Feb 13 03:49:42.553777 kernel: x86/mm: Memory block size: 128MB Feb 13 03:49:42.553781 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819d0000-0x819d0fff] (4096 bytes) Feb 13 03:49:42.553786 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Feb 13 03:49:42.553792 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 03:49:42.553797 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 03:49:42.553802 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 03:49:42.553807 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 03:49:42.553811 kernel: audit: initializing netlink subsys (disabled) Feb 13 03:49:42.553816 kernel: audit: type=2000 audit(1707796177.040:1): state=initialized audit_enabled=0 res=1 Feb 13 03:49:42.553821 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 03:49:42.553826 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 03:49:42.553830 kernel: cpuidle: using governor menu Feb 13 03:49:42.553836 kernel: ACPI: bus type PCI registered Feb 13 03:49:42.553841 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 03:49:42.553846 kernel: dca service started, version 1.12.1 Feb 13 03:49:42.553851 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 03:49:42.553855 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Feb 13 03:49:42.553860 kernel: PCI: Using configuration type 1 for base access Feb 13 03:49:42.553865 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 13 03:49:42.553870 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 03:49:42.553875 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 03:49:42.553880 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 03:49:42.553885 kernel: ACPI: Added _OSI(Module Device) Feb 13 03:49:42.553890 kernel: ACPI: Added _OSI(Processor Device) Feb 13 03:49:42.553894 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 03:49:42.553899 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 03:49:42.553904 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 13 03:49:42.553909 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 13 03:49:42.553914 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 13 03:49:42.553918 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 13 03:49:42.553924 kernel: ACPI: Dynamic OEM Table Load: Feb 13 03:49:42.553929 kernel: ACPI: SSDT 0xFFFF999B40212100 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 13 03:49:42.553934 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Feb 13 03:49:42.553938 kernel: ACPI: Dynamic OEM Table Load: Feb 13 03:49:42.553943 kernel: ACPI: SSDT 0xFFFF999B41AE7C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 13 03:49:42.553948 kernel: ACPI: Dynamic OEM Table Load: Feb 13 03:49:42.553953 kernel: ACPI: SSDT 0xFFFF999B41A5F000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 13 03:49:42.553958 kernel: ACPI: Dynamic OEM Table Load: Feb 13 03:49:42.553962 kernel: ACPI: SSDT 0xFFFF999B41A5B000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 13 03:49:42.553967 kernel: ACPI: Dynamic OEM Table Load: Feb 13 03:49:42.553973 kernel: ACPI: SSDT 0xFFFF999B4014B000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 13 03:49:42.553977 kernel: ACPI: Dynamic OEM Table Load: Feb 13 03:49:42.553982 kernel: ACPI: SSDT 0xFFFF999B41AE5800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 13 03:49:42.553987 kernel: ACPI: Interpreter enabled Feb 13 03:49:42.553992 kernel: ACPI: PM: (supports S0 S5) Feb 13 03:49:42.553997 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 03:49:42.554001 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 13 03:49:42.554006 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 13 03:49:42.554011 kernel: HEST: Table parsing has been initialized. Feb 13 03:49:42.554016 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Feb 13 03:49:42.554021 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 03:49:42.554026 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 13 03:49:42.554031 kernel: ACPI: PM: Power Resource [USBC] Feb 13 03:49:42.554036 kernel: ACPI: PM: Power Resource [V0PR] Feb 13 03:49:42.554040 kernel: ACPI: PM: Power Resource [V1PR] Feb 13 03:49:42.554045 kernel: ACPI: PM: Power Resource [V2PR] Feb 13 03:49:42.554050 kernel: ACPI: PM: Power Resource [WRST] Feb 13 03:49:42.554055 kernel: ACPI: PM: Power Resource [FN00] Feb 13 03:49:42.554060 kernel: ACPI: PM: Power Resource [FN01] Feb 13 03:49:42.554065 kernel: ACPI: PM: Power Resource [FN02] Feb 13 03:49:42.554070 kernel: ACPI: PM: Power Resource [FN03] Feb 13 03:49:42.554074 kernel: ACPI: PM: Power Resource [FN04] Feb 13 03:49:42.554079 kernel: ACPI: PM: Power Resource [PIN] Feb 13 03:49:42.554084 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 13 03:49:42.554149 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 03:49:42.554195 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 13 03:49:42.554238 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 13 03:49:42.554245 kernel: PCI host bridge to bus 0000:00 Feb 13 03:49:42.554287 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 03:49:42.554324 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 03:49:42.554361 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 03:49:42.554397 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Feb 13 03:49:42.554433 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 13 03:49:42.554507 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 13 03:49:42.554557 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 13 03:49:42.554607 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 13 03:49:42.554650 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 13 03:49:42.554694 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 13 03:49:42.554736 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Feb 13 03:49:42.554783 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 13 03:49:42.554826 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Feb 13 03:49:42.554873 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 13 03:49:42.554914 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Feb 13 03:49:42.554958 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 13 03:49:42.555002 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 13 03:49:42.555046 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Feb 13 03:49:42.555087 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Feb 13 03:49:42.555131 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 13 03:49:42.555173 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 03:49:42.555220 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 13 03:49:42.555262 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 03:49:42.555308 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 13 03:49:42.555350 kernel: pci 0000:00:16.0: reg 0x10: [mem 
0x9551a000-0x9551afff 64bit] Feb 13 03:49:42.555390 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 13 03:49:42.555434 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 13 03:49:42.555496 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Feb 13 03:49:42.555538 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 13 03:49:42.555584 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 13 03:49:42.555628 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Feb 13 03:49:42.555671 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 13 03:49:42.555716 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 13 03:49:42.555758 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Feb 13 03:49:42.555800 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Feb 13 03:49:42.555841 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Feb 13 03:49:42.555882 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Feb 13 03:49:42.555930 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Feb 13 03:49:42.555975 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Feb 13 03:49:42.556017 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 13 03:49:42.556062 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 13 03:49:42.556105 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 13 03:49:42.556151 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 13 03:49:42.556194 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 13 03:49:42.556244 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 13 03:49:42.556288 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 13 03:49:42.556334 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 13 03:49:42.556377 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 13 03:49:42.556424 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Feb 13 03:49:42.556471 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Feb 13 03:49:42.556517 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 13 03:49:42.556559 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 03:49:42.556607 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 13 03:49:42.556654 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 13 03:49:42.556698 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Feb 13 03:49:42.556740 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 13 03:49:42.556787 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 13 03:49:42.556830 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 13 03:49:42.556878 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 13 03:49:42.556924 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 13 03:49:42.556968 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Feb 13 03:49:42.557012 kernel: pci 0000:01:00.0: PME# supported from D3cold Feb 13 03:49:42.557055 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 03:49:42.557098 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 03:49:42.557146 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 13 03:49:42.557191 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit 
pref] Feb 13 03:49:42.557236 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Feb 13 03:49:42.557280 kernel: pci 0000:01:00.1: PME# supported from D3cold Feb 13 03:49:42.557323 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 03:49:42.557367 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 03:49:42.557410 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 03:49:42.557456 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 03:49:42.557501 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 03:49:42.557543 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 03:49:42.557594 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 03:49:42.557638 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 03:49:42.557695 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 03:49:42.557738 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 03:49:42.557781 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 03:49:42.557823 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 03:49:42.557866 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 03:49:42.557909 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 03:49:42.557956 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 03:49:42.558000 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 03:49:42.558042 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 03:49:42.558085 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 03:49:42.558128 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 03:49:42.558170 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 03:49:42.558211 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 03:49:42.558256 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 03:49:42.558297 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 03:49:42.558345 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 03:49:42.558388 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 03:49:42.558432 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 03:49:42.558520 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 03:49:42.558562 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 03:49:42.558607 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 03:49:42.558648 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 03:49:42.558696 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 03:49:42.558746 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 03:49:42.558793 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 03:49:42.558837 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 03:49:42.558883 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 03:49:42.558928 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 03:49:42.558975 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 03:49:42.559019 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 03:49:42.559064 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 03:49:42.559107 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 
03:49:42.559150 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 03:49:42.559189 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 03:49:42.559195 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 03:49:42.559222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 03:49:42.559228 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 03:49:42.559253 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 03:49:42.559258 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 03:49:42.559263 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 03:49:42.559268 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 03:49:42.559273 kernel: iommu: Default domain type: Translated Feb 13 03:49:42.559278 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 03:49:42.559322 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 03:49:42.559370 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 03:49:42.559414 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 03:49:42.559422 kernel: vgaarb: loaded Feb 13 03:49:42.559427 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 03:49:42.559432 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 03:49:42.559439 kernel: PTP clock support registered Feb 13 03:49:42.559444 kernel: PCI: Using ACPI for IRQ routing Feb 13 03:49:42.559469 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 03:49:42.559474 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 03:49:42.559481 kernel: e820: reserve RAM buffer [mem 0x819d0000-0x83ffffff] Feb 13 03:49:42.559486 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 03:49:42.559491 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 03:49:42.559515 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 03:49:42.559520 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 03:49:42.559525 kernel: clocksource: Switched to clocksource tsc-early Feb 13 03:49:42.559530 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 03:49:42.559536 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 03:49:42.559541 kernel: pnp: PnP ACPI init Feb 13 03:49:42.559586 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 03:49:42.559629 kernel: pnp 00:02: [dma 0 disabled] Feb 13 03:49:42.559671 kernel: pnp 00:03: [dma 0 disabled] Feb 13 03:49:42.559714 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 03:49:42.559752 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 03:49:42.559793 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 03:49:42.559835 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 03:49:42.559873 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 03:49:42.559910 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 03:49:42.559948 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 03:49:42.559985 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 03:49:42.560021 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 03:49:42.560060 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 03:49:42.560099 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] 
could not be reserved Feb 13 03:49:42.560140 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 03:49:42.560177 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 03:49:42.560215 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 03:49:42.560252 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 03:49:42.560289 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 03:49:42.560326 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 03:49:42.560364 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 03:49:42.560406 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 03:49:42.560413 kernel: pnp: PnP ACPI: found 10 devices Feb 13 03:49:42.560419 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 03:49:42.560424 kernel: NET: Registered PF_INET protocol family Feb 13 03:49:42.560429 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 03:49:42.560435 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 03:49:42.560465 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 03:49:42.560471 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 03:49:42.560476 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 03:49:42.560481 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 03:49:42.560486 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 03:49:42.560511 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 03:49:42.560516 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 03:49:42.560521 kernel: NET: Registered PF_XDP protocol family Feb 13 03:49:42.560563 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 03:49:42.560608 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 03:49:42.560649 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 03:49:42.560693 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 03:49:42.560736 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 03:49:42.560781 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 03:49:42.560824 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 03:49:42.560865 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 03:49:42.560908 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 03:49:42.560952 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 03:49:42.560994 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 03:49:42.561035 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 03:49:42.561077 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 03:49:42.561118 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 03:49:42.561162 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 03:49:42.561203 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 03:49:42.561244 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 03:49:42.561287 kernel: pci 0000:00:1c.0: PCI bridge 
to [bus 05] Feb 13 03:49:42.561329 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 03:49:42.561373 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 03:49:42.561415 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 03:49:42.561481 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 03:49:42.561525 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 03:49:42.561568 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 03:49:42.561606 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 03:49:42.561645 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 03:49:42.561683 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 03:49:42.561720 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 03:49:42.561757 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 03:49:42.561793 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 03:49:42.561836 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 03:49:42.561879 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 03:49:42.561924 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 03:49:42.561964 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 03:49:42.562007 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 03:49:42.562046 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 03:49:42.562091 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 03:49:42.562131 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 03:49:42.562173 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 03:49:42.562214 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 03:49:42.562221 kernel: PCI: CLS 64 bytes, default 64 Feb 13 03:49:42.562227 kernel: DMAR: No ATSR found Feb 13 03:49:42.562232 kernel: DMAR: No SATC found Feb 13 03:49:42.562237 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 03:49:42.562282 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 03:49:42.562324 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 03:49:42.562368 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 03:49:42.562410 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 03:49:42.562454 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 03:49:42.562496 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 03:49:42.562539 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 03:49:42.562580 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 03:49:42.562623 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 03:49:42.562666 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 03:49:42.562708 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 03:49:42.562750 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 03:49:42.562792 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 03:49:42.562835 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 03:49:42.562877 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 03:49:42.562919 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 03:49:42.562962 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 13 03:49:42.563005 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 03:49:42.563048 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 
03:49:42.563090 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 03:49:42.563132 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 03:49:42.563177 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 03:49:42.563221 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 03:49:42.563265 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 03:49:42.563312 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 03:49:42.563357 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 03:49:42.563402 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 03:49:42.563409 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 03:49:42.563415 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 03:49:42.563420 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 03:49:42.563426 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 03:49:42.563431 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 03:49:42.563438 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 03:49:42.563445 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 03:49:42.563491 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 03:49:42.563499 kernel: Initialise system trusted keyrings Feb 13 03:49:42.563504 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 03:49:42.563510 kernel: Key type asymmetric registered Feb 13 03:49:42.563515 kernel: Asymmetric key parser 'x509' registered Feb 13 03:49:42.563520 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 03:49:42.563525 kernel: io scheduler mq-deadline registered Feb 13 03:49:42.563532 kernel: io scheduler kyber registered Feb 13 03:49:42.563537 kernel: io scheduler bfq registered Feb 13 03:49:42.563579 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 03:49:42.563622 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 03:49:42.563665 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 03:49:42.563708 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 03:49:42.563751 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 03:49:42.563793 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 03:49:42.563842 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 03:49:42.563851 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 03:49:42.563856 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 03:49:42.563862 kernel: pstore: Registered erst as persistent store backend Feb 13 03:49:42.563867 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 03:49:42.563872 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 03:49:42.563877 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 03:49:42.563883 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 03:49:42.563889 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 03:49:42.563934 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 03:49:42.563942 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 03:49:42.563981 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 03:49:42.564021 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 03:49:42.564060 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T03:49:41 UTC (1707796181) Feb 13 03:49:42.564098 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 03:49:42.564106 kernel: fail to initialize ptp_kvm Feb 13 03:49:42.564113 kernel: intel_pstate: Intel P-state driver initializing Feb 13 03:49:42.564118 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 03:49:42.564123 kernel: intel_pstate: HWP enabled Feb 13 03:49:42.564128 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 03:49:42.564134 kernel: vesafb: scrolling: redraw Feb 13 03:49:42.564139 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 03:49:42.564144 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000ba99cf12, using 768k, total 768k Feb 13 03:49:42.564150 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 03:49:42.564156 kernel: fb0: VESA VGA frame buffer device Feb 13 03:49:42.564161 kernel: NET: Registered PF_INET6 protocol family Feb 13 03:49:42.564166 kernel: Segment Routing with IPv6 Feb 13 03:49:42.564172 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 03:49:42.564177 kernel: NET: Registered PF_PACKET protocol family Feb 13 03:49:42.564182 kernel: Key type dns_resolver registered Feb 13 03:49:42.564187 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 03:49:42.564193 kernel: microcode: Microcode Update Driver: v2.2. Feb 13 03:49:42.564198 kernel: IPI shorthand broadcast: enabled Feb 13 03:49:42.564203 kernel: sched_clock: Marking stable (1729614136, 1339444444)->(4483143103, -1414084523) Feb 13 03:49:42.564209 kernel: registered taskstats version 1 Feb 13 03:49:42.564214 kernel: Loading compiled-in X.509 certificates Feb 13 03:49:42.564220 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 03:49:42.564225 kernel: Key type .fscrypt registered Feb 13 03:49:42.564230 kernel: Key type fscrypt-provisioning registered Feb 13 03:49:42.564235 kernel: pstore: Using crash dump compression: deflate Feb 13 03:49:42.564240 kernel: ima: Allocated hash algorithm: sha1 Feb 13 03:49:42.564246 kernel: ima: No architecture policies found Feb 13 03:49:42.564252 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 03:49:42.564257 kernel: Write protecting the kernel read-only data: 28672k Feb 13 03:49:42.564262 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 03:49:42.564268 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 03:49:42.564273 kernel: Run /init as init process Feb 13 03:49:42.564278 kernel: with arguments: Feb 13 03:49:42.564283 kernel: /init Feb 13 03:49:42.564289 kernel: with environment: Feb 13 03:49:42.564294 kernel: HOME=/ Feb 13 03:49:42.564299 kernel: TERM=linux Feb 13 03:49:42.564305 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 03:49:42.564311 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 03:49:42.564318 systemd[1]: Detected architecture x86-64. 
Feb 13 03:49:42.564324 systemd[1]: Running in initrd.
Feb 13 03:49:42.564329 systemd[1]: No hostname configured, using default hostname.
Feb 13 03:49:42.564335 systemd[1]: Hostname set to .
Feb 13 03:49:42.564340 systemd[1]: Initializing machine ID from random generator.
Feb 13 03:49:42.564347 systemd[1]: Queued start job for default target initrd.target.
Feb 13 03:49:42.564352 systemd[1]: Started systemd-ask-password-console.path.
Feb 13 03:49:42.564358 systemd[1]: Reached target cryptsetup.target.
Feb 13 03:49:42.564363 systemd[1]: Reached target paths.target.
Feb 13 03:49:42.564369 systemd[1]: Reached target slices.target.
Feb 13 03:49:42.564374 systemd[1]: Reached target swap.target.
Feb 13 03:49:42.564380 systemd[1]: Reached target timers.target.
Feb 13 03:49:42.564385 systemd[1]: Listening on iscsid.socket.
Feb 13 03:49:42.564392 systemd[1]: Listening on iscsiuio.socket.
Feb 13 03:49:42.564398 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 13 03:49:42.564404 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 13 03:49:42.564409 systemd[1]: Listening on systemd-journald.socket.
Feb 13 03:49:42.564415 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
Feb 13 03:49:42.564420 systemd[1]: Listening on systemd-networkd.socket.
Feb 13 03:49:42.564426 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
Feb 13 03:49:42.564431 kernel: clocksource: Switched to clocksource tsc
Feb 13 03:49:42.564440 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 13 03:49:42.564445 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 13 03:49:42.564451 systemd[1]: Reached target sockets.target.
Feb 13 03:49:42.564457 systemd[1]: Starting kmod-static-nodes.service...
Feb 13 03:49:42.564462 systemd[1]: Finished network-cleanup.service.
Feb 13 03:49:42.564468 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 03:49:42.564473 systemd[1]: Starting systemd-journald.service...
Feb 13 03:49:42.564479 systemd[1]: Starting systemd-modules-load.service...
Feb 13 03:49:42.564486 systemd-journald[269]: Journal started
Feb 13 03:49:42.564512 systemd-journald[269]: Runtime Journal (/run/log/journal/8d4ae7c54a20497e91da982aa06e4420) is 8.0M, max 640.1M, 632.1M free.
Feb 13 03:49:42.568039 systemd-modules-load[270]: Inserted module 'overlay'
Feb 13 03:49:42.627565 kernel: audit: type=1334 audit(1707796182.573:2): prog-id=6 op=LOAD
Feb 13 03:49:42.627575 systemd[1]: Starting systemd-resolved.service...
Feb 13 03:49:42.627598 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 03:49:42.573000 audit: BPF prog-id=6 op=LOAD
Feb 13 03:49:42.659486 kernel: Bridge firewalling registered
Feb 13 03:49:42.659501 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 13 03:49:42.674481 systemd-modules-load[270]: Inserted module 'br_netfilter'
Feb 13 03:49:42.710533 systemd[1]: Started systemd-journald.service.
Feb 13 03:49:42.710545 kernel: SCSI subsystem initialized
Feb 13 03:49:42.680133 systemd-resolved[272]: Positive Trust Anchors:
Feb 13 03:49:42.807547 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 03:49:42.807559 kernel: audit: type=1130 audit(1707796182.730:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:42.807567 kernel: device-mapper: uevent: version 1.0.3
Feb 13 03:49:42.807574 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 13 03:49:42.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:42.680139 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 03:49:42.871736 kernel: audit: type=1130 audit(1707796182.828:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:42.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:42.680159 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 13 03:49:42.946649 kernel: audit: type=1130 audit(1707796182.880:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:42.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:42.681711 systemd-resolved[272]: Defaulting to hostname 'linux'.
Feb 13 03:49:42.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:42.732118 systemd[1]: Started systemd-resolved.service.
Feb 13 03:49:43.052202 kernel: audit: type=1130 audit(1707796182.954:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:43.052213 kernel: audit: type=1130 audit(1707796183.006:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:43.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:42.829200 systemd-modules-load[270]: Inserted module 'dm_multipath'
Feb 13 03:49:43.106518 kernel: audit: type=1130 audit(1707796183.059:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:43.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:42.829591 systemd[1]: Finished kmod-static-nodes.service.
Feb 13 03:49:42.881737 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 03:49:42.955733 systemd[1]: Finished systemd-modules-load.service.
Feb 13 03:49:43.007739 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 13 03:49:43.060704 systemd[1]: Reached target nss-lookup.target.
Feb 13 03:49:43.115044 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 13 03:49:43.135047 systemd[1]: Starting systemd-sysctl.service...
Feb 13 03:49:43.135333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 13 03:49:43.138141 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 13 03:49:43.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:43.139079 systemd[1]: Finished systemd-sysctl.service.
Feb 13 03:49:43.250262 kernel: audit: type=1130 audit(1707796183.136:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:43.250274 kernel: audit: type=1130 audit(1707796183.200:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:43.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:43.201803 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 13 03:49:43.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:43.259103 systemd[1]: Starting dracut-cmdline.service...
Feb 13 03:49:43.266627 dracut-cmdline[293]: dracut-dracut-053
Feb 13 03:49:43.266627 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Feb 13 03:49:43.266627 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 03:49:43.347546 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 03:49:43.347559 kernel: iscsi: registered transport (tcp)
Feb 13 03:49:43.396198 kernel: iscsi: registered transport (qla4xxx)
Feb 13 03:49:43.396216 kernel: QLogic iSCSI HBA Driver
Feb 13 03:49:43.412347 systemd[1]: Finished dracut-cmdline.service.
Feb 13 03:49:43.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:43.422148 systemd[1]: Starting dracut-pre-udev.service...
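
The dracut-cmdline lines above echo the kernel command line (wrapped mid-token by the journal: "root=LA" / "BEL=ROOT" is root=LABEL=ROOT). A toy sketch of the key=value parsing such tooling performs, using the cmdline from the log; this is an illustrative helper, not dracut's actual implementation:

    import shlex
    # Kernel command line as logged (rejoined across the journal wrap).
    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
               "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
               "console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected "
               "flatcar.oem.id=packet flatcar.autologin "
               "verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4")
    params = {}
    for tok in shlex.split(cmdline):
        key, sep, val = tok.partition("=")   # split at the first '=' only
        # repeated keys (console=...) accumulate; bare flags become True
        params.setdefault(key, []).append(val if sep else True)
    print(params["root"])     # ['LABEL=ROOT']
    print(params["console"])  # ['tty0', 'ttyS1,115200n8']
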
Feb 13 03:49:43.477469 kernel: raid6: avx2x4 gen() 48008 MB/s
Feb 13 03:49:43.512499 kernel: raid6: avx2x4 xor() 14979 MB/s
Feb 13 03:49:43.547494 kernel: raid6: avx2x2 gen() 53840 MB/s
Feb 13 03:49:43.582479 kernel: raid6: avx2x2 xor() 32835 MB/s
Feb 13 03:49:43.617503 kernel: raid6: avx2x1 gen() 45481 MB/s
Feb 13 03:49:43.651493 kernel: raid6: avx2x1 xor() 28535 MB/s
Feb 13 03:49:43.685500 kernel: raid6: sse2x4 gen() 21824 MB/s
Feb 13 03:49:43.719500 kernel: raid6: sse2x4 xor() 11991 MB/s
Feb 13 03:49:43.753472 kernel: raid6: sse2x2 gen() 22133 MB/s
Feb 13 03:49:43.787500 kernel: raid6: sse2x2 xor() 13758 MB/s
Feb 13 03:49:43.821503 kernel: raid6: sse2x1 gen() 18695 MB/s
Feb 13 03:49:43.872945 kernel: raid6: sse2x1 xor() 9132 MB/s
Feb 13 03:49:43.872960 kernel: raid6: using algorithm avx2x2 gen() 53840 MB/s
Feb 13 03:49:43.872968 kernel: raid6: .... xor() 32835 MB/s, rmw enabled
Feb 13 03:49:43.890918 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 03:49:43.936445 kernel: xor: automatically using best checksumming function avx
Feb 13 03:49:44.014469 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 13 03:49:44.019856 systemd[1]: Finished dracut-pre-udev.service.
Feb 13 03:49:44.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:44.028000 audit: BPF prog-id=7 op=LOAD
Feb 13 03:49:44.028000 audit: BPF prog-id=8 op=LOAD
Feb 13 03:49:44.030348 systemd[1]: Starting systemd-udevd.service...
Feb 13 03:49:44.038560 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Feb 13 03:49:44.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:44.044723 systemd[1]: Started systemd-udevd.service.
Feb 13 03:49:44.087553 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Feb 13 03:49:44.063084 systemd[1]: Starting dracut-pre-trigger.service...
Feb 13 03:49:44.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:44.094147 systemd[1]: Finished dracut-pre-trigger.service.
Feb 13 03:49:44.105943 systemd[1]: Starting systemd-udev-trigger.service...
Feb 13 03:49:44.155172 systemd[1]: Finished systemd-udev-trigger.service.
Feb 13 03:49:44.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:44.188464 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 03:49:44.190445 kernel: libata version 3.00 loaded.
Feb 13 03:49:44.229562 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 03:49:44.229636 kernel: AES CTR mode by8 optimization enabled
Feb 13 03:49:44.229644 kernel: ahci 0000:00:17.0: version 3.0
Feb 13 03:49:44.230443 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Feb 13 03:49:44.230456 kernel: ACPI: bus type USB registered
Feb 13 03:49:44.240485 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Feb 13 03:49:44.240566 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Feb 13 03:49:44.248492 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
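
The raid6 lines above are the kernel benchmarking each available gen()/xor() implementation at boot and keeping the fastest, here avx2x2. A toy version of that selection using the throughputs actually logged:

    # Pick the fastest raid6 gen() implementation, as the boot benchmark does.
    gen_mbps = {
        "avx2x4": 48008, "avx2x2": 53840, "avx2x1": 45481,
        "sse2x4": 21824, "sse2x2": 22133, "sse2x1": 18695,
    }
    best = max(gen_mbps, key=gen_mbps.get)
    print(f"raid6: using algorithm {best} gen() {gen_mbps[best]} MB/s")
    # -> raid6: using algorithm avx2x2 gen() 53840 MB/s
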
Feb 13 03:49:44.285412 kernel: scsi host0: ahci
Feb 13 03:49:44.285512 kernel: usbcore: registered new interface driver usbfs
Feb 13 03:49:44.286441 kernel: scsi host1: ahci
Feb 13 03:49:44.286472 kernel: pps pps0: new PPS source ptp0
Feb 13 03:49:44.286548 kernel: igb 0000:03:00.0: added PHC on eth0
Feb 13 03:49:44.286615 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 03:49:44.286676 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bb:81:6a
Feb 13 03:49:44.286735 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Feb 13 03:49:44.286794 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 03:49:44.324985 kernel: usbcore: registered new interface driver hub
Feb 13 03:49:44.328442 kernel: scsi host2: ahci
Feb 13 03:49:44.328469 kernel: pps pps1: new PPS source ptp1
Feb 13 03:49:44.328544 kernel: igb 0000:04:00.0: added PHC on eth1
Feb 13 03:49:44.328613 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 03:49:44.328674 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bb:81:6b
Feb 13 03:49:44.328734 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Feb 13 03:49:44.328792 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 03:49:44.357327 kernel: usbcore: registered new device driver usb
Feb 13 03:49:44.371958 kernel: scsi host3: ahci
Feb 13 03:49:44.541964 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Feb 13 03:49:44.542057 kernel: scsi host4: ahci
Feb 13 03:49:44.640247 kernel: scsi host5: ahci
Feb 13 03:49:44.640338 kernel: scsi host6: ahci
Feb 13 03:49:44.650230 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Feb 13 03:49:44.665081 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Feb 13 03:49:44.679899 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Feb 13 03:49:44.694697 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Feb 13 03:49:44.709470 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Feb 13 03:49:44.724194 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Feb 13 03:49:44.738957 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Feb 13 03:49:44.786146 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014
Feb 13 03:49:44.786221 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 03:49:44.805489 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Feb 13 03:49:45.073441 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 03:49:45.073463 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Feb 13 03:49:45.073533 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 03:49:45.104445 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 03:49:45.118470 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 13 03:49:45.118547 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 03:49:45.149441 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 03:49:45.163442 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Feb 13 03:49:45.176471 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 13 03:49:45.190441 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
Feb 13 03:49:45.205492 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
Feb 13 03:49:45.250022 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 03:49:45.250058 kernel: ata2.00: Features: NCQ-prio
Feb 13 03:49:45.250066 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 03:49:45.276098 kernel: ata1.00: Features: NCQ-prio
Feb 13 03:49:45.293512 kernel: ata2.00: configured for UDMA/133
Feb 13 03:49:45.293552 kernel: ata1.00: configured for UDMA/133
Feb 13 03:49:45.305473 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5
Feb 13 03:49:45.336441 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5
Feb 13 03:49:45.351444 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 03:49:45.351528 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Feb 13 03:49:45.367439 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
Feb 13 03:49:45.384441 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Feb 13 03:49:45.416050 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014
Feb 13 03:49:45.416205 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 03:49:45.416268 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 03:49:45.447344 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Feb 13 03:49:45.478742 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Feb 13 03:49:45.490970 kernel: hub 1-0:1.0: USB hub found
Feb 13 03:49:45.491053 kernel: hub 1-0:1.0: 16 ports detected
Feb 13 03:49:45.528233 kernel: hub 2-0:1.0: USB hub found
Feb 13 03:49:45.528338 kernel: hub 2-0:1.0: 10 ports detected
Feb 13 03:49:45.542443 kernel: usb: port power management may be unreliable
Feb 13 03:49:45.542459 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 03:49:45.555050 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 03:49:45.567717 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 03:49:45.567794 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 03:49:45.599295 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Feb 13 03:49:45.599368 kernel: sd 1:0:0:0: [sda] Write Protect is off
Feb 13 03:49:45.599432 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
Feb 13 03:49:45.612651 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Feb 13 03:49:45.625621 kernel: sd 0:0:0:0: [sdb] Write Protect is off
Feb 13 03:49:45.652286 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 03:49:45.652362 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Feb 13 03:49:45.652420 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 03:49:45.701630 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 03:49:45.715019 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 03:49:45.715036 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 03:49:45.728095 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Feb 13 03:49:45.741441 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 03:49:45.749487 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Feb 13 03:49:45.749511 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Feb 13 03:49:45.771649 kernel: GPT:9289727 != 937703087
Feb 13 03:49:45.771664 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 03:49:45.771671 kernel: GPT:9289727 != 937703087
Feb 13 03:49:45.771677 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 03:49:45.771686 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 03:49:45.771692 kernel: port_module: 9 callbacks suppressed
Feb 13 03:49:45.771699 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Feb 13 03:49:45.801022 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 03:49:45.827214 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 13 03:49:45.827285 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
Feb 13 03:49:45.953400 kernel: hub 1-14:1.0: USB hub found
Feb 13 03:49:45.953521 kernel: hub 1-14:1.0: 4 ports detected
Feb 13 03:49:45.960828 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 13 03:49:45.992680 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (527)
Feb 13 03:49:45.977064 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 13 03:49:46.003518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 13 03:49:46.007344 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 13 03:49:46.043383 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 13 03:49:46.090557 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 03:49:46.090571 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 03:49:46.053950 systemd[1]: Starting disk-uuid.service...
Feb 13 03:49:46.105594 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 03:49:46.105605 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
Feb 13 03:49:46.105717 disk-uuid[683]: Primary Header is updated.
Feb 13 03:49:46.105717 disk-uuid[683]: Secondary Entries is updated.
Feb 13 03:49:46.105717 disk-uuid[683]: Secondary Header is updated.
Feb 13 03:49:46.192478 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 03:49:46.192491 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 03:49:46.192500 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 03:49:46.192507 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Feb 13 03:49:46.222456 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Feb 13 03:49:46.258470 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Feb 13 03:49:46.375450 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 03:49:46.406801 kernel: usbcore: registered new interface driver usbhid
Feb 13 03:49:46.406817 kernel: usbhid: USB HID core driver
Feb 13 03:49:46.439547 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Feb 13 03:49:46.555421 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Feb 13 03:49:46.555557 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Feb 13 03:49:46.555567 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Feb 13 03:49:47.162760 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 03:49:47.182044 disk-uuid[684]: The operation has completed successfully.
Feb 13 03:49:47.190513 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 03:49:47.220778 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 03:49:47.316137 kernel: audit: type=1130 audit(1707796187.226:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.316152 kernel: audit: type=1131 audit(1707796187.226:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.220836 systemd[1]: Finished disk-uuid.service.
Feb 13 03:49:47.345529 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 03:49:47.235148 systemd[1]: Starting verity-setup.service...
Feb 13 03:49:47.379182 systemd[1]: Found device dev-mapper-usr.device.
Feb 13 03:49:47.389746 systemd[1]: Mounting sysusr-usr.mount...
Feb 13 03:49:47.402803 systemd[1]: Finished verity-setup.service.
Feb 13 03:49:47.480343 kernel: audit: type=1130 audit(1707796187.416:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.480363 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 13 03:49:47.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.495087 systemd[1]: Mounted sysusr-usr.mount.
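
The GPT warnings above come from a disk image whose backup header was written for a smaller disk: the primary header points at LBA 9289727, but on a 937703088-sector disk the alternate GPT header must sit in the last LBA, 937703087. disk-uuid then rewrites the secondary entries and header to the true end of the disk, which is why the warning disappears on the later partition rescans. The arithmetic, as a small check:

    # Where the alternate GPT header should live on this disk.
    total_sectors = 937703088              # "[sdb] 937703088 512-byte logical blocks"
    expected_alt_lba = total_sectors - 1   # GPT keeps the backup header in the last LBA
    claimed_alt_lba = 9289727              # what the flashed image's header claims
    assert expected_alt_lba == 937703087
    if claimed_alt_lba != expected_alt_lba:
        # exactly the kernel's complaint above:
        print(f"GPT:{claimed_alt_lba} != {expected_alt_lba}")
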
Feb 13 03:49:47.502740 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 13 03:49:47.608712 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 03:49:47.608729 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 03:49:47.608736 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 03:49:47.608743 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 03:49:47.503131 systemd[1]: Starting ignition-setup.service...
Feb 13 03:49:47.523871 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 13 03:49:47.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.617999 systemd[1]: Finished ignition-setup.service.
Feb 13 03:49:47.741423 kernel: audit: type=1130 audit(1707796187.633:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.741440 kernel: audit: type=1130 audit(1707796187.690:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.634810 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 13 03:49:47.748000 audit: BPF prog-id=9 op=LOAD
Feb 13 03:49:47.692092 systemd[1]: Starting ignition-fetch-offline.service...
Feb 13 03:49:47.787515 kernel: audit: type=1334 audit(1707796187.748:24): prog-id=9 op=LOAD
Feb 13 03:49:47.750312 systemd[1]: Starting systemd-networkd.service...
Feb 13 03:49:47.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.818965 ignition[865]: Ignition 2.14.0
Feb 13 03:49:47.862665 kernel: audit: type=1130 audit(1707796187.794:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.786775 systemd-networkd[877]: lo: Link UP
Feb 13 03:49:47.818970 ignition[865]: Stage: fetch-offline
Feb 13 03:49:47.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.786777 systemd-networkd[877]: lo: Gained carrier
Feb 13 03:49:48.016994 kernel: audit: type=1130 audit(1707796187.882:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:48.017007 kernel: audit: type=1130 audit(1707796187.941:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:48.017014 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 13 03:49:47.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.818996 ignition[865]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 03:49:48.052714 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready
Feb 13 03:49:47.787056 systemd-networkd[877]: Enumeration completed
Feb 13 03:49:47.819009 ignition[865]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 03:49:47.787129 systemd[1]: Started systemd-networkd.service.
Feb 13 03:49:48.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.827041 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 03:49:48.098532 iscsid[907]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 13 03:49:48.098532 iscsid[907]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 13 03:49:48.098532 iscsid[907]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 13 03:49:48.098532 iscsid[907]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 13 03:49:48.098532 iscsid[907]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 13 03:49:48.098532 iscsid[907]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 13 03:49:48.098532 iscsid[907]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 13 03:49:48.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:48.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:47.787678 systemd-networkd[877]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 03:49:47.827109 ignition[865]: parsed url from cmdline: ""
Feb 13 03:49:47.795535 systemd[1]: Reached target network.target.
Feb 13 03:49:47.827111 ignition[865]: no config URL provided
Feb 13 03:49:48.297554 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Feb 13 03:49:47.848973 unknown[865]: fetched base config from "system"
Feb 13 03:49:47.827114 ignition[865]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 03:49:47.848976 unknown[865]: fetched user config from "system"
Feb 13 03:49:47.827144 ignition[865]: parsing config with SHA512: c94b21fb3178567a64110fafea8598de69948d792d29fac0fa158c34b0c6a38f679e7e1f370816025d9a47103bfaa4fd7ac1a64bbf35278833315c3cef5052bf
Feb 13 03:49:47.857165 systemd[1]: Starting iscsiuio.service...
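
The Ignition kargs stage that follows retries its metadata GET until the network is actually usable; the early failures are DNS lookups against [::1]:53 before any resolver exists in the initramfs. The gaps between attempts #1 through #6 below (roughly 0.2 s, 0.4 s, 0.8 s, 1.6 s, 3.2 s) suggest a doubling backoff. A minimal sketch of that pattern; the helper name and constants are illustrative assumptions, not Ignition's internals:

    import time
    import urllib.request

    URL = "https://metadata.packet.net/metadata"   # endpoint from the log

    def fetch_with_backoff(url: str, max_attempts: int = 6, base_delay: float = 0.2) -> bytes:
        # Retry with doubling delays, as the attempt timestamps below suggest.
        for attempt in range(1, max_attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()             # "GET result: OK"
            except OSError:                        # DNS refused, link not up yet, ...
                if attempt == max_attempts:
                    raise
                time.sleep(base_delay * 2 ** (attempt - 1))
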
Feb 13 03:49:47.849571 ignition[865]: fetch-offline: fetch-offline passed
Feb 13 03:49:47.869740 systemd[1]: Started iscsiuio.service.
Feb 13 03:49:47.849575 ignition[865]: POST message to Packet Timeline
Feb 13 03:49:47.883733 systemd[1]: Finished ignition-fetch-offline.service.
Feb 13 03:49:47.849580 ignition[865]: POST Status error: resource requires networking
Feb 13 03:49:47.942680 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 03:49:47.849615 ignition[865]: Ignition finished successfully
Feb 13 03:49:47.943122 systemd[1]: Starting ignition-kargs.service...
Feb 13 03:49:48.021591 ignition[896]: Ignition 2.14.0
Feb 13 03:49:48.018465 systemd-networkd[877]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 03:49:48.021595 ignition[896]: Stage: kargs
Feb 13 03:49:48.031009 systemd[1]: Starting iscsid.service...
Feb 13 03:49:48.021653 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 03:49:48.059789 systemd[1]: Started iscsid.service.
Feb 13 03:49:48.021662 ignition[896]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 03:49:48.075212 systemd[1]: Starting dracut-initqueue.service...
Feb 13 03:49:48.022983 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 03:49:48.088790 systemd[1]: Finished dracut-initqueue.service.
Feb 13 03:49:48.024563 ignition[896]: kargs: kargs passed
Feb 13 03:49:48.106617 systemd[1]: Reached target remote-fs-pre.target.
Feb 13 03:49:48.024566 ignition[896]: POST message to Packet Timeline
Feb 13 03:49:48.150625 systemd[1]: Reached target remote-cryptsetup.target.
Feb 13 03:49:48.024576 ignition[896]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 03:49:48.150734 systemd[1]: Reached target remote-fs.target.
Feb 13 03:49:48.028368 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41151->[::1]:53: read: connection refused
Feb 13 03:49:48.186515 systemd[1]: Starting dracut-pre-mount.service...
Feb 13 03:49:48.228661 ignition[896]: GET https://metadata.packet.net/metadata: attempt #2
Feb 13 03:49:48.194926 systemd[1]: Finished dracut-pre-mount.service.
Feb 13 03:49:48.229078 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59973->[::1]:53: read: connection refused
Feb 13 03:49:48.292475 systemd-networkd[877]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 03:49:48.320667 systemd-networkd[877]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 03:49:48.349268 systemd-networkd[877]: enp1s0f1np1: Link UP
Feb 13 03:49:48.349476 systemd-networkd[877]: enp1s0f1np1: Gained carrier
Feb 13 03:49:48.365957 systemd-networkd[877]: enp1s0f0np0: Link UP
Feb 13 03:49:48.366307 systemd-networkd[877]: eno2: Link UP
Feb 13 03:49:48.630145 ignition[896]: GET https://metadata.packet.net/metadata: attempt #3
Feb 13 03:49:48.366667 systemd-networkd[877]: eno1: Link UP
Feb 13 03:49:48.631403 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57728->[::1]:53: read: connection refused
Feb 13 03:49:49.061514 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready
Feb 13 03:49:49.061563 systemd-networkd[877]: enp1s0f0np0: Gained carrier
Feb 13 03:49:49.093666 systemd-networkd[877]: enp1s0f0np0: DHCPv4 address 139.178.90.101/31, gateway 139.178.90.100 acquired from 145.40.83.140
Feb 13 03:49:49.431904 ignition[896]: GET https://metadata.packet.net/metadata: attempt #4
Feb 13 03:49:49.433222 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:32822->[::1]:53: read: connection refused
Feb 13 03:49:49.677965 systemd-networkd[877]: enp1s0f1np1: Gained IPv6LL
Feb 13 03:49:50.253731 systemd-networkd[877]: enp1s0f0np0: Gained IPv6LL
Feb 13 03:49:51.034804 ignition[896]: GET https://metadata.packet.net/metadata: attempt #5
Feb 13 03:49:51.036066 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34304->[::1]:53: read: connection refused
Feb 13 03:49:54.239512 ignition[896]: GET https://metadata.packet.net/metadata: attempt #6
Feb 13 03:49:54.275800 ignition[896]: GET result: OK
Feb 13 03:49:54.502680 ignition[896]: Ignition finished successfully
Feb 13 03:49:54.507423 systemd[1]: Finished ignition-kargs.service.
Feb 13 03:49:54.595155 kernel: kauditd_printk_skb: 3 callbacks suppressed
Feb 13 03:49:54.595173 kernel: audit: type=1130 audit(1707796194.516:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:54.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:54.526434 ignition[925]: Ignition 2.14.0
Feb 13 03:49:54.519669 systemd[1]: Starting ignition-disks.service...
Feb 13 03:49:54.526440 ignition[925]: Stage: disks
Feb 13 03:49:54.526562 ignition[925]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 03:49:54.526571 ignition[925]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 03:49:54.527924 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 03:49:54.529640 ignition[925]: disks: disks passed
Feb 13 03:49:54.529644 ignition[925]: POST message to Packet Timeline
Feb 13 03:49:54.529654 ignition[925]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 03:49:54.556817 ignition[925]: GET result: OK
Feb 13 03:49:54.777135 ignition[925]: Ignition finished successfully
Feb 13 03:49:54.780077 systemd[1]: Finished ignition-disks.service.
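
The DHCPv4 lease above is a /31, the RFC 3021 point-to-point case with no separate network or broadcast address: 139.178.90.100 and .101 are both usable hosts, which is why a .101/31 address with a .100 gateway works. This can be checked with the Python standard library:

    import ipaddress
    # The /31 point-to-point subnet from the DHCPv4 lease above (RFC 3021).
    net = ipaddress.ip_network("139.178.90.101/31", strict=False)
    print(net)                                  # 139.178.90.100/31
    print([str(h) for h in net.hosts()])        # ['139.178.90.100', '139.178.90.101']
    assert ipaddress.ip_address("139.178.90.100") in net   # the gateway is the peer
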
Feb 13 03:49:54.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:54.792952 systemd[1]: Reached target initrd-root-device.target.
Feb 13 03:49:54.880674 kernel: audit: type=1130 audit(1707796194.791:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:54.866606 systemd[1]: Reached target local-fs-pre.target.
Feb 13 03:49:54.866642 systemd[1]: Reached target local-fs.target.
Feb 13 03:49:54.888643 systemd[1]: Reached target sysinit.target.
Feb 13 03:49:54.902596 systemd[1]: Reached target basic.target.
Feb 13 03:49:54.903228 systemd[1]: Starting systemd-fsck-root.service...
Feb 13 03:49:54.928229 systemd-fsck[939]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 13 03:49:54.948872 systemd[1]: Finished systemd-fsck-root.service.
Feb 13 03:49:55.041056 kernel: audit: type=1130 audit(1707796194.956:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.041072 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 13 03:49:54.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:54.958016 systemd[1]: Mounting sysroot.mount...
Feb 13 03:49:55.048057 systemd[1]: Mounted sysroot.mount.
Feb 13 03:49:55.061707 systemd[1]: Reached target initrd-root-fs.target.
Feb 13 03:49:55.070387 systemd[1]: Mounting sysroot-usr.mount...
Feb 13 03:49:55.095475 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 13 03:49:55.105036 systemd[1]: Starting flatcar-static-network.service...
Feb 13 03:49:55.121554 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 03:49:55.121593 systemd[1]: Reached target ignition-diskful.target.
Feb 13 03:49:55.139240 systemd[1]: Mounted sysroot-usr.mount.
Feb 13 03:49:55.164656 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 13 03:49:55.304817 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (952)
Feb 13 03:49:55.304834 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 03:49:55.304844 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 03:49:55.304859 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 03:49:55.304876 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 03:49:55.175810 systemd[1]: Starting initrd-setup-root.service...
Feb 13 03:49:55.368660 kernel: audit: type=1130 audit(1707796195.312:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.368696 coreos-metadata[947]: Feb 13 03:49:55.210 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 03:49:55.368696 coreos-metadata[947]: Feb 13 03:49:55.237 INFO Fetch successful
Feb 13 03:49:55.553259 kernel: audit: type=1130 audit(1707796195.375:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.553270 kernel: audit: type=1130 audit(1707796195.439:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.553277 kernel: audit: type=1131 audit(1707796195.439:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.553333 coreos-metadata[946]: Feb 13 03:49:55.210 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 03:49:55.553333 coreos-metadata[946]: Feb 13 03:49:55.232 INFO Fetch successful
Feb 13 03:49:55.553333 coreos-metadata[946]: Feb 13 03:49:55.250 INFO wrote hostname ci-3510.3.2-a-fff065a016 to /sysroot/etc/hostname
Feb 13 03:49:55.602484 initrd-setup-root[957]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 03:49:55.211032 systemd[1]: Finished initrd-setup-root.service.
Feb 13 03:49:55.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.654653 initrd-setup-root[965]: cut: /sysroot/etc/group: No such file or directory
Feb 13 03:49:55.693517 kernel: audit: type=1130 audit(1707796195.626:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.314804 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 13 03:49:55.701704 initrd-setup-root[973]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 03:49:55.376754 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 13 03:49:55.722796 initrd-setup-root[981]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 03:49:55.376792 systemd[1]: Finished flatcar-static-network.service.
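
For scale, the fsck result above ("ROOT: clean, 602/553520 files, 56013/553472 blocks") means the root filesystem is nearly empty at this point in first boot: about 0.1% of inodes and 10% of blocks in use. The arithmetic:

    # Usage implied by "ROOT: clean, 602/553520 files, 56013/553472 blocks".
    inodes_used, inodes_total = 602, 553520
    blocks_used, blocks_total = 56013, 553472
    print(f"inodes: {inodes_used / inodes_total:.2%}")   # -> inodes: 0.11%
    print(f"blocks: {blocks_used / blocks_total:.2%}")   # -> blocks: 10.12%
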
Feb 13 03:49:55.740664 ignition[1022]: INFO : Ignition 2.14.0
Feb 13 03:49:55.740664 ignition[1022]: INFO : Stage: mount
Feb 13 03:49:55.740664 ignition[1022]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 03:49:55.740664 ignition[1022]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 03:49:55.740664 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 03:49:55.740664 ignition[1022]: INFO : mount: mount passed
Feb 13 03:49:55.740664 ignition[1022]: INFO : POST message to Packet Timeline
Feb 13 03:49:55.740664 ignition[1022]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 03:49:55.740664 ignition[1022]: INFO : GET result: OK
Feb 13 03:49:55.461307 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 03:49:55.562098 systemd[1]: Starting ignition-mount.service...
Feb 13 03:49:55.589082 systemd[1]: Starting sysroot-boot.service...
Feb 13 03:49:55.610404 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 13 03:49:55.610444 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 13 03:49:55.611059 systemd[1]: Finished sysroot-boot.service.
Feb 13 03:49:55.878826 ignition[1022]: INFO : Ignition finished successfully
Feb 13 03:49:55.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.946483 kernel: audit: type=1130 audit(1707796195.886:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:49:55.873383 systemd[1]: Finished ignition-mount.service.
Feb 13 03:49:55.889551 systemd[1]: Starting ignition-files.service...
Feb 13 03:49:56.005538 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1036)
Feb 13 03:49:56.005552 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 03:49:55.955220 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 13 03:49:56.054525 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 03:49:56.054536 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 03:49:56.054543 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 03:49:56.090058 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 03:49:56.107615 ignition[1056]: INFO : Ignition 2.14.0
Feb 13 03:49:56.107615 ignition[1056]: INFO : Stage: files
Feb 13 03:49:56.107615 ignition[1056]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 03:49:56.107615 ignition[1056]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 03:49:56.107615 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 03:49:56.107615 ignition[1056]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 03:49:56.107615 ignition[1056]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 03:49:56.107615 ignition[1056]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 03:49:56.109648 unknown[1056]: wrote ssh authorized keys file for user: core
Feb 13 03:49:56.204597 ignition[1056]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 03:49:56.204597 ignition[1056]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 03:49:56.204597 ignition[1056]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 03:49:56.204597 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 03:49:56.204597 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 03:49:56.204597 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 03:49:56.204597 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 03:49:56.204597 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 13 03:49:56.204597 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 13 03:49:56.751295 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 03:49:56.833026 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 13 03:49:56.857691 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 13 03:49:56.857691 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 13 03:49:56.857691 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 13 03:49:57.242890 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 03:49:57.335079 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 13 03:49:57.358659 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 13 03:49:57.358659 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 03:49:57.358659 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 13 03:49:57.564809 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 03:50:03.495836 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 13 03:50:03.521786 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 03:50:03.521786 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 13 03:50:03.521786 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 13 03:50:03.678082 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 13 03:50:18.687147 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 13 03:50:18.712842 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 13 03:50:18.712842 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 13 03:50:18.712842 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 13 03:50:18.847666 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 13 03:50:24.855832 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 13 03:50:24.881797 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 13 03:50:24.881797 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 03:50:24.881797 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 03:50:24.881797 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 03:50:24.881797 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 03:50:25.362509 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 03:50:25.414797 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 03:50:25.430740 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 13 03:50:25.654747 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1065)
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem577091887"
Feb 13 03:50:25.654847 ignition[1056]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem577091887": device or resource busy
Feb 13 03:50:25.654847 ignition[1056]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem577091887", trying btrfs: device or resource busy
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem577091887"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem577091887"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem577091887"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem577091887"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: op(15): [started] processing unit "packet-phone-home.service"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: op(15): [finished] processing unit "packet-phone-home.service"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 13 03:50:25.654847 ignition[1056]: INFO : files: op(18): [started] processing unit "prepare-critools.service"
Feb 13 03:50:26.282606 kernel: audit: type=1130 audit(1707796225.756:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.282649 kernel: audit: type=1130 audit(1707796225.886:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.282673 kernel: audit: type=1130 audit(1707796225.952:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.282694 kernel: audit: type=1131 audit(1707796225.952:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.282721 kernel: audit: type=1130 audit(1707796226.128:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.282743 kernel: audit: type=1131 audit(1707796226.128:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:25.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:25.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:25.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:25.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(18): [finished] processing unit "prepare-critools.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1a): [started] processing unit "prepare-helm.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1d): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1e): [started] setting preset to enabled for "packet-phone-home.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1e): [finished] setting preset to enabled for "packet-phone-home.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(20): [started] setting preset to enabled for "prepare-critools.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-critools.service"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 03:50:26.283052 ignition[1056]: INFO : files: files passed
Feb 13 03:50:26.875065 kernel: audit: type=1130 audit(1707796226.314:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.875093 kernel: audit: type=1131 audit(1707796226.486:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.875104 kernel: audit: type=1131 audit(1707796226.803:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:25.737507 systemd[1]: Finished ignition-files.service.
Feb 13 03:50:26.890656 ignition[1056]: INFO : POST message to Packet Timeline
Feb 13 03:50:26.890656 ignition[1056]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 03:50:26.890656 ignition[1056]: INFO : GET result: OK
Feb 13 03:50:26.890656 ignition[1056]: INFO : Ignition finished successfully
Feb 13 03:50:27.006662 kernel: audit: type=1131 audit(1707796226.897:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:25.763939 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 13 03:50:27.024704 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 03:50:25.825707 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 13 03:50:27.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:25.826019 systemd[1]: Starting ignition-quench.service...
Feb 13 03:50:27.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:25.864748 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 13 03:50:27.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 13 03:50:25.887900 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 03:50:27.120594 iscsid[907]: iscsid shutting down. Feb 13 03:50:25.887995 systemd[1]: Finished ignition-quench.service. Feb 13 03:50:25.953729 systemd[1]: Reached target ignition-complete.target. Feb 13 03:50:27.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:26.076047 systemd[1]: Starting initrd-parse-etc.service... Feb 13 03:50:27.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:27.182692 ignition[1106]: INFO : Ignition 2.14.0 Feb 13 03:50:27.182692 ignition[1106]: INFO : Stage: umount Feb 13 03:50:27.182692 ignition[1106]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 03:50:27.182692 ignition[1106]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 03:50:27.182692 ignition[1106]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 03:50:27.182692 ignition[1106]: INFO : umount: umount passed Feb 13 03:50:27.182692 ignition[1106]: INFO : POST message to Packet Timeline Feb 13 03:50:27.182692 ignition[1106]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 03:50:27.182692 ignition[1106]: INFO : GET result: OK Feb 13 03:50:27.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:27.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:27.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:27.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:27.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:26.094333 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 03:50:27.350905 ignition[1106]: INFO : Ignition finished successfully Feb 13 03:50:27.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:26.094378 systemd[1]: Finished initrd-parse-etc.service. Feb 13 03:50:27.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:26.129658 systemd[1]: Reached target initrd-fs.target. 
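[The file writes and "setting preset to enabled" operations above are driven by the user-provided Ignition config. A minimal sketch of a config that would produce similar operations, assuming Ignition spec-3.x syntax — the paths mirror the log, while the data URL and unit contents are placeholders, not the config actually used on this host:

    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          {
            "path": "/home/core/nginx.yaml",
            "mode": 420,
            "contents": { "source": "data:,placeholder" }
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "prepare-helm.service",
            "enabled": true,
            "contents": "[Unit]\nDescription=placeholder\n[Service]\nExecStart=/usr/bin/true\n[Install]\nWantedBy=multi-user.target\n"
          }
        ]
      }
    }

Setting "enabled": true on a unit is what surfaces in the log as a preset change: Ignition records the unit as enabled in a systemd preset file under /sysroot, and the enablement is applied when the real root boots.]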
Feb 13 03:50:27.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:27.390000 audit: BPF prog-id=6 op=UNLOAD
Feb 13 03:50:26.251616 systemd[1]: Reached target initrd.target.
Feb 13 03:50:26.271727 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 13 03:50:27.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.272937 systemd[1]: Starting dracut-pre-pivot.service...
Feb 13 03:50:27.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.292231 systemd[1]: Finished dracut-pre-pivot.service.
Feb 13 03:50:27.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.317940 systemd[1]: Starting initrd-cleanup.service...
Feb 13 03:50:27.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.393639 systemd[1]: Stopped target nss-lookup.target.
Feb 13 03:50:26.419689 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 13 03:50:27.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.444793 systemd[1]: Stopped target timers.target.
Feb 13 03:50:27.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.468865 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 03:50:27.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.469088 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 13 03:50:26.488320 systemd[1]: Stopped target initrd.target.
Feb 13 03:50:27.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.562727 systemd[1]: Stopped target basic.target.
Feb 13 03:50:26.575822 systemd[1]: Stopped target ignition-complete.target.
Feb 13 03:50:26.596789 systemd[1]: Stopped target ignition-diskful.target.
Feb 13 03:50:26.626758 systemd[1]: Stopped target initrd-root-device.target.
Feb 13 03:50:27.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.648913 systemd[1]: Stopped target remote-fs.target.
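[The "GET https://metadata.packet.net/metadata" and "POST message to Packet Timeline" entries above are Ignition's Packet (Equinix Metal) provider fetching instance metadata and phoning progress events home. An abridged, illustrative sketch of the kind of JSON that endpoint returns — the field set is assumed from the Packet metadata service, and all values are invented:

    {
      "id": "00000000-0000-0000-0000-000000000000",
      "hostname": "example-node",
      "plan": "c3.small.x86",
      "phone_home_url": "https://example.invalid/phone-home"
    }
]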
Feb 13 03:50:27.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.670024 systemd[1]: Stopped target remote-fs-pre.target.
Feb 13 03:50:27.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.693183 systemd[1]: Stopped target sysinit.target.
Feb 13 03:50:26.715044 systemd[1]: Stopped target local-fs.target.
Feb 13 03:50:27.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.736033 systemd[1]: Stopped target local-fs-pre.target.
Feb 13 03:50:27.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.757012 systemd[1]: Stopped target swap.target.
Feb 13 03:50:27.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.779907 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 03:50:27.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:27.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:26.780267 systemd[1]: Stopped dracut-pre-mount.service.
Feb 13 03:50:26.805251 systemd[1]: Stopped target cryptsetup.target.
Feb 13 03:50:26.883713 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 03:50:26.883797 systemd[1]: Stopped dracut-initqueue.service.
Feb 13 03:50:26.898796 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 03:50:26.898875 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 13 03:50:26.967814 systemd[1]: Stopped target paths.target.
Feb 13 03:50:26.977751 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 03:50:26.981689 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 13 03:50:26.990832 systemd[1]: Stopped target slices.target.
Feb 13 03:50:27.014705 systemd[1]: Stopped target sockets.target.
Feb 13 03:50:27.032775 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 03:50:27.032917 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 13 03:50:27.057044 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 03:50:27.057333 systemd[1]: Stopped ignition-files.service.
Feb 13 03:50:27.083241 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 03:50:27.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:27.083620 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 13 03:50:27.100179 systemd[1]: Stopping ignition-mount.service...
Feb 13 03:50:27.113154 systemd[1]: Stopping iscsid.service...
Feb 13 03:50:27.128120 systemd[1]: Stopping sysroot-boot.service...
Feb 13 03:50:27.135538 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 03:50:27.135629 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 13 03:50:27.158722 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 03:50:27.158790 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 13 03:50:27.176812 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 03:50:27.177421 systemd[1]: iscsid.service: Deactivated successfully.
Feb 13 03:50:27.177504 systemd[1]: Stopped iscsid.service.
Feb 13 03:50:27.190019 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 03:50:27.190100 systemd[1]: Stopped sysroot-boot.service.
Feb 13 03:50:27.197206 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 03:50:27.197310 systemd[1]: Closed iscsid.socket.
Feb 13 03:50:27.208955 systemd[1]: Stopping iscsiuio.service...
Feb 13 03:50:27.233063 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 13 03:50:27.233296 systemd[1]: Stopped iscsiuio.service.
Feb 13 03:50:27.260338 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 03:50:27.260581 systemd[1]: Finished initrd-cleanup.service.
Feb 13 03:50:27.281721 systemd[1]: Stopped target network.target.
Feb 13 03:50:27.296726 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 03:50:27.296825 systemd[1]: Closed iscsiuio.socket.
Feb 13 03:50:27.313044 systemd[1]: Stopping systemd-networkd.service...
Feb 13 03:50:27.319595 systemd-networkd[877]: enp1s0f1np1: DHCPv6 lease lost
Feb 13 03:50:27.323021 systemd[1]: Stopping systemd-resolved.service...
Feb 13 03:50:27.334663 systemd-networkd[877]: enp1s0f0np0: DHCPv6 lease lost
Feb 13 03:50:27.343291 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 03:50:27.343550 systemd[1]: Stopped systemd-resolved.service.
Feb 13 03:50:27.972000 audit: BPF prog-id=9 op=UNLOAD
Feb 13 03:50:27.360786 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 03:50:27.361108 systemd[1]: Stopped systemd-networkd.service.
Feb 13 03:50:27.376625 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 03:50:27.376670 systemd[1]: Stopped ignition-mount.service.
Feb 13 03:50:27.391765 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 03:50:27.391785 systemd[1]: Closed systemd-networkd.socket.
Feb 13 03:50:27.406605 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 03:50:27.406636 systemd[1]: Stopped ignition-disks.service.
Feb 13 03:50:27.422628 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 03:50:27.422675 systemd[1]: Stopped ignition-kargs.service.
Feb 13 03:50:27.441885 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 03:50:27.441999 systemd[1]: Stopped ignition-setup.service.
Feb 13 03:50:27.458807 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 03:50:27.458949 systemd[1]: Stopped initrd-setup-root.service.
Feb 13 03:50:27.475531 systemd[1]: Stopping network-cleanup.service...
Feb 13 03:50:27.974454 systemd-journald[269]: Received SIGTERM from PID 1 (n/a).
Feb 13 03:50:27.487647 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 03:50:27.487794 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 13 03:50:27.502812 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 03:50:27.502939 systemd[1]: Stopped systemd-sysctl.service.
Feb 13 03:50:27.519107 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 03:50:27.519245 systemd[1]: Stopped systemd-modules-load.service.
Feb 13 03:50:27.534073 systemd[1]: Stopping systemd-udevd.service...
Feb 13 03:50:27.552363 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 03:50:27.553738 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 03:50:27.554049 systemd[1]: Stopped systemd-udevd.service.
Feb 13 03:50:27.566187 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 03:50:27.566311 systemd[1]: Closed systemd-udevd-control.socket.
Feb 13 03:50:27.578881 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 03:50:27.578983 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 13 03:50:27.593743 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 03:50:27.593893 systemd[1]: Stopped dracut-pre-udev.service.
Feb 13 03:50:27.616636 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 03:50:27.616665 systemd[1]: Stopped dracut-cmdline.service.
Feb 13 03:50:27.633688 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 03:50:27.633741 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 13 03:50:27.649654 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 13 03:50:27.665514 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 03:50:27.665545 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 13 03:50:27.681596 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 03:50:27.681626 systemd[1]: Stopped kmod-static-nodes.service.
Feb 13 03:50:27.697566 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 03:50:27.697613 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 13 03:50:27.713988 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 03:50:27.714873 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 03:50:27.715019 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 13 03:50:27.852543 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 03:50:27.852767 systemd[1]: Stopped network-cleanup.service.
Feb 13 03:50:27.863986 systemd[1]: Reached target initrd-switch-root.target.
Feb 13 03:50:27.884244 systemd[1]: Starting initrd-switch-root.service...
Feb 13 03:50:27.924269 systemd[1]: Switching root.
Feb 13 03:50:27.975177 systemd-journald[269]: Journal stopped
Feb 13 03:50:31.979534 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 13 03:50:31.979547 kernel: SELinux: Class anon_inode not defined in policy.
Feb 13 03:50:31.979555 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 13 03:50:31.979561 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 03:50:31.979565 kernel: SELinux: policy capability open_perms=1
Feb 13 03:50:31.979570 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 03:50:31.979576 kernel: SELinux: policy capability always_check_network=0
Feb 13 03:50:31.979581 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 03:50:31.979586 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 03:50:31.979592 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 03:50:31.979598 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 03:50:31.979603 systemd[1]: Successfully loaded SELinux policy in 320.129ms.
Feb 13 03:50:31.979610 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.802ms.
Feb 13 03:50:31.979616 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 13 03:50:31.979624 systemd[1]: Detected architecture x86-64.
Feb 13 03:50:31.979629 systemd[1]: Detected first boot.
Feb 13 03:50:31.979635 systemd[1]: Hostname set to .
Feb 13 03:50:31.979641 systemd[1]: Initializing machine ID from random generator.
Feb 13 03:50:31.979647 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 13 03:50:31.979652 systemd[1]: Populated /etc with preset unit settings.
Feb 13 03:50:31.979658 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 13 03:50:31.979665 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 13 03:50:31.979672 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 03:50:31.979678 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 03:50:31.979684 systemd[1]: Stopped initrd-switch-root.service.
Feb 13 03:50:31.979689 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 03:50:31.979696 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 13 03:50:31.979703 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 13 03:50:31.979711 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 13 03:50:31.979717 systemd[1]: Created slice system-getty.slice.
Feb 13 03:50:31.979724 systemd[1]: Created slice system-modprobe.slice.
Feb 13 03:50:31.979730 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 13 03:50:31.979737 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 13 03:50:31.979744 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 13 03:50:31.979750 systemd[1]: Created slice user.slice.
Feb 13 03:50:31.979757 systemd[1]: Started systemd-ask-password-console.path.
Feb 13 03:50:31.979764 systemd[1]: Started systemd-ask-password-wall.path.
Feb 13 03:50:31.979770 systemd[1]: Set up automount boot.automount.
Feb 13 03:50:31.979776 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 13 03:50:31.979782 systemd[1]: Stopped target initrd-switch-root.target.
Feb 13 03:50:31.979789 systemd[1]: Stopped target initrd-fs.target.
Feb 13 03:50:31.979795 systemd[1]: Stopped target initrd-root-fs.target.
Feb 13 03:50:31.979801 systemd[1]: Reached target integritysetup.target.
Feb 13 03:50:31.979807 systemd[1]: Reached target remote-cryptsetup.target.
Feb 13 03:50:31.979814 systemd[1]: Reached target remote-fs.target.
Feb 13 03:50:31.979821 systemd[1]: Reached target slices.target.
Feb 13 03:50:31.979827 systemd[1]: Reached target swap.target.
Feb 13 03:50:31.979833 systemd[1]: Reached target torcx.target.
Feb 13 03:50:31.979839 systemd[1]: Reached target veritysetup.target.
Feb 13 03:50:31.979845 systemd[1]: Listening on systemd-coredump.socket.
Feb 13 03:50:31.979851 systemd[1]: Listening on systemd-initctl.socket.
Feb 13 03:50:31.979857 systemd[1]: Listening on systemd-networkd.socket.
Feb 13 03:50:31.979864 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 13 03:50:31.979870 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 13 03:50:31.979877 systemd[1]: Listening on systemd-userdbd.socket.
Feb 13 03:50:31.979883 systemd[1]: Mounting dev-hugepages.mount...
Feb 13 03:50:31.979889 systemd[1]: Mounting dev-mqueue.mount...
Feb 13 03:50:31.979895 systemd[1]: Mounting media.mount...
Feb 13 03:50:31.979902 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 03:50:31.979908 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 13 03:50:31.979915 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 13 03:50:31.979921 systemd[1]: Mounting tmp.mount...
Feb 13 03:50:31.979927 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 13 03:50:31.979933 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 13 03:50:31.979939 systemd[1]: Starting kmod-static-nodes.service...
Feb 13 03:50:31.979945 systemd[1]: Starting modprobe@configfs.service...
Feb 13 03:50:31.979952 systemd[1]: Starting modprobe@dm_mod.service...
Feb 13 03:50:31.979959 systemd[1]: Starting modprobe@drm.service...
Feb 13 03:50:31.979965 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 13 03:50:31.979971 systemd[1]: Starting modprobe@fuse.service...
Feb 13 03:50:31.979977 kernel: fuse: init (API version 7.34)
Feb 13 03:50:31.979983 systemd[1]: Starting modprobe@loop.service...
Feb 13 03:50:31.979989 kernel: loop: module loaded
Feb 13 03:50:31.979995 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 03:50:31.980002 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 03:50:31.980009 systemd[1]: Stopped systemd-fsck-root.service.
Feb 13 03:50:31.980015 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 03:50:31.980021 kernel: kauditd_printk_skb: 66 callbacks suppressed
Feb 13 03:50:31.980027 kernel: audit: type=1131 audit(1707796231.620:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:31.980033 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 03:50:31.980039 kernel: audit: type=1131 audit(1707796231.708:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:31.980045 systemd[1]: Stopped systemd-journald.service.
Feb 13 03:50:31.980051 kernel: audit: type=1130 audit(1707796231.772:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:31.980058 kernel: audit: type=1131 audit(1707796231.772:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:31.980063 kernel: audit: type=1334 audit(1707796231.858:113): prog-id=18 op=LOAD
Feb 13 03:50:31.980069 kernel: audit: type=1334 audit(1707796231.876:114): prog-id=19 op=LOAD
Feb 13 03:50:31.980074 kernel: audit: type=1334 audit(1707796231.894:115): prog-id=20 op=LOAD
Feb 13 03:50:31.980080 kernel: audit: type=1334 audit(1707796231.912:116): prog-id=16 op=UNLOAD
Feb 13 03:50:31.980086 systemd[1]: Starting systemd-journald.service...
Feb 13 03:50:31.980092 kernel: audit: type=1334 audit(1707796231.912:117): prog-id=17 op=UNLOAD
Feb 13 03:50:31.980097 systemd[1]: Starting systemd-modules-load.service...
Feb 13 03:50:31.980105 kernel: audit: type=1305 audit(1707796231.976:118): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 13 03:50:31.980112 systemd-journald[1257]: Journal started
Feb 13 03:50:31.980135 systemd-journald[1257]: Runtime Journal (/run/log/journal/239f7ffb628d4191a23854941cdcd165) is 8.0M, max 640.1M, 632.1M free.
Feb 13 03:50:28.443000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 03:50:28.724000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 13 03:50:28.727000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 13 03:50:28.727000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 13 03:50:28.727000 audit: BPF prog-id=10 op=LOAD
Feb 13 03:50:28.727000 audit: BPF prog-id=10 op=UNLOAD
Feb 13 03:50:28.727000 audit: BPF prog-id=11 op=LOAD
Feb 13 03:50:28.727000 audit: BPF prog-id=11 op=UNLOAD
Feb 13 03:50:28.791000 audit[1147]: AVC avc: denied { associate } for pid=1147 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 13 03:50:28.791000 audit[1147]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001d989c a1=c00015adf8 a2=c000163ac0 a3=32 items=0 ppid=1130 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 03:50:28.791000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 13 03:50:28.816000 audit[1147]: AVC avc: denied { associate } for pid=1147 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 13 03:50:28.816000 audit[1147]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d9975 a2=1ed a3=0 items=2 ppid=1130 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 03:50:28.816000 audit: CWD cwd="/"
Feb 13 03:50:28.816000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:28.816000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:28.816000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 13 03:50:30.354000 audit: BPF prog-id=12 op=LOAD
Feb 13 03:50:30.354000 audit: BPF prog-id=3 op=UNLOAD
Feb 13 03:50:30.355000 audit: BPF prog-id=13 op=LOAD
Feb 13 03:50:30.355000 audit: BPF prog-id=14 op=LOAD
Feb 13 03:50:30.355000 audit: BPF prog-id=4 op=UNLOAD
Feb 13 03:50:30.355000 audit: BPF prog-id=5 op=UNLOAD
Feb 13 03:50:30.355000 audit: BPF prog-id=15 op=LOAD
Feb 13 03:50:30.355000 audit: BPF prog-id=12 op=UNLOAD
Feb 13 03:50:30.356000 audit: BPF prog-id=16 op=LOAD
Feb 13 03:50:30.356000 audit: BPF prog-id=17 op=LOAD
Feb 13 03:50:30.356000 audit: BPF prog-id=13 op=UNLOAD
Feb 13 03:50:30.356000 audit: BPF prog-id=14 op=UNLOAD
Feb 13 03:50:30.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:30.403000 audit: BPF prog-id=15 op=UNLOAD
Feb 13 03:50:30.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:30.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:31.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:31.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:31.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:31.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:31.858000 audit: BPF prog-id=18 op=LOAD
Feb 13 03:50:31.876000 audit: BPF prog-id=19 op=LOAD
Feb 13 03:50:31.894000 audit: BPF prog-id=20 op=LOAD
Feb 13 03:50:31.912000 audit: BPF prog-id=16 op=UNLOAD
Feb 13 03:50:31.912000 audit: BPF prog-id=17 op=UNLOAD
Feb 13 03:50:31.976000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 13 03:50:28.790125 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 13 03:50:30.354872 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 03:50:28.790520 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 13 03:50:30.357820 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 03:50:28.790534 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 13 03:50:28.790554 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 13 03:50:28.790561 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 13 03:50:28.790579 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 13 03:50:28.790587 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 13 03:50:28.790714 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 13 03:50:28.790740 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 13 03:50:28.790750 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 13 03:50:28.791180 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 13 03:50:28.791202 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 13 03:50:28.791215 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 13 03:50:28.791224 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 13 03:50:28.791235 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 13 03:50:28.791243 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 13 03:50:29.989545 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:29Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 03:50:29.989689 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:29Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 03:50:29.989745 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:29Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 03:50:29.989839 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:29Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 03:50:29.989870 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:29Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 13 03:50:29.989906 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-13T03:50:29Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 13 03:50:31.976000 audit[1257]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff5025e530 a2=4000 a3=7fff5025e5cc items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 03:50:31.976000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 13 03:50:32.058644 systemd[1]: Starting systemd-network-generator.service...
Feb 13 03:50:32.085477 systemd[1]: Starting systemd-remount-fs.service...
Feb 13 03:50:32.112490 systemd[1]: Starting systemd-udev-trigger.service...
Feb 13 03:50:32.155230 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 03:50:32.155253 systemd[1]: Stopped verity-setup.service.
Feb 13 03:50:32.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.200471 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 03:50:32.220626 systemd[1]: Started systemd-journald.service.
Feb 13 03:50:32.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.228974 systemd[1]: Mounted dev-hugepages.mount.
Feb 13 03:50:32.236796 systemd[1]: Mounted dev-mqueue.mount.
Feb 13 03:50:32.243704 systemd[1]: Mounted media.mount.
Feb 13 03:50:32.250710 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 13 03:50:32.259686 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 13 03:50:32.268683 systemd[1]: Mounted tmp.mount.
Feb 13 03:50:32.275764 systemd[1]: Finished flatcar-tmpfiles.service.
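[The torcx-generator entries above show the vendor profile being resolved and the docker archive from /usr/share/torcx/store being unpacked into /run/torcx, with its binaries and units propagated into the runtime. A torcx profile is a small JSON manifest; a sketch matching what the log implies for the vendor profile, with the manifest shape assumed from torcx's profile-manifest format:

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }

The "reference" selects which archive in the store (here docker:com.coreos.cl.torcx.tgz, per the cache entries above) gets unpacked and propagated.]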
Feb 13 03:50:32.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.284864 systemd[1]: Finished kmod-static-nodes.service.
Feb 13 03:50:32.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.293953 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 03:50:32.294090 systemd[1]: Finished modprobe@configfs.service.
Feb 13 03:50:32.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.303287 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 03:50:32.303606 systemd[1]: Finished modprobe@dm_mod.service.
Feb 13 03:50:32.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.312272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 03:50:32.312601 systemd[1]: Finished modprobe@drm.service.
Feb 13 03:50:32.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.321268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 03:50:32.321588 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 13 03:50:32.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.330266 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 03:50:32.330586 systemd[1]: Finished modprobe@fuse.service.
Feb 13 03:50:32.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.339317 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 03:50:32.339656 systemd[1]: Finished modprobe@loop.service.
Feb 13 03:50:32.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.348289 systemd[1]: Finished systemd-modules-load.service.
Feb 13 03:50:32.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.357253 systemd[1]: Finished systemd-network-generator.service.
Feb 13 03:50:32.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.366234 systemd[1]: Finished systemd-remount-fs.service.
Feb 13 03:50:32.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.375375 systemd[1]: Finished systemd-udev-trigger.service.
Feb 13 03:50:32.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.384901 systemd[1]: Reached target network-pre.target.
Feb 13 03:50:32.396191 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 13 03:50:32.405118 systemd[1]: Mounting sys-kernel-config.mount...
Feb 13 03:50:32.412645 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 03:50:32.413609 systemd[1]: Starting systemd-hwdb-update.service...
Feb 13 03:50:32.421057 systemd[1]: Starting systemd-journal-flush.service...
Feb 13 03:50:32.424970 systemd-journald[1257]: Time spent on flushing to /var/log/journal/239f7ffb628d4191a23854941cdcd165 is 15.131ms for 1624 entries.
Feb 13 03:50:32.424970 systemd-journald[1257]: System Journal (/var/log/journal/239f7ffb628d4191a23854941cdcd165) is 8.0M, max 195.6M, 187.6M free.
Feb 13 03:50:32.464434 systemd-journald[1257]: Received client request to flush runtime journal.
Feb 13 03:50:32.437581 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 03:50:32.438058 systemd[1]: Starting systemd-random-seed.service...
Feb 13 03:50:32.453571 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 13 03:50:32.454082 systemd[1]: Starting systemd-sysctl.service...
Feb 13 03:50:32.461257 systemd[1]: Starting systemd-sysusers.service...
Feb 13 03:50:32.468027 systemd[1]: Starting systemd-udev-settle.service...
Feb 13 03:50:32.475577 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 13 03:50:32.483620 systemd[1]: Mounted sys-kernel-config.mount.
Feb 13 03:50:32.491651 systemd[1]: Finished systemd-journal-flush.service.
Feb 13 03:50:32.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.499739 systemd[1]: Finished systemd-random-seed.service.
Feb 13 03:50:32.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.507648 systemd[1]: Finished systemd-sysctl.service.
Feb 13 03:50:32.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.515667 systemd[1]: Finished systemd-sysusers.service.
Feb 13 03:50:32.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.524638 systemd[1]: Reached target first-boot-complete.target.
Feb 13 03:50:32.533172 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 13 03:50:32.542484 udevadm[1274]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 03:50:32.553577 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 13 03:50:32.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.717121 systemd[1]: Finished systemd-hwdb-update.service.
Feb 13 03:50:32.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.724000 audit: BPF prog-id=21 op=LOAD
Feb 13 03:50:32.724000 audit: BPF prog-id=22 op=LOAD
Feb 13 03:50:32.724000 audit: BPF prog-id=7 op=UNLOAD
Feb 13 03:50:32.724000 audit: BPF prog-id=8 op=UNLOAD
Feb 13 03:50:32.726698 systemd[1]: Starting systemd-udevd.service...
Feb 13 03:50:32.738634 systemd-udevd[1277]: Using default interface naming scheme 'v252'.
Feb 13 03:50:32.755055 systemd[1]: Started systemd-udevd.service.
Feb 13 03:50:32.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 03:50:32.765512 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped.
Feb 13 03:50:32.764000 audit: BPF prog-id=23 op=LOAD
Feb 13 03:50:32.766856 systemd[1]: Starting systemd-networkd.service...
Feb 13 03:50:32.793466 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Feb 13 03:50:32.793538 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 03:50:32.836064 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1281)
Feb 13 03:50:32.836118 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 03:50:32.855000 audit: BPF prog-id=24 op=LOAD
Feb 13 03:50:32.856000 audit: BPF prog-id=25 op=LOAD
Feb 13 03:50:32.856000 audit: BPF prog-id=26 op=LOAD
Feb 13 03:50:32.857990 systemd[1]: Starting systemd-userdbd.service...
Feb 13 03:50:32.877479 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 03:50:32.877503 kernel: ACPI: button: Power Button [PWRF]
Feb 13 03:50:32.799000 audit[1345]: AVC avc: denied { confidentiality } for pid=1345 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 13 03:50:32.799000 audit[1345]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c13742cc90 a1=4d8bc a2=7f0c43bb2bc5 a3=5 items=42 ppid=1277 pid=1345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 03:50:32.799000 audit: CWD cwd="/"
Feb 13 03:50:32.799000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=1 name=(null) inode=12523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=2 name=(null) inode=12523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=3 name=(null) inode=12524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=4 name=(null) inode=12523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=5 name=(null) inode=12525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=6 name=(null) inode=12523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=7 name=(null) inode=12526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=8 name=(null) inode=12526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=9 name=(null) inode=12527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=10 name=(null) inode=12526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=11 name=(null) inode=12528 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=12 name=(null) inode=12526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=13 name=(null) inode=12529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=14 name=(null) inode=12526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=15 name=(null) inode=12530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=16 name=(null) inode=12526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=17 name=(null) inode=12531 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=18 name=(null) inode=12523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=19 name=(null) inode=12532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=20 name=(null) inode=12532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=21 name=(null) inode=12533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=22 name=(null) inode=12532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=23 name=(null) inode=12534 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=24 name=(null) inode=12532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=25 name=(null) inode=12535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=26 name=(null) inode=12532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=27 name=(null) inode=12536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=28 name=(null) inode=12532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=29 name=(null) inode=12537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=30 name=(null) inode=12523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=31 name=(null) inode=12538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=32 name=(null) inode=12538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=33 name=(null) inode=12539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=34 name=(null) inode=12538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=35 name=(null) inode=12540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=36 name=(null) inode=12538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=37 name=(null) inode=12541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=38 name=(null) inode=12538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=39 name=(null) inode=12542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=40 name=(null) inode=12538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PATH item=41 name=(null) inode=12543 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 03:50:32.799000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 13 03:50:32.906444 kernel: IPMI message
handler: version 39.2 Feb 13 03:50:32.906484 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 03:50:32.910633 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 03:50:32.918023 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 03:50:32.918113 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 03:50:32.937078 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 03:50:32.940453 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI) Feb 13 03:50:33.029934 systemd[1]: Started systemd-userdbd.service. Feb 13 03:50:33.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:33.050446 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 03:50:33.050481 kernel: ipmi device interface Feb 13 03:50:33.123446 kernel: ipmi_si: IPMI System Interface driver Feb 13 03:50:33.123511 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 03:50:33.123691 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 03:50:33.163125 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 03:50:33.182646 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 03:50:33.183014 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 03:50:33.226448 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 13 03:50:33.226565 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 13 03:50:33.287442 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 03:50:33.287534 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 03:50:33.287547 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 03:50:33.289931 systemd-networkd[1312]: bond0: netdev ready Feb 13 03:50:33.292006 systemd-networkd[1312]: lo: Link UP Feb 13 03:50:33.292009 systemd-networkd[1312]: lo: Gained carrier Feb 13 03:50:33.292484 systemd-networkd[1312]: Enumeration completed Feb 13 03:50:33.292565 systemd[1]: Started systemd-networkd.service. Feb 13 03:50:33.292756 systemd-networkd[1312]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 03:50:33.304838 systemd-networkd[1312]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:91.network. Feb 13 03:50:33.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:33.372101 kernel: intel_rapl_common: Found RAPL domain package Feb 13 03:50:33.372132 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 03:50:33.372227 kernel: intel_rapl_common: Found RAPL domain core Feb 13 03:50:33.410876 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 13 03:50:33.410966 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 03:50:33.463457 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 03:50:33.485441 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 03:50:33.487754 systemd[1]: Finished systemd-udev-settle.service. 
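The systemd-networkd entries above assemble bond0 from /etc/systemd/network/05-bond0.network before either mlx5 port has carrier (the LACP negotiation warnings show up further down). A minimal sketch of how the resulting bond state could be inspected from userspace, assuming only the standard Linux bonding driver's sysfs layout; none of these paths appear in the log itself:

    from pathlib import Path

    def bond_state(bond: str = "bond0") -> dict:
        # The bonding driver exposes per-bond attributes under
        # /sys/class/net/<bond>/bonding/.
        base = Path("/sys/class/net") / bond / "bonding"
        return {
            "mode": (base / "mode").read_text().strip(),        # e.g. "802.3ad 4"
            "slaves": (base / "slaves").read_text().split(),    # e.g. ["enp1s0f0np0", "enp1s0f1np1"]
            "mii_status": (base / "mii_status").read_text().strip(),  # "up" once carrier is gained
        }

    print(bond_state())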
Feb 13 03:50:33.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:33.496180 systemd[1]: Starting lvm2-activation-early.service... Feb 13 03:50:33.512510 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 03:50:33.537892 systemd[1]: Finished lvm2-activation-early.service. Feb 13 03:50:33.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:33.546558 systemd[1]: Reached target cryptsetup.target. Feb 13 03:50:33.556055 systemd[1]: Starting lvm2-activation.service... Feb 13 03:50:33.558093 lvm[1386]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 03:50:33.594877 systemd[1]: Finished lvm2-activation.service. Feb 13 03:50:33.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:33.603568 systemd[1]: Reached target local-fs-pre.target. Feb 13 03:50:33.612491 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 03:50:33.612505 systemd[1]: Reached target local-fs.target. Feb 13 03:50:33.621476 systemd[1]: Reached target machines.target. Feb 13 03:50:33.631066 systemd[1]: Starting ldconfig.service... Feb 13 03:50:33.638781 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 03:50:33.638803 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 03:50:33.639328 systemd[1]: Starting systemd-boot-update.service... Feb 13 03:50:33.647910 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 03:50:33.658990 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 03:50:33.659098 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 13 03:50:33.659130 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 03:50:33.659669 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 03:50:33.659877 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1388 (bootctl) Feb 13 03:50:33.660527 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 03:50:33.669251 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 03:50:33.675473 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 03:50:33.679866 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 03:50:33.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 03:50:33.683408 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 03:50:34.100424 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 03:50:34.100902 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 03:50:34.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:34.132816 systemd-fsck[1396]: fsck.fat 4.2 (2021-01-31) Feb 13 03:50:34.132816 systemd-fsck[1396]: /dev/sdb1: 789 files, 115339/258078 clusters Feb 13 03:50:34.133665 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 13 03:50:34.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:34.144432 systemd[1]: Mounting boot.mount... Feb 13 03:50:34.156450 systemd[1]: Mounted boot.mount. Feb 13 03:50:34.174941 systemd[1]: Finished systemd-boot-update.service. Feb 13 03:50:34.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:34.206515 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 03:50:34.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 03:50:34.216240 systemd[1]: Starting audit-rules.service... Feb 13 03:50:34.224005 systemd[1]: Starting clean-ca-certificates.service... Feb 13 03:50:34.233075 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 03:50:34.236000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 03:50:34.236000 audit[1417]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1c74d870 a2=420 a3=0 items=0 ppid=1400 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 03:50:34.236000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 03:50:34.237679 augenrules[1417]: No rules Feb 13 03:50:34.243379 systemd[1]: Starting systemd-resolved.service... Feb 13 03:50:34.251338 systemd[1]: Starting systemd-timesyncd.service... Feb 13 03:50:34.259001 systemd[1]: Starting systemd-update-utmp.service... Feb 13 03:50:34.265733 systemd[1]: Finished audit-rules.service. Feb 13 03:50:34.272597 systemd[1]: Finished clean-ca-certificates.service. Feb 13 03:50:34.280608 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 13 03:50:34.292160 systemd[1]: Finished systemd-update-utmp.service. Feb 13 03:50:34.300551 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 03:50:34.317726 systemd[1]: Started systemd-timesyncd.service. 
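The PROCTITLE value in the audit record above is the triggering command line, hex-encoded with NUL bytes separating the argv entries. Decoding it is mechanical; a short sketch using the exact string from the record:

    # Audit PROCTITLE fields are hex-encoded argv arrays joined by NUL bytes.
    hex_title = ("2F7362696E2F617564697463746C002D52002F657463"
                 "2F61756469742F61756469742E72756C6573")
    argv = bytes.fromhex(hex_title).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /sbin/auditctl -R /etc/audit/audit.rules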
Feb 13 03:50:34.319484 systemd-resolved[1422]: Positive Trust Anchors: Feb 13 03:50:34.319489 systemd-resolved[1422]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 03:50:34.319508 systemd-resolved[1422]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 03:50:34.322578 ldconfig[1387]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 03:50:34.323582 systemd-resolved[1422]: Using system hostname 'ci-3510.3.2-a-fff065a016'. Feb 13 03:50:34.325668 systemd[1]: Finished ldconfig.service. Feb 13 03:50:34.333519 systemd[1]: Reached target time-set.target. Feb 13 03:50:34.343148 systemd[1]: Starting systemd-update-done.service... Feb 13 03:50:34.349670 systemd[1]: Finished systemd-update-done.service. Feb 13 03:50:34.476463 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 03:50:34.501506 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 13 03:50:34.503828 systemd-networkd[1312]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:90.network. Feb 13 03:50:34.504332 systemd[1]: Started systemd-resolved.service. Feb 13 03:50:34.512672 systemd[1]: Reached target network.target. Feb 13 03:50:34.521636 systemd[1]: Reached target nss-lookup.target. Feb 13 03:50:34.529530 systemd[1]: Reached target sysinit.target. Feb 13 03:50:34.537565 systemd[1]: Started motdgen.path. Feb 13 03:50:34.553530 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 13 03:50:34.563491 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 03:50:34.572588 systemd[1]: Started logrotate.timer. Feb 13 03:50:34.579589 systemd[1]: Started mdadm.timer. Feb 13 03:50:34.586534 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 13 03:50:34.594527 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 03:50:34.594543 systemd[1]: Reached target paths.target. Feb 13 03:50:34.601627 systemd[1]: Reached target timers.target. Feb 13 03:50:34.608770 systemd[1]: Listening on dbus.socket. Feb 13 03:50:34.616173 systemd[1]: Starting docker.socket... Feb 13 03:50:34.624371 systemd[1]: Listening on sshd.socket. Feb 13 03:50:34.631606 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 03:50:34.631905 systemd[1]: Listening on docker.socket. Feb 13 03:50:34.638687 systemd[1]: Reached target sockets.target. Feb 13 03:50:34.646594 systemd[1]: Reached target basic.target. Feb 13 03:50:34.653665 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 03:50:34.653702 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 03:50:34.654858 systemd[1]: Starting containerd.service... Feb 13 03:50:34.663122 systemd[1]: Starting coreos-metadata-sshkeys@core.service... 
Feb 13 03:50:34.679987 systemd[1]: Starting coreos-metadata.service... Feb 13 03:50:34.692495 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 03:50:34.699043 systemd[1]: Starting dbus.service... Feb 13 03:50:34.702413 coreos-metadata[1430]: Feb 13 03:50:34.702 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 03:50:34.705057 systemd[1]: Starting enable-oem-cloudinit.service... Feb 13 03:50:34.706391 coreos-metadata[1430]: Feb 13 03:50:34.706 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 03:50:34.709752 coreos-metadata[1433]: Feb 13 03:50:34.709 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 03:50:34.710381 coreos-metadata[1433]: Feb 13 03:50:34.710 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 03:50:34.710556 jq[1438]: false Feb 13 03:50:34.712045 systemd[1]: Starting extend-filesystems.service... Feb 13 03:50:34.718307 dbus-daemon[1436]: [system] SELinux support is enabled Feb 13 03:50:34.718554 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 13 03:50:34.719092 systemd[1]: Starting motdgen.service... Feb 13 03:50:34.719844 extend-filesystems[1440]: Found sda Feb 13 03:50:34.719844 extend-filesystems[1440]: Found sdb Feb 13 03:50:34.816384 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 03:50:34.816775 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Feb 13 03:50:34.816815 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 13 03:50:34.816841 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 03:50:34.816874 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Feb 13 03:50:34.726202 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 03:50:34.817099 extend-filesystems[1440]: Found sdb1 Feb 13 03:50:34.817099 extend-filesystems[1440]: Found sdb2 Feb 13 03:50:34.817099 extend-filesystems[1440]: Found sdb3 Feb 13 03:50:34.817099 extend-filesystems[1440]: Found usr Feb 13 03:50:34.817099 extend-filesystems[1440]: Found sdb4 Feb 13 03:50:34.817099 extend-filesystems[1440]: Found sdb6 Feb 13 03:50:34.817099 extend-filesystems[1440]: Found sdb7 Feb 13 03:50:34.817099 extend-filesystems[1440]: Found sdb9 Feb 13 03:50:34.817099 extend-filesystems[1440]: Checking size of /dev/sdb9 Feb 13 03:50:34.817099 extend-filesystems[1440]: Resized partition /dev/sdb9 Feb 13 03:50:35.009206 kernel: bond0: active interface up! Feb 13 03:50:35.009226 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Feb 13 03:50:35.009236 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 03:50:35.009255 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 03:50:35.009272 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 03:50:34.804252 systemd[1]: Starting prepare-critools.service... 
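Both coreos-metadata fetchers fail on attempt #1 above because the bond has no carrier yet, so DNS resolution cannot work; they simply retry until networking is up and succeed on attempt #3 further down. A minimal sketch of that fetch-and-retry pattern, using the metadata URL from the log but an illustrative backoff schedule rather than the agent's real one:

    import time
    import urllib.request

    def fetch_with_retry(url: str, attempts: int = 5, delay: float = 2.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError as err:
                # DNS failures while the link is still down land here.
                print(f"Attempt #{attempt} failed: {err}")
                time.sleep(delay)
                delay *= 2
        raise RuntimeError(f"could not fetch {url}")

    metadata = fetch_with_retry("https://metadata.packet.net/metadata")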
Feb 13 03:50:35.009386 extend-filesystems[1454]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 03:50:35.032042 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 03:50:34.805257 systemd-networkd[1312]: bond0: Link UP Feb 13 03:50:34.805477 systemd-networkd[1312]: enp1s0f1np1: Link UP Feb 13 03:50:34.805625 systemd-networkd[1312]: enp1s0f1np1: Gained carrier Feb 13 03:50:34.806587 systemd-networkd[1312]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:84:90.network. Feb 13 03:50:34.864703 systemd[1]: Starting prepare-helm.service... Feb 13 03:50:35.032407 update_engine[1469]: I0213 03:50:35.017827 1469 main.cc:92] Flatcar Update Engine starting Feb 13 03:50:35.032407 update_engine[1469]: I0213 03:50:35.020882 1469 update_check_scheduler.cc:74] Next update check in 5m26s Feb 13 03:50:34.882044 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 03:50:35.032568 jq[1470]: true Feb 13 03:50:34.902004 systemd[1]: Starting sshd-keygen.service... Feb 13 03:50:34.929716 systemd[1]: Starting systemd-logind.service... Feb 13 03:50:34.947515 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 03:50:34.948039 systemd[1]: Starting tcsd.service... Feb 13 03:50:34.949619 systemd-logind[1467]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 03:50:34.949628 systemd-logind[1467]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 03:50:34.949637 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 03:50:34.949832 systemd-logind[1467]: New seat seat0. Feb 13 03:50:34.974568 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 03:50:34.974997 systemd[1]: Starting update-engine.service... Feb 13 03:50:35.008065 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 13 03:50:35.031789 systemd[1]: Started dbus.service. Feb 13 03:50:35.055118 systemd-networkd[1312]: enp1s0f0np0: Link UP Feb 13 03:50:35.055323 systemd-networkd[1312]: bond0: Gained carrier Feb 13 03:50:35.055432 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:35.055578 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 03:50:35.055443 systemd-networkd[1312]: enp1s0f0np0: Gained carrier Feb 13 03:50:35.094595 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 03:50:35.094620 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Feb 13 03:50:35.097637 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:35.097784 systemd-networkd[1312]: enp1s0f1np1: Link DOWN Feb 13 03:50:35.097786 systemd-networkd[1312]: enp1s0f1np1: Lost carrier Feb 13 03:50:35.103548 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 03:50:35.103638 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 03:50:35.103811 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 03:50:35.103890 systemd[1]: Finished motdgen.service. Feb 13 03:50:35.104653 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. 
Feb 13 03:50:35.104829 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:35.111650 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 03:50:35.111732 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 13 03:50:35.115696 tar[1473]: ./ Feb 13 03:50:35.115696 tar[1473]: ./macvlan Feb 13 03:50:35.122379 jq[1479]: true Feb 13 03:50:35.122796 dbus-daemon[1436]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 03:50:35.123807 tar[1474]: crictl Feb 13 03:50:35.125067 tar[1475]: linux-amd64/helm Feb 13 03:50:35.128944 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 03:50:35.129089 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 03:50:35.130247 systemd[1]: Started update-engine.service. Feb 13 03:50:35.133624 env[1480]: time="2024-02-13T03:50:35.133594339Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 03:50:35.137995 tar[1473]: ./static Feb 13 03:50:35.142174 env[1480]: time="2024-02-13T03:50:35.142157113Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 03:50:35.142520 systemd[1]: Started systemd-logind.service. Feb 13 03:50:35.145113 env[1480]: time="2024-02-13T03:50:35.145101072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 03:50:35.146014 env[1480]: time="2024-02-13T03:50:35.145946118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 03:50:35.146014 env[1480]: time="2024-02-13T03:50:35.145981147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 03:50:35.148025 env[1480]: time="2024-02-13T03:50:35.148009688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 03:50:35.148062 env[1480]: time="2024-02-13T03:50:35.148025109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 03:50:35.148062 env[1480]: time="2024-02-13T03:50:35.148033838Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 03:50:35.148062 env[1480]: time="2024-02-13T03:50:35.148039677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 03:50:35.148131 env[1480]: time="2024-02-13T03:50:35.148091701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 03:50:35.148229 env[1480]: time="2024-02-13T03:50:35.148219799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 03:50:35.148300 env[1480]: time="2024-02-13T03:50:35.148289122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 03:50:35.148325 env[1480]: time="2024-02-13T03:50:35.148301363Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 03:50:35.149873 env[1480]: time="2024-02-13T03:50:35.149862706Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 03:50:35.149873 env[1480]: time="2024-02-13T03:50:35.149871835Z" level=info msg="metadata content store policy set" policy=shared Feb 13 03:50:35.153001 systemd[1]: Started locksmithd.service. Feb 13 03:50:35.159589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 03:50:35.159726 systemd[1]: Reached target system-config.target. Feb 13 03:50:35.164055 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Feb 13 03:50:35.167564 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 03:50:35.167707 systemd[1]: Reached target user-config.target. Feb 13 03:50:35.170223 env[1480]: time="2024-02-13T03:50:35.170207171Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 03:50:35.170288 env[1480]: time="2024-02-13T03:50:35.170230191Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 03:50:35.170288 env[1480]: time="2024-02-13T03:50:35.170238320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 03:50:35.170288 env[1480]: time="2024-02-13T03:50:35.170257398Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 03:50:35.170288 env[1480]: time="2024-02-13T03:50:35.170265875Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 03:50:35.170288 env[1480]: time="2024-02-13T03:50:35.170275557Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 03:50:35.170288 env[1480]: time="2024-02-13T03:50:35.170282561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 03:50:35.170427 env[1480]: time="2024-02-13T03:50:35.170290080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 03:50:35.170427 env[1480]: time="2024-02-13T03:50:35.170298099Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 13 03:50:35.170427 env[1480]: time="2024-02-13T03:50:35.170306043Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 03:50:35.170427 env[1480]: time="2024-02-13T03:50:35.170313041Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 03:50:35.170427 env[1480]: time="2024-02-13T03:50:35.170321093Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 03:50:35.170427 env[1480]: time="2024-02-13T03:50:35.170379438Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 03:50:35.170572 env[1480]: time="2024-02-13T03:50:35.170428703Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 03:50:35.170600 env[1480]: time="2024-02-13T03:50:35.170585138Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 03:50:35.170628 env[1480]: time="2024-02-13T03:50:35.170601101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170628 env[1480]: time="2024-02-13T03:50:35.170610697Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 03:50:35.170675 env[1480]: time="2024-02-13T03:50:35.170641627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170675 env[1480]: time="2024-02-13T03:50:35.170651449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170675 env[1480]: time="2024-02-13T03:50:35.170658709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170675 env[1480]: time="2024-02-13T03:50:35.170664642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170675 env[1480]: time="2024-02-13T03:50:35.170671373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170789 env[1480]: time="2024-02-13T03:50:35.170677484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170789 env[1480]: time="2024-02-13T03:50:35.170683756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170789 env[1480]: time="2024-02-13T03:50:35.170689661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170789 env[1480]: time="2024-02-13T03:50:35.170698065Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 03:50:35.170789 env[1480]: time="2024-02-13T03:50:35.170771010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170789 env[1480]: time="2024-02-13T03:50:35.170784628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170930 env[1480]: time="2024-02-13T03:50:35.170791538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.170930 env[1480]: time="2024-02-13T03:50:35.170797647Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 03:50:35.170930 env[1480]: time="2024-02-13T03:50:35.170805705Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 03:50:35.170930 env[1480]: time="2024-02-13T03:50:35.170812171Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 13 03:50:35.170930 env[1480]: time="2024-02-13T03:50:35.170825445Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 03:50:35.170930 env[1480]: time="2024-02-13T03:50:35.170851128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 03:50:35.171074 env[1480]: time="2024-02-13T03:50:35.170983278Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 03:50:35.171074 env[1480]: time="2024-02-13T03:50:35.171016828Z" level=info msg="Connect containerd service" Feb 13 03:50:35.171074 env[1480]: time="2024-02-13T03:50:35.171033673Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171329322Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171418085Z" level=info msg="Start subscribing containerd event" Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171462435Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171462605Z" level=info msg="Start recovering state" Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171487733Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171510086Z" level=info msg="containerd successfully booted in 0.038270s" Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171511772Z" level=info msg="Start event monitor" Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171524760Z" level=info msg="Start snapshots syncer" Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171532983Z" level=info msg="Start cni network conf syncer for default" Feb 13 03:50:35.174390 env[1480]: time="2024-02-13T03:50:35.171538627Z" level=info msg="Start streaming server" Feb 13 03:50:35.175869 tar[1473]: ./vlan Feb 13 03:50:35.178062 systemd[1]: Started containerd.service. Feb 13 03:50:35.184815 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 13 03:50:35.196350 tar[1473]: ./portmap Feb 13 03:50:35.215790 tar[1473]: ./host-local Feb 13 03:50:35.215821 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 03:50:35.232589 tar[1473]: ./vrf Feb 13 03:50:35.253739 tar[1473]: ./bridge Feb 13 03:50:35.275454 tar[1473]: ./tuning Feb 13 03:50:35.292762 tar[1473]: ./firewall Feb 13 03:50:35.298442 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Feb 13 03:50:35.321598 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 03:50:35.321754 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Feb 13 03:50:35.321777 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Feb 13 03:50:35.321793 extend-filesystems[1454]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Feb 13 03:50:35.321793 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 03:50:35.321793 extend-filesystems[1454]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Feb 13 03:50:35.412537 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Feb 13 03:50:35.412566 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Feb 13 03:50:35.412628 tar[1475]: linux-amd64/LICENSE Feb 13 03:50:35.412628 tar[1475]: linux-amd64/README.md Feb 13 03:50:35.412691 tar[1473]: ./host-device Feb 13 03:50:35.412691 tar[1473]: ./sbr Feb 13 03:50:35.412691 tar[1473]: ./loopback Feb 13 03:50:35.412691 tar[1473]: ./dhcp Feb 13 03:50:35.322172 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 03:50:35.412843 extend-filesystems[1440]: Resized filesystem in /dev/sdb9 Feb 13 03:50:35.322271 systemd[1]: Finished extend-filesystems.service. Feb 13 03:50:35.351214 systemd-networkd[1312]: enp1s0f1np1: Link UP Feb 13 03:50:35.351222 systemd-networkd[1312]: enp1s0f1np1: Gained carrier Feb 13 03:50:35.399649 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:35.399698 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:35.399754 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:35.399853 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:35.414851 systemd[1]: Finished prepare-helm.service. 
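The resize above grows /dev/sdb9 from 553,472 to 116,605,649 blocks at a 4 KiB block size, i.e. from roughly 2.1 GiB to roughly 444.8 GiB. A two-line check of that arithmetic:

    old_blocks, new_blocks, block_size = 553_472, 116_605_649, 4096
    print(old_blocks * block_size / 2**30, new_blocks * block_size / 2**30)
    # -> ~2.11 GiB and ~444.82 GiB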
Feb 13 03:50:35.428850 systemd[1]: Finished prepare-critools.service. Feb 13 03:50:35.442479 tar[1473]: ./ptp Feb 13 03:50:35.463490 tar[1473]: ./ipvlan Feb 13 03:50:35.483829 tar[1473]: ./bandwidth Feb 13 03:50:35.508716 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 03:50:35.598761 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 03:50:35.610424 systemd[1]: Finished sshd-keygen.service. Feb 13 03:50:35.619373 systemd[1]: Starting issuegen.service... Feb 13 03:50:35.627774 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 03:50:35.627846 systemd[1]: Finished issuegen.service. Feb 13 03:50:35.636358 systemd[1]: Starting systemd-user-sessions.service... Feb 13 03:50:35.644759 systemd[1]: Finished systemd-user-sessions.service. Feb 13 03:50:35.653280 systemd[1]: Started getty@tty1.service. Feb 13 03:50:35.661153 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 03:50:35.669654 systemd[1]: Reached target getty.target. Feb 13 03:50:35.706512 coreos-metadata[1430]: Feb 13 03:50:35.706 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 03:50:35.710481 coreos-metadata[1433]: Feb 13 03:50:35.710 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 03:50:36.077546 systemd-networkd[1312]: bond0: Gained IPv6LL Feb 13 03:50:36.077843 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:36.525952 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:36.526012 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:38.636523 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 03:50:40.763207 login[1543]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 13 03:50:40.764291 login[1542]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 03:50:40.794487 systemd-logind[1467]: New session 1 of user core. Feb 13 03:50:40.798698 systemd[1]: Created slice user-500.slice. Feb 13 03:50:40.801977 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 03:50:40.826322 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 03:50:40.830581 systemd[1]: Starting user@500.service... Feb 13 03:50:40.836053 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 03:50:40.913799 systemd[1547]: Queued start job for default target default.target. Feb 13 03:50:40.914022 systemd[1547]: Reached target paths.target. Feb 13 03:50:40.914033 systemd[1547]: Reached target sockets.target. Feb 13 03:50:40.914040 systemd[1547]: Reached target timers.target. Feb 13 03:50:40.914047 systemd[1547]: Reached target basic.target. Feb 13 03:50:40.914065 systemd[1547]: Reached target default.target. Feb 13 03:50:40.914079 systemd[1547]: Startup finished in 68ms. Feb 13 03:50:40.914108 systemd[1]: Started user@500.service. Feb 13 03:50:40.914806 systemd[1]: Started session-1.scope. Feb 13 03:50:41.768898 login[1543]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 03:50:41.771415 systemd-logind[1467]: New session 2 of user core. Feb 13 03:50:41.771929 systemd[1]: Started session-2.scope. 
Feb 13 03:50:41.838410 coreos-metadata[1430]: Feb 13 03:50:41.838 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 03:50:41.838946 coreos-metadata[1433]: Feb 13 03:50:41.838 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 03:50:42.891591 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 13 03:50:42.891762 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 13 03:50:43.495691 systemd[1]: Created slice system-sshd.slice. Feb 13 03:50:43.496309 systemd[1]: Started sshd@0-139.178.90.101:22-139.178.68.195:44998.service. Feb 13 03:50:43.537002 sshd[1568]: Accepted publickey for core from 139.178.68.195 port 44998 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 03:50:43.538072 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 03:50:43.542113 systemd-logind[1467]: New session 3 of user core. Feb 13 03:50:43.543051 systemd[1]: Started session-3.scope. Feb 13 03:50:43.598367 systemd[1]: Started sshd@1-139.178.90.101:22-139.178.68.195:45002.service. Feb 13 03:50:43.626786 sshd[1573]: Accepted publickey for core from 139.178.68.195 port 45002 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 03:50:43.627539 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 03:50:43.629777 systemd-logind[1467]: New session 4 of user core. Feb 13 03:50:43.630204 systemd[1]: Started session-4.scope. Feb 13 03:50:43.680442 sshd[1573]: pam_unix(sshd:session): session closed for user core Feb 13 03:50:43.682362 systemd[1]: sshd@1-139.178.90.101:22-139.178.68.195:45002.service: Deactivated successfully. Feb 13 03:50:43.682816 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 03:50:43.683261 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Feb 13 03:50:43.684000 systemd[1]: Started sshd@2-139.178.90.101:22-139.178.68.195:45006.service. Feb 13 03:50:43.684652 systemd-logind[1467]: Removed session 4. Feb 13 03:50:43.714860 sshd[1579]: Accepted publickey for core from 139.178.68.195 port 45006 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 03:50:43.715737 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 03:50:43.718678 systemd-logind[1467]: New session 5 of user core. Feb 13 03:50:43.719318 systemd[1]: Started session-5.scope. Feb 13 03:50:43.773427 sshd[1579]: pam_unix(sshd:session): session closed for user core Feb 13 03:50:43.774791 systemd[1]: sshd@2-139.178.90.101:22-139.178.68.195:45006.service: Deactivated successfully. Feb 13 03:50:43.775160 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 03:50:43.775428 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Feb 13 03:50:43.776040 systemd-logind[1467]: Removed session 5. 
Feb 13 03:50:43.838693 coreos-metadata[1430]: Feb 13 03:50:43.838 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 13 03:50:43.839600 coreos-metadata[1433]: Feb 13 03:50:43.838 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 13 03:50:43.864329 coreos-metadata[1430]: Feb 13 03:50:43.864 INFO Fetch successful Feb 13 03:50:43.864432 coreos-metadata[1433]: Feb 13 03:50:43.864 INFO Fetch successful Feb 13 03:50:43.885550 systemd[1]: Finished coreos-metadata.service. Feb 13 03:50:43.886249 systemd[1]: Started packet-phone-home.service. Feb 13 03:50:43.886445 unknown[1430]: wrote ssh authorized keys file for user: core Feb 13 03:50:43.892365 curl[1587]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 03:50:43.892575 curl[1587]: Dload Upload Total Spent Left Speed Feb 13 03:50:43.913858 update-ssh-keys[1588]: Updated "/home/core/.ssh/authorized_keys" Feb 13 03:50:43.915092 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 03:50:43.916131 systemd[1]: Reached target multi-user.target. Feb 13 03:50:43.919238 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 03:50:43.938993 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 13 03:50:43.939408 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 03:50:43.939913 systemd[1]: Startup finished in 1.901s (kernel) + 46.268s (initrd) + 15.836s (userspace) = 1min 4.007s. Feb 13 03:50:44.105163 curl[1587]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 03:50:44.107511 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 13 03:50:53.782208 systemd[1]: Started sshd@3-139.178.90.101:22-139.178.68.195:54970.service. Feb 13 03:50:53.811259 sshd[1591]: Accepted publickey for core from 139.178.68.195 port 54970 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 03:50:53.812066 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 03:50:53.814995 systemd-logind[1467]: New session 6 of user core. Feb 13 03:50:53.815529 systemd[1]: Started session-6.scope. Feb 13 03:50:53.879976 sshd[1591]: pam_unix(sshd:session): session closed for user core Feb 13 03:50:53.886425 systemd[1]: sshd@3-139.178.90.101:22-139.178.68.195:54970.service: Deactivated successfully. Feb 13 03:50:53.887189 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 03:50:53.887450 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Feb 13 03:50:53.887982 systemd[1]: Started sshd@4-139.178.90.101:22-139.178.68.195:54974.service. Feb 13 03:50:53.888361 systemd-logind[1467]: Removed session 6. Feb 13 03:50:53.916824 sshd[1597]: Accepted publickey for core from 139.178.68.195 port 54974 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 03:50:53.917564 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 03:50:53.920126 systemd-logind[1467]: New session 7 of user core. Feb 13 03:50:53.920620 systemd[1]: Started session-7.scope. Feb 13 03:50:53.969584 sshd[1597]: pam_unix(sshd:session): session closed for user core Feb 13 03:50:53.974417 systemd[1]: sshd@4-139.178.90.101:22-139.178.68.195:54974.service: Deactivated successfully. Feb 13 03:50:53.975850 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 03:50:53.977360 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit.
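The "Startup finished" entry above splits the 1min 4.007s boot into kernel, initrd, and userspace phases, with the initrd's 46.268s dominating. Summing the phases reproduces the total up to rounding:

    phases = {"kernel": 1.901, "initrd": 46.268, "userspace": 15.836}
    print(f"{sum(phases.values()):.3f}s")  # 64.005s vs. the logged 1min 4.007s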
Feb 13 03:50:53.979703 systemd[1]: Started sshd@5-139.178.90.101:22-139.178.68.195:54988.service. Feb 13 03:50:53.982118 systemd-logind[1467]: Removed session 7. Feb 13 03:50:54.033842 sshd[1604]: Accepted publickey for core from 139.178.68.195 port 54988 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 03:50:54.034496 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 03:50:54.036739 systemd-logind[1467]: New session 8 of user core. Feb 13 03:50:54.037118 systemd[1]: Started session-8.scope. Feb 13 03:50:54.087789 sshd[1604]: pam_unix(sshd:session): session closed for user core Feb 13 03:50:54.090319 systemd[1]: sshd@5-139.178.90.101:22-139.178.68.195:54988.service: Deactivated successfully. Feb 13 03:50:54.090968 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 03:50:54.091682 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Feb 13 03:50:54.092746 systemd[1]: Started sshd@6-139.178.90.101:22-139.178.68.195:55004.service. Feb 13 03:50:54.093655 systemd-logind[1467]: Removed session 8. Feb 13 03:50:54.162776 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 55004 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 03:50:54.164918 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 03:50:54.171430 systemd-logind[1467]: New session 9 of user core. Feb 13 03:50:54.172892 systemd[1]: Started session-9.scope. Feb 13 03:50:54.263561 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 03:50:54.264184 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 03:50:58.292716 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 03:50:58.297005 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 03:50:58.297194 systemd[1]: Reached target network-online.target. Feb 13 03:50:58.297883 systemd[1]: Starting docker.service... Feb 13 03:50:58.318921 env[1633]: time="2024-02-13T03:50:58.318860802Z" level=info msg="Starting up" Feb 13 03:50:58.319535 env[1633]: time="2024-02-13T03:50:58.319523085Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 13 03:50:58.319535 env[1633]: time="2024-02-13T03:50:58.319532275Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 13 03:50:58.319602 env[1633]: time="2024-02-13T03:50:58.319547886Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 13 03:50:58.319602 env[1633]: time="2024-02-13T03:50:58.319557606Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 13 03:50:58.320577 env[1633]: time="2024-02-13T03:50:58.320565864Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 13 03:50:58.320577 env[1633]: time="2024-02-13T03:50:58.320574157Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 13 03:50:58.320659 env[1633]: time="2024-02-13T03:50:58.320585463Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 13 03:50:58.320659 env[1633]: time="2024-02-13T03:50:58.320593414Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 13 03:50:58.333063 env[1633]: time="2024-02-13T03:50:58.333047167Z" level=info msg="Loading containers: start." 
Feb 13 03:50:58.487478 kernel: Initializing XFRM netlink socket Feb 13 03:50:58.547832 env[1633]: time="2024-02-13T03:50:58.547780490Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 13 03:50:58.548649 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 03:50:58.588244 systemd-networkd[1312]: docker0: Link UP Feb 13 03:50:58.592496 env[1633]: time="2024-02-13T03:50:58.592447906Z" level=info msg="Loading containers: done." Feb 13 03:50:58.597508 env[1633]: time="2024-02-13T03:50:58.597456625Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 03:50:58.597580 env[1633]: time="2024-02-13T03:50:58.597536443Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 13 03:50:58.597603 env[1633]: time="2024-02-13T03:50:58.597577877Z" level=info msg="Daemon has completed initialization" Feb 13 03:50:58.604054 systemd[1]: Started docker.service. Feb 13 03:50:58.607738 env[1633]: time="2024-02-13T03:50:58.607684092Z" level=info msg="API listen on /run/docker.sock" Feb 13 03:50:58.623287 systemd[1]: Reloading. Feb 13 03:50:58.660916 /usr/lib/systemd/system-generators/torcx-generator[1786]: time="2024-02-13T03:50:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 03:50:58.660948 /usr/lib/systemd/system-generators/torcx-generator[1786]: time="2024-02-13T03:50:58Z" level=info msg="torcx already run" Feb 13 03:50:59.358708 systemd-resolved[1422]: Clock change detected. Flushing caches. Feb 13 03:50:59.358729 systemd-timesyncd[1423]: Contacted time server [2600:3c01::f03c:91ff:febc:67d4]:123 (2.flatcar.pool.ntp.org). Feb 13 03:50:59.358764 systemd-timesyncd[1423]: Initial clock synchronization to Tue 2024-02-13 03:50:59.358639 UTC. Feb 13 03:50:59.364828 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 03:50:59.364838 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 03:50:59.379719 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 03:50:59.432516 systemd[1]: Started kubelet.service. Feb 13 03:50:59.454806 kubelet[1846]: E0213 03:50:59.454716 1846 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 13 03:50:59.455931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 03:50:59.455997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 03:51:00.139047 env[1480]: time="2024-02-13T03:51:00.138906626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 13 03:51:00.758251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144045692.mount: Deactivated successfully. 
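Two entries in the stretch above deserve a note. First, the docker daemon's bridge message is informational: docker0 defaults to 172.17.0.0/16, and the --bip daemon option, or the "bip" key in /etc/docker/daemon.json, overrides it. A minimal sketch, assuming the stock daemon.json location and an arbitrary example subnet (neither is read from this host):

    { "bip": "10.200.0.1/24" }

Second, the kubelet exit ("failed to validate kubelet flags: the container runtime endpoint address was not specified or empty") means kubelet.service was started before any --container-runtime-endpoint flag was in place. On a containerd host such as this one, that flag normally points at the containerd socket, e.g. --container-runtime-endpoint=unix:///run/containerd/containerd.sock, usually delivered through a systemd drop-in; the exact drop-in this image uses is not visible in the log.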
Feb 13 03:51:02.074672 env[1480]: time="2024-02-13T03:51:02.074598258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:02.075775 env[1480]: time="2024-02-13T03:51:02.075720699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:02.077899 env[1480]: time="2024-02-13T03:51:02.077844680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:02.079823 env[1480]: time="2024-02-13T03:51:02.079766802Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:02.080713 env[1480]: time="2024-02-13T03:51:02.080661143Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 13 03:51:02.091176 env[1480]: time="2024-02-13T03:51:02.091145338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 13 03:51:03.865282 env[1480]: time="2024-02-13T03:51:03.865235234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:03.866442 env[1480]: time="2024-02-13T03:51:03.866428719Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:03.867241 env[1480]: time="2024-02-13T03:51:03.867226967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:03.868411 env[1480]: time="2024-02-13T03:51:03.868397570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:03.868848 env[1480]: time="2024-02-13T03:51:03.868813730Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 13 03:51:03.875382 env[1480]: time="2024-02-13T03:51:03.875361980Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 13 03:51:05.147414 env[1480]: time="2024-02-13T03:51:05.147351172Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:05.148029 env[1480]: time="2024-02-13T03:51:05.147992008Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:05.149322 env[1480]: 
time="2024-02-13T03:51:05.149282267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:05.150221 env[1480]: time="2024-02-13T03:51:05.150180036Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:05.150625 env[1480]: time="2024-02-13T03:51:05.150582329Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 13 03:51:05.157751 env[1480]: time="2024-02-13T03:51:05.157737183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 13 03:51:06.058160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318607593.mount: Deactivated successfully. Feb 13 03:51:06.348046 env[1480]: time="2024-02-13T03:51:06.347970742Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:06.348763 env[1480]: time="2024-02-13T03:51:06.348729208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:06.349357 env[1480]: time="2024-02-13T03:51:06.349323009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:06.349988 env[1480]: time="2024-02-13T03:51:06.349948443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:06.350273 env[1480]: time="2024-02-13T03:51:06.350232044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 13 03:51:06.355823 env[1480]: time="2024-02-13T03:51:06.355771232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 03:51:06.927306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894433407.mount: Deactivated successfully. 
Feb 13 03:51:06.928598 env[1480]: time="2024-02-13T03:51:06.928558330Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:06.929246 env[1480]: time="2024-02-13T03:51:06.929206354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:06.929993 env[1480]: time="2024-02-13T03:51:06.929949690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:06.930791 env[1480]: time="2024-02-13T03:51:06.930754034Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:06.931185 env[1480]: time="2024-02-13T03:51:06.931130116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 03:51:06.936740 env[1480]: time="2024-02-13T03:51:06.936706946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 13 03:51:07.627819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369369105.mount: Deactivated successfully. Feb 13 03:51:09.557139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 03:51:09.557260 systemd[1]: Stopped kubelet.service. Feb 13 03:51:09.558138 systemd[1]: Started kubelet.service. Feb 13 03:51:09.582219 kubelet[1936]: E0213 03:51:09.582132 1936 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 13 03:51:09.584307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 03:51:09.584380 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
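The kubelet fails again here with the identical flag-validation error, and "Scheduled restart job, restart counter is at 1" is systemd's failure-restart logic driving the retry: roughly ten seconds separate the first failure at 03:50:59.456 from this restart at 03:51:09.557, consistent with a [Service] stanza along these lines (a sketch of the assumed restart policy, not read from this host):

    # kubelet.service, [Service] section -- illustrative values
    Restart=on-failure
    RestartSec=10s

The loop only converges once something (here, the cluster bootstrap that follows at 03:51:12) rewrites the kubelet's arguments so validation passes.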
Feb 13 03:51:10.520414 env[1480]: time="2024-02-13T03:51:10.520334818Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:10.521081 env[1480]: time="2024-02-13T03:51:10.521017779Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:10.521830 env[1480]: time="2024-02-13T03:51:10.521785186Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:10.522959 env[1480]: time="2024-02-13T03:51:10.522923283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:10.523253 env[1480]: time="2024-02-13T03:51:10.523205127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 13 03:51:10.529916 env[1480]: time="2024-02-13T03:51:10.529897570Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 13 03:51:11.062125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2201174059.mount: Deactivated successfully. Feb 13 03:51:11.466799 env[1480]: time="2024-02-13T03:51:11.466733256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:11.467444 env[1480]: time="2024-02-13T03:51:11.467419403Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:11.468215 env[1480]: time="2024-02-13T03:51:11.468202712Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:11.468968 env[1480]: time="2024-02-13T03:51:11.468955625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:11.469197 env[1480]: time="2024-02-13T03:51:11.469184347Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 13 03:51:12.820837 systemd[1]: Stopped kubelet.service. Feb 13 03:51:12.831482 systemd[1]: Reloading. 
Feb 13 03:51:12.866778 /usr/lib/systemd/system-generators/torcx-generator[2095]: time="2024-02-13T03:51:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 03:51:12.866805 /usr/lib/systemd/system-generators/torcx-generator[2095]: time="2024-02-13T03:51:12Z" level=info msg="torcx already run" Feb 13 03:51:12.925886 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 03:51:12.925896 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 03:51:12.940415 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 03:51:12.994744 systemd[1]: Started kubelet.service. Feb 13 03:51:13.017914 kubelet[2155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 13 03:51:13.017914 kubelet[2155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 03:51:13.018182 kubelet[2155]: I0213 03:51:13.017945 2155 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 03:51:13.019542 kubelet[2155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 13 03:51:13.019542 kubelet[2155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 03:51:13.323226 kubelet[2155]: I0213 03:51:13.323186 2155 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 13 03:51:13.323226 kubelet[2155]: I0213 03:51:13.323199 2155 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 03:51:13.323319 kubelet[2155]: I0213 03:51:13.323315 2155 server.go:836] "Client rotation is on, will bootstrap in background" Feb 13 03:51:13.324699 kubelet[2155]: I0213 03:51:13.324688 2155 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 03:51:13.325088 kubelet[2155]: E0213 03:51:13.325052 2155 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.90.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.343904 kubelet[2155]: I0213 03:51:13.343864 2155 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 03:51:13.343967 kubelet[2155]: I0213 03:51:13.343960 2155 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 03:51:13.344007 kubelet[2155]: I0213 03:51:13.343998 2155 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 03:51:13.344061 kubelet[2155]: I0213 03:51:13.344009 2155 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 03:51:13.344061 kubelet[2155]: I0213 03:51:13.344016 2155 container_manager_linux.go:308] "Creating device plugin manager" Feb 13 03:51:13.344061 kubelet[2155]: I0213 03:51:13.344060 2155 state_mem.go:36] "Initialized new in-memory state store" Feb 13 03:51:13.345409 kubelet[2155]: I0213 03:51:13.345401 2155 kubelet.go:398] "Attempting to sync node with API server" Feb 13 03:51:13.345445 kubelet[2155]: I0213 03:51:13.345412 2155 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 03:51:13.345445 kubelet[2155]: I0213 03:51:13.345422 2155 kubelet.go:297] "Adding apiserver pod source" Feb 13 03:51:13.345445 kubelet[2155]: I0213 03:51:13.345431 2155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 03:51:13.346242 kubelet[2155]: I0213 03:51:13.346223 2155 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 03:51:13.346472 kubelet[2155]: W0213 03:51:13.346403 2155 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.90.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.346520 kubelet[2155]: W0213 03:51:13.346497 2155 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 03:51:13.346552 kubelet[2155]: E0213 03:51:13.346526 2155 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.90.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.346628 kubelet[2155]: W0213 03:51:13.346583 2155 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.90.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-fff065a016&limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.346679 kubelet[2155]: E0213 03:51:13.346650 2155 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.90.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-fff065a016&limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.346926 kubelet[2155]: I0213 03:51:13.346919 2155 server.go:1186] "Started kubelet" Feb 13 03:51:13.347009 kubelet[2155]: I0213 03:51:13.346999 2155 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 03:51:13.347181 kubelet[2155]: E0213 03:51:13.347170 2155 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 03:51:13.347218 kubelet[2155]: E0213 03:51:13.347104 2155 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-fff065a016.17b34fa98d940c70", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-fff065a016", UID:"ci-3510.3.2-a-fff065a016", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-fff065a016"}, FirstTimestamp:time.Date(2024, time.February, 13, 3, 51, 13, 346907248, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 3, 51, 13, 346907248, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://139.178.90.101:6443/api/v1/namespaces/default/events": dial tcp 139.178.90.101:6443: connect: connection refused'(may retry after sleeping) Feb 13 03:51:13.347218 kubelet[2155]: E0213 03:51:13.347187 2155 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 03:51:13.356883 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
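The cluster of "dial tcp 139.178.90.101:6443: connect: connection refused" failures is the expected chicken-and-egg of a self-hosted control plane: this kubelet is about to start kube-apiserver itself as a static pod, so every informer list/watch, node registration, lease update, and event POST against port 6443 must fail until that pod is serving. The "(may retry after sleeping)" suffix on the event error shows the client backing off and retrying rather than giving up.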
Feb 13 03:51:13.356954 kubelet[2155]: I0213 03:51:13.356895 2155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 03:51:13.356954 kubelet[2155]: I0213 03:51:13.356921 2155 server.go:451] "Adding debug handlers to kubelet server" Feb 13 03:51:13.357025 kubelet[2155]: I0213 03:51:13.356971 2155 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 13 03:51:13.357025 kubelet[2155]: E0213 03:51:13.357012 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:13.357025 kubelet[2155]: I0213 03:51:13.357019 2155 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 03:51:13.357174 kubelet[2155]: E0213 03:51:13.357159 2155 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://139.178.90.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-fff065a016?timeout=10s": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.357266 kubelet[2155]: W0213 03:51:13.357243 2155 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.90.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.357305 kubelet[2155]: E0213 03:51:13.357274 2155 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.90.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.376577 kubelet[2155]: I0213 03:51:13.376536 2155 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 03:51:13.388059 kubelet[2155]: I0213 03:51:13.388016 2155 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 13 03:51:13.388059 kubelet[2155]: I0213 03:51:13.388026 2155 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 13 03:51:13.388059 kubelet[2155]: I0213 03:51:13.388038 2155 kubelet.go:2113] "Starting kubelet main sync loop" Feb 13 03:51:13.388161 kubelet[2155]: E0213 03:51:13.388068 2155 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 03:51:13.388308 kubelet[2155]: W0213 03:51:13.388294 2155 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.90.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.388340 kubelet[2155]: E0213 03:51:13.388316 2155 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.90.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.441525 kubelet[2155]: I0213 03:51:13.441442 2155 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 03:51:13.441525 kubelet[2155]: I0213 03:51:13.441485 2155 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 03:51:13.441525 kubelet[2155]: I0213 03:51:13.441519 2155 state_mem.go:36] "Initialized new in-memory state store" Feb 13 03:51:13.443275 kubelet[2155]: I0213 03:51:13.443189 2155 policy_none.go:49] "None policy: Start" Feb 13 03:51:13.444216 kubelet[2155]: I0213 03:51:13.444175 2155 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 03:51:13.444216 kubelet[2155]: I0213 03:51:13.444224 2155 state_mem.go:35] "Initializing new in-memory state store" Feb 13 03:51:13.453998 systemd[1]: Created slice kubepods.slice. Feb 13 03:51:13.461032 kubelet[2155]: I0213 03:51:13.460952 2155 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.461703 kubelet[2155]: E0213 03:51:13.461618 2155 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.101:6443/api/v1/nodes\": dial tcp 139.178.90.101:6443: connect: connection refused" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.464107 systemd[1]: Created slice kubepods-burstable.slice. Feb 13 03:51:13.471399 systemd[1]: Created slice kubepods-besteffort.slice. 
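Created slice kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice is the kubelet, through the systemd cgroup driver noted in the nodeConfig, laying down the QoS tiers of the pod cgroup hierarchy; each admitted pod then gets its own kubepods-burstable-pod<UID>.slice under the tier matching its QoS class, as the next lines show for the three control-plane pods.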
Feb 13 03:51:13.486940 kubelet[2155]: I0213 03:51:13.486854 2155 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 03:51:13.487425 kubelet[2155]: I0213 03:51:13.487344 2155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 03:51:13.487983 kubelet[2155]: E0213 03:51:13.487938 2155 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:13.489078 kubelet[2155]: I0213 03:51:13.489027 2155 topology_manager.go:210] "Topology Admit Handler" Feb 13 03:51:13.492481 kubelet[2155]: I0213 03:51:13.492401 2155 topology_manager.go:210] "Topology Admit Handler" Feb 13 03:51:13.495845 kubelet[2155]: I0213 03:51:13.495801 2155 topology_manager.go:210] "Topology Admit Handler" Feb 13 03:51:13.496263 kubelet[2155]: I0213 03:51:13.496219 2155 status_manager.go:698] "Failed to get status for pod" podUID=f5138d2e00e53a3987728b6dd8150ab3 pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" err="Get \"https://139.178.90.101:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-fff065a016\": dial tcp 139.178.90.101:6443: connect: connection refused" Feb 13 03:51:13.500061 kubelet[2155]: I0213 03:51:13.499977 2155 status_manager.go:698] "Failed to get status for pod" podUID=ce2158e3d065369744fa10d8f9abe709 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" err="Get \"https://139.178.90.101:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-fff065a016\": dial tcp 139.178.90.101:6443: connect: connection refused" Feb 13 03:51:13.503317 kubelet[2155]: I0213 03:51:13.503270 2155 status_manager.go:698] "Failed to get status for pod" podUID=63f2a98ce6cba3e4c95d811d0b2b8226 pod="kube-system/kube-scheduler-ci-3510.3.2-a-fff065a016" err="Get \"https://139.178.90.101:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-fff065a016\": dial tcp 139.178.90.101:6443: connect: connection refused" Feb 13 03:51:13.507866 systemd[1]: Created slice kubepods-burstable-podf5138d2e00e53a3987728b6dd8150ab3.slice. Feb 13 03:51:13.543653 systemd[1]: Created slice kubepods-burstable-podce2158e3d065369744fa10d8f9abe709.slice. Feb 13 03:51:13.559013 kubelet[2155]: E0213 03:51:13.558905 2155 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://139.178.90.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-fff065a016?timeout=10s": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:13.571792 systemd[1]: Created slice kubepods-burstable-pod63f2a98ce6cba3e4c95d811d0b2b8226.slice. 
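The three "Topology Admit Handler" entries are the kubelet admitting the kube-apiserver, kube-controller-manager, and kube-scheduler static pods it read from /etc/kubernetes/manifests (the "Adding static pod path" source registered at startup). The paired "Failed to get status for pod ... connection refused" errors follow directly: static pods are mirrored to the API server for visibility, and those mirror-pod status calls cannot succeed while the API server pod itself is still being created.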
Feb 13 03:51:13.659212 kubelet[2155]: I0213 03:51:13.658977 2155 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5138d2e00e53a3987728b6dd8150ab3-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-fff065a016\" (UID: \"f5138d2e00e53a3987728b6dd8150ab3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.659212 kubelet[2155]: I0213 03:51:13.659088 2155 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.659593 kubelet[2155]: I0213 03:51:13.659290 2155 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.659593 kubelet[2155]: I0213 03:51:13.659417 2155 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5138d2e00e53a3987728b6dd8150ab3-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-fff065a016\" (UID: \"f5138d2e00e53a3987728b6dd8150ab3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.659593 kubelet[2155]: I0213 03:51:13.659511 2155 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5138d2e00e53a3987728b6dd8150ab3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-fff065a016\" (UID: \"f5138d2e00e53a3987728b6dd8150ab3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.659593 kubelet[2155]: I0213 03:51:13.659582 2155 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.660305 kubelet[2155]: I0213 03:51:13.659644 2155 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.660305 kubelet[2155]: I0213 03:51:13.659719 2155 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.660305 kubelet[2155]: I0213 03:51:13.659834 2155 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63f2a98ce6cba3e4c95d811d0b2b8226-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-fff065a016\" (UID: \"63f2a98ce6cba3e4c95d811d0b2b8226\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.665965 kubelet[2155]: I0213 03:51:13.665882 2155 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.666629 kubelet[2155]: E0213 03:51:13.666548 2155 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.101:6443/api/v1/nodes\": dial tcp 139.178.90.101:6443: connect: connection refused" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:13.839668 env[1480]: time="2024-02-13T03:51:13.839521303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-fff065a016,Uid:f5138d2e00e53a3987728b6dd8150ab3,Namespace:kube-system,Attempt:0,}" Feb 13 03:51:13.867559 env[1480]: time="2024-02-13T03:51:13.867420844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-fff065a016,Uid:ce2158e3d065369744fa10d8f9abe709,Namespace:kube-system,Attempt:0,}" Feb 13 03:51:13.877405 env[1480]: time="2024-02-13T03:51:13.877315806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-fff065a016,Uid:63f2a98ce6cba3e4c95d811d0b2b8226,Namespace:kube-system,Attempt:0,}" Feb 13 03:51:13.959803 kubelet[2155]: E0213 03:51:13.959589 2155 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://139.178.90.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-fff065a016?timeout=10s": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:14.070911 kubelet[2155]: I0213 03:51:14.070859 2155 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:14.071652 kubelet[2155]: E0213 03:51:14.071570 2155 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.101:6443/api/v1/nodes\": dial tcp 139.178.90.101:6443: connect: connection refused" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:14.173752 kubelet[2155]: W0213 03:51:14.173605 2155 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.90.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:14.173752 kubelet[2155]: E0213 03:51:14.173728 2155 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.90.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:14.404677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount499821149.mount: Deactivated successfully. 
Feb 13 03:51:14.406071 env[1480]: time="2024-02-13T03:51:14.406022630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.406159 kubelet[2155]: W0213 03:51:14.406105 2155 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.90.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-fff065a016&limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:14.406159 kubelet[2155]: E0213 03:51:14.406137 2155 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.90.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-fff065a016&limit=500&resourceVersion=0": dial tcp 139.178.90.101:6443: connect: connection refused Feb 13 03:51:14.406981 env[1480]: time="2024-02-13T03:51:14.406924314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.407694 env[1480]: time="2024-02-13T03:51:14.407648591Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.408291 env[1480]: time="2024-02-13T03:51:14.408251933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.408674 env[1480]: time="2024-02-13T03:51:14.408636296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.409369 env[1480]: time="2024-02-13T03:51:14.409328082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.411083 env[1480]: time="2024-02-13T03:51:14.411042060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.411520 env[1480]: time="2024-02-13T03:51:14.411479280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.412820 env[1480]: time="2024-02-13T03:51:14.412780074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.413125 env[1480]: time="2024-02-13T03:51:14.413112195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.414319 env[1480]: time="2024-02-13T03:51:14.414294680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.414986 env[1480]: time="2024-02-13T03:51:14.414949717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 03:51:14.420359 env[1480]: time="2024-02-13T03:51:14.420290063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 03:51:14.420359 env[1480]: time="2024-02-13T03:51:14.420335015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 03:51:14.420359 env[1480]: time="2024-02-13T03:51:14.420346974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 03:51:14.420655 env[1480]: time="2024-02-13T03:51:14.420632862Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/253c1666fc49d014b048f6fe714d1cedb51eabb4323f398826556a9634200e35 pid=2241 runtime=io.containerd.runc.v2 Feb 13 03:51:14.422508 env[1480]: time="2024-02-13T03:51:14.422468052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 03:51:14.422508 env[1480]: time="2024-02-13T03:51:14.422492807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 03:51:14.422508 env[1480]: time="2024-02-13T03:51:14.422500430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 03:51:14.422669 env[1480]: time="2024-02-13T03:51:14.422594084Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3691c21b3103359982bc5033f8a132dee902a4edbc3b5416ee455dfa6a6da3ae pid=2266 runtime=io.containerd.runc.v2 Feb 13 03:51:14.422729 env[1480]: time="2024-02-13T03:51:14.422710229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 03:51:14.422750 env[1480]: time="2024-02-13T03:51:14.422728866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 03:51:14.422750 env[1480]: time="2024-02-13T03:51:14.422737072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 03:51:14.422801 env[1480]: time="2024-02-13T03:51:14.422789797Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/767931e55d9348336884a1d479b7cbd2f78ff6f88836e903df92267121b443f3 pid=2264 runtime=io.containerd.runc.v2 Feb 13 03:51:14.428696 systemd[1]: Started cri-containerd-253c1666fc49d014b048f6fe714d1cedb51eabb4323f398826556a9634200e35.scope. Feb 13 03:51:14.431903 systemd[1]: Started cri-containerd-3691c21b3103359982bc5033f8a132dee902a4edbc3b5416ee455dfa6a6da3ae.scope. Feb 13 03:51:14.432679 systemd[1]: Started cri-containerd-767931e55d9348336884a1d479b7cbd2f78ff6f88836e903df92267121b443f3.scope. 
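Each "starting signal loop" line is a containerd shim (runtime v2, runc) coming up for one of the three sandboxes, with its task directory under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox-id> and its own pid (2241, 2264, 2266); the matching "Started cri-containerd-<sandbox-id>.scope" entries are systemd wrapping those shims in transient scopes so the pods land inside the kubepods slices created earlier.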
Feb 13 03:51:14.455591 env[1480]: time="2024-02-13T03:51:14.455550380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-fff065a016,Uid:f5138d2e00e53a3987728b6dd8150ab3,Namespace:kube-system,Attempt:0,} returns sandbox id \"253c1666fc49d014b048f6fe714d1cedb51eabb4323f398826556a9634200e35\"" Feb 13 03:51:14.455785 env[1480]: time="2024-02-13T03:51:14.455740935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-fff065a016,Uid:63f2a98ce6cba3e4c95d811d0b2b8226,Namespace:kube-system,Attempt:0,} returns sandbox id \"767931e55d9348336884a1d479b7cbd2f78ff6f88836e903df92267121b443f3\"" Feb 13 03:51:14.457339 env[1480]: time="2024-02-13T03:51:14.457321865Z" level=info msg="CreateContainer within sandbox \"767931e55d9348336884a1d479b7cbd2f78ff6f88836e903df92267121b443f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 03:51:14.457483 env[1480]: time="2024-02-13T03:51:14.457467150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-fff065a016,Uid:ce2158e3d065369744fa10d8f9abe709,Namespace:kube-system,Attempt:0,} returns sandbox id \"3691c21b3103359982bc5033f8a132dee902a4edbc3b5416ee455dfa6a6da3ae\"" Feb 13 03:51:14.457535 env[1480]: time="2024-02-13T03:51:14.457476436Z" level=info msg="CreateContainer within sandbox \"253c1666fc49d014b048f6fe714d1cedb51eabb4323f398826556a9634200e35\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 03:51:14.458409 env[1480]: time="2024-02-13T03:51:14.458396306Z" level=info msg="CreateContainer within sandbox \"3691c21b3103359982bc5033f8a132dee902a4edbc3b5416ee455dfa6a6da3ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 03:51:14.463557 env[1480]: time="2024-02-13T03:51:14.463514287Z" level=info msg="CreateContainer within sandbox \"767931e55d9348336884a1d479b7cbd2f78ff6f88836e903df92267121b443f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d6fcc8b8d89b93b952ce59fcc7dfa4a14e94428fb497fc5bb7e17bf4fe85aaed\"" Feb 13 03:51:14.463796 env[1480]: time="2024-02-13T03:51:14.463757222Z" level=info msg="StartContainer for \"d6fcc8b8d89b93b952ce59fcc7dfa4a14e94428fb497fc5bb7e17bf4fe85aaed\"" Feb 13 03:51:14.465240 env[1480]: time="2024-02-13T03:51:14.465199541Z" level=info msg="CreateContainer within sandbox \"3691c21b3103359982bc5033f8a132dee902a4edbc3b5416ee455dfa6a6da3ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5180669f0e39ca5e408451ecfc6e302709d6d3ae6a5515cbaeea9313a4e93882\"" Feb 13 03:51:14.465344 env[1480]: time="2024-02-13T03:51:14.465327693Z" level=info msg="CreateContainer within sandbox \"253c1666fc49d014b048f6fe714d1cedb51eabb4323f398826556a9634200e35\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"874b0fce43635b5fec555ac8c7f9606ab1cd3000a1a0b384f3bfffb2eba185af\"" Feb 13 03:51:14.465383 env[1480]: time="2024-02-13T03:51:14.465359375Z" level=info msg="StartContainer for \"5180669f0e39ca5e408451ecfc6e302709d6d3ae6a5515cbaeea9313a4e93882\"" Feb 13 03:51:14.465479 env[1480]: time="2024-02-13T03:51:14.465467519Z" level=info msg="StartContainer for \"874b0fce43635b5fec555ac8c7f9606ab1cd3000a1a0b384f3bfffb2eba185af\"" Feb 13 03:51:14.472234 systemd[1]: Started cri-containerd-d6fcc8b8d89b93b952ce59fcc7dfa4a14e94428fb497fc5bb7e17bf4fe85aaed.scope. 
Feb 13 03:51:14.474494 systemd[1]: Started cri-containerd-5180669f0e39ca5e408451ecfc6e302709d6d3ae6a5515cbaeea9313a4e93882.scope. Feb 13 03:51:14.475088 systemd[1]: Started cri-containerd-874b0fce43635b5fec555ac8c7f9606ab1cd3000a1a0b384f3bfffb2eba185af.scope. Feb 13 03:51:14.498534 env[1480]: time="2024-02-13T03:51:14.498506016Z" level=info msg="StartContainer for \"d6fcc8b8d89b93b952ce59fcc7dfa4a14e94428fb497fc5bb7e17bf4fe85aaed\" returns successfully" Feb 13 03:51:14.507096 env[1480]: time="2024-02-13T03:51:14.507045801Z" level=info msg="StartContainer for \"874b0fce43635b5fec555ac8c7f9606ab1cd3000a1a0b384f3bfffb2eba185af\" returns successfully" Feb 13 03:51:14.508039 env[1480]: time="2024-02-13T03:51:14.508021012Z" level=info msg="StartContainer for \"5180669f0e39ca5e408451ecfc6e302709d6d3ae6a5515cbaeea9313a4e93882\" returns successfully" Feb 13 03:51:14.873769 kubelet[2155]: I0213 03:51:14.873752 2155 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:15.332454 kubelet[2155]: E0213 03:51:15.332428 2155 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-fff065a016\" not found" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:15.433034 kubelet[2155]: I0213 03:51:15.432956 2155 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:15.453157 kubelet[2155]: E0213 03:51:15.453124 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:15.553859 kubelet[2155]: E0213 03:51:15.553799 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:15.654232 kubelet[2155]: E0213 03:51:15.654041 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:15.754559 kubelet[2155]: E0213 03:51:15.754502 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:15.855221 kubelet[2155]: E0213 03:51:15.855153 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:15.955458 kubelet[2155]: E0213 03:51:15.955277 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:16.056493 kubelet[2155]: E0213 03:51:16.056429 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:16.157729 kubelet[2155]: E0213 03:51:16.157633 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:16.258161 kubelet[2155]: E0213 03:51:16.257969 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:16.358808 kubelet[2155]: E0213 03:51:16.358719 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:16.459415 kubelet[2155]: E0213 03:51:16.459306 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:16.560432 kubelet[2155]: E0213 03:51:16.560338 2155 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:16.661515 kubelet[2155]: E0213 03:51:16.661412 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:16.762582 kubelet[2155]: E0213 03:51:16.762498 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:16.863676 kubelet[2155]: E0213 03:51:16.863480 2155 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fff065a016\" not found" Feb 13 03:51:17.347823 kubelet[2155]: I0213 03:51:17.347718 2155 apiserver.go:52] "Watching apiserver" Feb 13 03:51:17.357623 kubelet[2155]: I0213 03:51:17.357570 2155 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 03:51:17.383379 kubelet[2155]: I0213 03:51:17.383263 2155 reconciler.go:41] "Reconciler: start to sync state" Feb 13 03:51:18.668652 systemd[1]: Reloading. Feb 13 03:51:18.726360 /usr/lib/systemd/system-generators/torcx-generator[2527]: time="2024-02-13T03:51:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 03:51:18.726396 /usr/lib/systemd/system-generators/torcx-generator[2527]: time="2024-02-13T03:51:18Z" level=info msg="torcx already run" Feb 13 03:51:18.804303 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 03:51:18.804315 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 03:51:18.821093 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 03:51:18.885011 systemd[1]: Stopping kubelet.service... Feb 13 03:51:18.885160 kubelet[2155]: I0213 03:51:18.885057 2155 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 03:51:18.901756 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 03:51:18.901860 systemd[1]: Stopped kubelet.service. Feb 13 03:51:18.902798 systemd[1]: Started kubelet.service. Feb 13 03:51:18.926616 kubelet[2585]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 13 03:51:18.926616 kubelet[2585]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 03:51:18.926616 kubelet[2585]: I0213 03:51:18.926560 2585 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 03:51:18.928633 kubelet[2585]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 13 03:51:18.928633 kubelet[2585]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 03:51:18.930527 kubelet[2585]: I0213 03:51:18.930489 2585 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 13 03:51:18.930527 kubelet[2585]: I0213 03:51:18.930499 2585 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 03:51:18.930656 kubelet[2585]: I0213 03:51:18.930622 2585 server.go:836] "Client rotation is on, will bootstrap in background" Feb 13 03:51:18.931323 kubelet[2585]: I0213 03:51:18.931315 2585 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 03:51:18.931691 kubelet[2585]: I0213 03:51:18.931657 2585 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 03:51:18.949375 kubelet[2585]: I0213 03:51:18.949359 2585 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 03:51:18.949501 kubelet[2585]: I0213 03:51:18.949465 2585 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 03:51:18.949542 kubelet[2585]: I0213 03:51:18.949504 2585 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 03:51:18.949542 kubelet[2585]: I0213 03:51:18.949515 2585 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 03:51:18.949542 kubelet[2585]: I0213 03:51:18.949521 2585 container_manager_linux.go:308] "Creating device plugin manager" Feb 13 03:51:18.949542 kubelet[2585]: I0213 03:51:18.949540 2585 state_mem.go:36] "Initialized new in-memory state store" Feb 13 03:51:18.951038 kubelet[2585]: I0213 03:51:18.951002 2585 kubelet.go:398] "Attempting to sync node with API server" Feb 13 03:51:18.951038 kubelet[2585]: I0213 03:51:18.951012 2585 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 03:51:18.951038 kubelet[2585]: I0213 03:51:18.951024 2585 kubelet.go:297] "Adding apiserver pod source" Feb 13
03:51:18.951038 kubelet[2585]: I0213 03:51:18.951032 2585 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 03:51:18.951360 kubelet[2585]: I0213 03:51:18.951351 2585 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 03:51:18.951627 kubelet[2585]: I0213 03:51:18.951619 2585 server.go:1186] "Started kubelet" Feb 13 03:51:18.951719 kubelet[2585]: I0213 03:51:18.951708 2585 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 03:51:18.953422 kubelet[2585]: E0213 03:51:18.952537 2585 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 03:51:18.953510 kubelet[2585]: E0213 03:51:18.953428 2585 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 03:51:18.953887 kubelet[2585]: I0213 03:51:18.953874 2585 server.go:451] "Adding debug handlers to kubelet server" Feb 13 03:51:18.953952 kubelet[2585]: I0213 03:51:18.953874 2585 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 03:51:18.953992 kubelet[2585]: I0213 03:51:18.953987 2585 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 13 03:51:18.954038 kubelet[2585]: I0213 03:51:18.954030 2585 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 03:51:18.966421 kubelet[2585]: I0213 03:51:18.966372 2585 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 03:51:18.973269 kubelet[2585]: I0213 03:51:18.973229 2585 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 13 03:51:18.973269 kubelet[2585]: I0213 03:51:18.973241 2585 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 13 03:51:18.973269 kubelet[2585]: I0213 03:51:18.973252 2585 kubelet.go:2113] "Starting kubelet main sync loop" Feb 13 03:51:18.973379 kubelet[2585]: E0213 03:51:18.973277 2585 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 03:51:18.973597 kubelet[2585]: I0213 03:51:18.973584 2585 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 03:51:18.973597 kubelet[2585]: I0213 03:51:18.973594 2585 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 03:51:18.973661 kubelet[2585]: I0213 03:51:18.973602 2585 state_mem.go:36] "Initialized new in-memory state store" Feb 13 03:51:18.973739 kubelet[2585]: I0213 03:51:18.973702 2585 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 03:51:18.973739 kubelet[2585]: I0213 03:51:18.973713 2585 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 13 03:51:18.973739 kubelet[2585]: I0213 03:51:18.973718 2585 policy_none.go:49] "None policy: Start" Feb 13 03:51:18.973992 kubelet[2585]: I0213 03:51:18.973956 2585 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 03:51:18.973992 kubelet[2585]: I0213 03:51:18.973966 2585 state_mem.go:35] "Initializing new in-memory state store" Feb 13 03:51:18.974054 kubelet[2585]: I0213 03:51:18.974038 2585 state_mem.go:75] "Updated machine memory state" Feb 13 03:51:18.975832 kubelet[2585]: I0213 03:51:18.975822 2585 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 03:51:18.975955 kubelet[2585]: I0213 03:51:18.975947 2585 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 03:51:18.994595 sudo[2648]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 03:51:18.994711 sudo[2648]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 13 03:51:19.055710 kubelet[2585]: I0213 03:51:19.055667 2585 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.060911 kubelet[2585]: I0213 03:51:19.060872 2585 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.060911 kubelet[2585]: I0213 03:51:19.060908 2585 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.074059 kubelet[2585]: I0213 03:51:19.074019 2585 topology_manager.go:210] "Topology Admit Handler" Feb 13 03:51:19.074127 kubelet[2585]: I0213 03:51:19.074069 2585 topology_manager.go:210] "Topology Admit Handler" Feb 13 03:51:19.074127 kubelet[2585]: I0213 03:51:19.074088 2585 topology_manager.go:210] "Topology Admit Handler" Feb 13 03:51:19.077177 kubelet[2585]: E0213 03:51:19.077135 2585 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-fff065a016\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.154831 kubelet[2585]: E0213 03:51:19.154778 2585 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-fff065a016\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.255546 kubelet[2585]: I0213 03:51:19.255419 2585 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5138d2e00e53a3987728b6dd8150ab3-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-fff065a016\" (UID: \"f5138d2e00e53a3987728b6dd8150ab3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.255546 kubelet[2585]: I0213 03:51:19.255445 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5138d2e00e53a3987728b6dd8150ab3-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-fff065a016\" (UID: \"f5138d2e00e53a3987728b6dd8150ab3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.255546 kubelet[2585]: I0213 03:51:19.255485 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5138d2e00e53a3987728b6dd8150ab3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-fff065a016\" (UID: \"f5138d2e00e53a3987728b6dd8150ab3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.255546 kubelet[2585]: I0213 03:51:19.255543 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.255721 kubelet[2585]: I0213 03:51:19.255605 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.255721 kubelet[2585]: I0213 03:51:19.255628 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.255721 kubelet[2585]: I0213 03:51:19.255643 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63f2a98ce6cba3e4c95d811d0b2b8226-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-fff065a016\" (UID: \"63f2a98ce6cba3e4c95d811d0b2b8226\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.255721 kubelet[2585]: I0213 03:51:19.255656 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.255721 kubelet[2585]: I0213 03:51:19.255680 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ce2158e3d065369744fa10d8f9abe709-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" (UID: \"ce2158e3d065369744fa10d8f9abe709\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" Feb 13 03:51:19.324494 sudo[2648]: pam_unix(sudo:session): session closed for user root Feb 13 03:51:19.951345 kubelet[2585]: I0213 03:51:19.951214 2585 apiserver.go:52] "Watching apiserver" Feb 13 03:51:20.055042 kubelet[2585]: I0213 03:51:20.054970 2585 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 03:51:20.061387 kubelet[2585]: I0213 03:51:20.061270 2585 reconciler.go:41] "Reconciler: start to sync state" Feb 13 03:51:20.355156 kubelet[2585]: E0213 03:51:20.355136 2585 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-fff065a016\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-fff065a016" Feb 13 03:51:20.367413 sudo[1613]: pam_unix(sudo:session): session closed for user root Feb 13 03:51:20.368431 sshd[1610]: pam_unix(sshd:session): session closed for user core Feb 13 03:51:20.370144 systemd[1]: sshd@6-139.178.90.101:22-139.178.68.195:55004.service: Deactivated successfully. Feb 13 03:51:20.370700 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 03:51:20.370820 systemd[1]: session-9.scope: Consumed 2.640s CPU time. Feb 13 03:51:20.371190 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Feb 13 03:51:20.371942 systemd-logind[1467]: Removed session 9. Feb 13 03:51:20.560398 kubelet[2585]: E0213 03:51:20.560283 2585 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-fff065a016\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" Feb 13 03:51:20.565617 update_engine[1469]: I0213 03:51:20.565519 1469 update_attempter.cc:509] Updating boot flags... 
Feb 13 03:51:20.760753 kubelet[2585]: E0213 03:51:20.760526 2585 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-fff065a016\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016"
Feb 13 03:51:21.359813 kubelet[2585]: I0213 03:51:21.359776 2585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-fff065a016" podStartSLOduration=4.359714994 pod.CreationTimestamp="2024-02-13 03:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 03:51:20.972257391 +0000 UTC m=+2.067744048" watchObservedRunningTime="2024-02-13 03:51:21.359714994 +0000 UTC m=+2.455201579"
Feb 13 03:51:21.360064 kubelet[2585]: I0213 03:51:21.359830 2585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-fff065a016" podStartSLOduration=4.359819778 pod.CreationTimestamp="2024-02-13 03:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 03:51:21.359714166 +0000 UTC m=+2.455200752" watchObservedRunningTime="2024-02-13 03:51:21.359819778 +0000 UTC m=+2.455306364"
Feb 13 03:51:21.760653 kubelet[2585]: I0213 03:51:21.760530 2585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fff065a016" podStartSLOduration=2.760489342 pod.CreationTimestamp="2024-02-13 03:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 03:51:21.760448767 +0000 UTC m=+2.855935359" watchObservedRunningTime="2024-02-13 03:51:21.760489342 +0000 UTC m=+2.855975933"
Feb 13 03:51:31.786032 kubelet[2585]: I0213 03:51:31.785960 2585 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 03:51:31.787183 kubelet[2585]: I0213 03:51:31.786964 2585 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 03:51:31.787333 env[1480]: time="2024-02-13T03:51:31.786582091Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 03:51:32.679612 kubelet[2585]: I0213 03:51:32.679527 2585 topology_manager.go:210] "Topology Admit Handler"
Feb 13 03:51:32.688931 kubelet[2585]: I0213 03:51:32.688897 2585 topology_manager.go:210] "Topology Admit Handler"
Feb 13 03:51:32.691320 systemd[1]: Created slice kubepods-besteffort-pode97a5ab1_7837_4e8a_aac8_c8dfdd3a23db.slice.
Feb 13 03:51:32.713030 systemd[1]: Created slice kubepods-burstable-podaae4fc55_28a9_499f_a1e6_1b669e3cc369.slice.
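[Annotation] The Pod CIDR pushed into the runtime at 03:51:31.785 comes from the controller-manager's node allocation; it can be cross-checked from the API side with a one-liner, assuming kubectl access to this cluster:

    kubectl get node ci-3510.3.2-a-fff065a016 -o jsonpath='{.spec.podCIDR}'
    # expected output, matching the log line above: 192.168.0.0/24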
Feb 13 03:51:32.741227 kubelet[2585]: I0213 03:51:32.741199 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-host-proc-sys-net\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.741406 kubelet[2585]: I0213 03:51:32.741243 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-lib-modules\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.741406 kubelet[2585]: I0213 03:51:32.741272 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-etc-cni-netd\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.741406 kubelet[2585]: I0213 03:51:32.741297 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-config-path\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.741406 kubelet[2585]: I0213 03:51:32.741392 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-run\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.741725 kubelet[2585]: I0213 03:51:32.741447 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-hostproc\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.741725 kubelet[2585]: I0213 03:51:32.741484 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cni-path\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.741725 kubelet[2585]: I0213 03:51:32.741512 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aae4fc55-28a9-499f-a1e6-1b669e3cc369-hubble-tls\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.741725 kubelet[2585]: I0213 03:51:32.741538 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clmpv\" (UniqueName: \"kubernetes.io/projected/aae4fc55-28a9-499f-a1e6-1b669e3cc369-kube-api-access-clmpv\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.741725 kubelet[2585]: I0213 03:51:32.741566 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db-kube-proxy\") pod \"kube-proxy-5gg5g\" (UID: \"e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db\") " pod="kube-system/kube-proxy-5gg5g"
Feb 13 03:51:32.741725 kubelet[2585]: I0213 03:51:32.741591 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aae4fc55-28a9-499f-a1e6-1b669e3cc369-clustermesh-secrets\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.742103 kubelet[2585]: I0213 03:51:32.741616 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-host-proc-sys-kernel\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.742103 kubelet[2585]: I0213 03:51:32.741642 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-cgroup\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.742103 kubelet[2585]: I0213 03:51:32.741702 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db-lib-modules\") pod \"kube-proxy-5gg5g\" (UID: \"e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db\") " pod="kube-system/kube-proxy-5gg5g"
Feb 13 03:51:32.742103 kubelet[2585]: I0213 03:51:32.741773 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-bpf-maps\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.742103 kubelet[2585]: I0213 03:51:32.741806 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db-xtables-lock\") pod \"kube-proxy-5gg5g\" (UID: \"e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db\") " pod="kube-system/kube-proxy-5gg5g"
Feb 13 03:51:32.742353 kubelet[2585]: I0213 03:51:32.741841 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29nlm\" (UniqueName: \"kubernetes.io/projected/e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db-kube-api-access-29nlm\") pod \"kube-proxy-5gg5g\" (UID: \"e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db\") " pod="kube-system/kube-proxy-5gg5g"
Feb 13 03:51:32.742353 kubelet[2585]: I0213 03:51:32.741874 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-xtables-lock\") pod \"cilium-z7w72\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") " pod="kube-system/cilium-z7w72"
Feb 13 03:51:32.763590 kubelet[2585]: I0213 03:51:32.763551 2585 topology_manager.go:210] "Topology Admit Handler"
Feb 13 03:51:32.769620 systemd[1]: Created slice kubepods-besteffort-podacd663c6_00a6_4e48_9a65_42cb478b7569.slice.
Feb 13 03:51:32.842987 kubelet[2585]: I0213 03:51:32.842927 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7xnj\" (UniqueName: \"kubernetes.io/projected/acd663c6-00a6-4e48-9a65-42cb478b7569-kube-api-access-h7xnj\") pod \"cilium-operator-f59cbd8c6-5psks\" (UID: \"acd663c6-00a6-4e48-9a65-42cb478b7569\") " pod="kube-system/cilium-operator-f59cbd8c6-5psks"
Feb 13 03:51:32.843815 kubelet[2585]: I0213 03:51:32.843411 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd663c6-00a6-4e48-9a65-42cb478b7569-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-5psks\" (UID: \"acd663c6-00a6-4e48-9a65-42cb478b7569\") " pod="kube-system/cilium-operator-f59cbd8c6-5psks"
Feb 13 03:51:33.316737 env[1480]: time="2024-02-13T03:51:33.316596045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7w72,Uid:aae4fc55-28a9-499f-a1e6-1b669e3cc369,Namespace:kube-system,Attempt:0,}"
Feb 13 03:51:33.343517 env[1480]: time="2024-02-13T03:51:33.343287373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 03:51:33.343517 env[1480]: time="2024-02-13T03:51:33.343401353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 03:51:33.343517 env[1480]: time="2024-02-13T03:51:33.343452668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 03:51:33.344051 env[1480]: time="2024-02-13T03:51:33.343855495Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca pid=2775 runtime=io.containerd.runc.v2
Feb 13 03:51:33.369867 systemd[1]: Started cri-containerd-0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca.scope.
Feb 13 03:51:33.419706 env[1480]: time="2024-02-13T03:51:33.419573934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7w72,Uid:aae4fc55-28a9-499f-a1e6-1b669e3cc369,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\""
Feb 13 03:51:33.423097 env[1480]: time="2024-02-13T03:51:33.423027884Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 03:51:33.609690 env[1480]: time="2024-02-13T03:51:33.609462830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gg5g,Uid:e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db,Namespace:kube-system,Attempt:0,}"
Feb 13 03:51:33.633360 env[1480]: time="2024-02-13T03:51:33.633161800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 03:51:33.633360 env[1480]: time="2024-02-13T03:51:33.633255951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 03:51:33.633360 env[1480]: time="2024-02-13T03:51:33.633295651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 03:51:33.633939 env[1480]: time="2024-02-13T03:51:33.633707610Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/203ddf765d0bb7d55c8ea2e5d7af2219dbc8d5b4647d8e96770699b2c8d3cd6f pid=2816 runtime=io.containerd.runc.v2
Feb 13 03:51:33.661948 systemd[1]: Started cri-containerd-203ddf765d0bb7d55c8ea2e5d7af2219dbc8d5b4647d8e96770699b2c8d3cd6f.scope.
Feb 13 03:51:33.673526 env[1480]: time="2024-02-13T03:51:33.673451206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-5psks,Uid:acd663c6-00a6-4e48-9a65-42cb478b7569,Namespace:kube-system,Attempt:0,}"
Feb 13 03:51:33.691695 env[1480]: time="2024-02-13T03:51:33.691523238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 03:51:33.691695 env[1480]: time="2024-02-13T03:51:33.691621334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 03:51:33.691695 env[1480]: time="2024-02-13T03:51:33.691658903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 03:51:33.692081 env[1480]: time="2024-02-13T03:51:33.691965242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130 pid=2850 runtime=io.containerd.runc.v2
Feb 13 03:51:33.698326 env[1480]: time="2024-02-13T03:51:33.698232964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gg5g,Uid:e97a5ab1-7837-4e8a-aac8-c8dfdd3a23db,Namespace:kube-system,Attempt:0,} returns sandbox id \"203ddf765d0bb7d55c8ea2e5d7af2219dbc8d5b4647d8e96770699b2c8d3cd6f\""
Feb 13 03:51:33.702133 env[1480]: time="2024-02-13T03:51:33.702048377Z" level=info msg="CreateContainer within sandbox \"203ddf765d0bb7d55c8ea2e5d7af2219dbc8d5b4647d8e96770699b2c8d3cd6f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 03:51:33.712391 systemd[1]: Started cri-containerd-eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130.scope.
Feb 13 03:51:33.719851 env[1480]: time="2024-02-13T03:51:33.719746770Z" level=info msg="CreateContainer within sandbox \"203ddf765d0bb7d55c8ea2e5d7af2219dbc8d5b4647d8e96770699b2c8d3cd6f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d0661a61b24115dc3390bb237046d6860b8851c4b77bc9b7899b66030ea17d4f\""
Feb 13 03:51:33.720619 env[1480]: time="2024-02-13T03:51:33.720554615Z" level=info msg="StartContainer for \"d0661a61b24115dc3390bb237046d6860b8851c4b77bc9b7899b66030ea17d4f\""
Feb 13 03:51:33.737916 systemd[1]: Started cri-containerd-d0661a61b24115dc3390bb237046d6860b8851c4b77bc9b7899b66030ea17d4f.scope.
Feb 13 03:51:33.755606 env[1480]: time="2024-02-13T03:51:33.755578705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-5psks,Uid:acd663c6-00a6-4e48-9a65-42cb478b7569,Namespace:kube-system,Attempt:0,} returns sandbox id \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\""
Feb 13 03:51:33.755970 env[1480]: time="2024-02-13T03:51:33.755951173Z" level=info msg="StartContainer for \"d0661a61b24115dc3390bb237046d6860b8851c4b77bc9b7899b66030ea17d4f\" returns successfully"
Feb 13 03:51:34.386389 kubelet[2585]: I0213 03:51:34.386351 2585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5gg5g" podStartSLOduration=2.386326875 pod.CreationTimestamp="2024-02-13 03:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 03:51:34.386235338 +0000 UTC m=+15.481721925" watchObservedRunningTime="2024-02-13 03:51:34.386326875 +0000 UTC m=+15.481813458"
Feb 13 03:51:37.518323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586456883.mount: Deactivated successfully.
Feb 13 03:51:39.228141 env[1480]: time="2024-02-13T03:51:39.228087985Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 03:51:39.228766 env[1480]: time="2024-02-13T03:51:39.228726711Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 03:51:39.229943 env[1480]: time="2024-02-13T03:51:39.229923285Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 03:51:39.230579 env[1480]: time="2024-02-13T03:51:39.230563685Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 03:51:39.231121 env[1480]: time="2024-02-13T03:51:39.231058740Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 03:51:39.231841 env[1480]: time="2024-02-13T03:51:39.231822344Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 03:51:39.236088 env[1480]: time="2024-02-13T03:51:39.236045646Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\""
Feb 13 03:51:39.236304 env[1480]: time="2024-02-13T03:51:39.236294250Z" level=info msg="StartContainer for \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\""
Feb 13 03:51:39.245700 systemd[1]: Started cri-containerd-bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba.scope.
Feb 13 03:51:39.258037 env[1480]: time="2024-02-13T03:51:39.257985972Z" level=info msg="StartContainer for \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\" returns successfully"
Feb 13 03:51:39.263036 systemd[1]: cri-containerd-bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba.scope: Deactivated successfully.
Feb 13 03:51:40.238862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba-rootfs.mount: Deactivated successfully.
Feb 13 03:51:40.371782 env[1480]: time="2024-02-13T03:51:40.371637266Z" level=info msg="shim disconnected" id=bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba
Feb 13 03:51:40.371782 env[1480]: time="2024-02-13T03:51:40.371744120Z" level=warning msg="cleaning up after shim disconnected" id=bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba namespace=k8s.io
Feb 13 03:51:40.371782 env[1480]: time="2024-02-13T03:51:40.371775192Z" level=info msg="cleaning up dead shim"
Feb 13 03:51:40.387163 env[1480]: time="2024-02-13T03:51:40.387038643Z" level=warning msg="cleanup warnings time=\"2024-02-13T03:51:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3091 runtime=io.containerd.runc.v2\n"
Feb 13 03:51:41.033162 env[1480]: time="2024-02-13T03:51:41.033037002Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 03:51:41.043960 env[1480]: time="2024-02-13T03:51:41.043894334Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\""
Feb 13 03:51:41.044295 env[1480]: time="2024-02-13T03:51:41.044281101Z" level=info msg="StartContainer for \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\""
Feb 13 03:51:41.053071 systemd[1]: Started cri-containerd-2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770.scope.
Feb 13 03:51:41.064350 env[1480]: time="2024-02-13T03:51:41.064318162Z" level=info msg="StartContainer for \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\" returns successfully"
Feb 13 03:51:41.070465 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 03:51:41.070589 systemd[1]: Stopped systemd-sysctl.service.
Feb 13 03:51:41.070676 systemd[1]: Stopping systemd-sysctl.service...
Feb 13 03:51:41.071530 systemd[1]: Starting systemd-sysctl.service...
Feb 13 03:51:41.071673 systemd[1]: cri-containerd-2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770.scope: Deactivated successfully.
Feb 13 03:51:41.075649 systemd[1]: Finished systemd-sysctl.service.
Feb 13 03:51:41.098456 env[1480]: time="2024-02-13T03:51:41.098320961Z" level=info msg="shim disconnected" id=2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770
Feb 13 03:51:41.098812 env[1480]: time="2024-02-13T03:51:41.098457438Z" level=warning msg="cleaning up after shim disconnected" id=2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770 namespace=k8s.io
Feb 13 03:51:41.098812 env[1480]: time="2024-02-13T03:51:41.098488624Z" level=info msg="cleaning up dead shim"
Feb 13 03:51:41.113864 env[1480]: time="2024-02-13T03:51:41.113753755Z" level=warning msg="cleanup warnings time=\"2024-02-13T03:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3153 runtime=io.containerd.runc.v2\n"
Feb 13 03:51:41.238986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770-rootfs.mount: Deactivated successfully.
Feb 13 03:51:41.322250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134473197.mount: Deactivated successfully.
Feb 13 03:51:42.039069 env[1480]: time="2024-02-13T03:51:42.038964402Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 03:51:42.051677 env[1480]: time="2024-02-13T03:51:42.051629660Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\""
Feb 13 03:51:42.052068 env[1480]: time="2024-02-13T03:51:42.052008121Z" level=info msg="StartContainer for \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\""
Feb 13 03:51:42.061025 systemd[1]: Started cri-containerd-53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07.scope.
Feb 13 03:51:42.077783 systemd[1]: cri-containerd-53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07.scope: Deactivated successfully.
Feb 13 03:51:42.078913 env[1480]: time="2024-02-13T03:51:42.078891283Z" level=info msg="StartContainer for \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\" returns successfully"
Feb 13 03:51:42.220215 env[1480]: time="2024-02-13T03:51:42.220075096Z" level=info msg="shim disconnected" id=53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07
Feb 13 03:51:42.220215 env[1480]: time="2024-02-13T03:51:42.220179580Z" level=warning msg="cleaning up after shim disconnected" id=53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07 namespace=k8s.io
Feb 13 03:51:42.220215 env[1480]: time="2024-02-13T03:51:42.220210352Z" level=info msg="cleaning up dead shim"
Feb 13 03:51:42.237853 env[1480]: time="2024-02-13T03:51:42.237772641Z" level=warning msg="cleanup warnings time=\"2024-02-13T03:51:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3210 runtime=io.containerd.runc.v2\n"
Feb 13 03:51:42.564698 env[1480]: time="2024-02-13T03:51:42.564673077Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 03:51:42.565322 env[1480]: time="2024-02-13T03:51:42.565309946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 03:51:42.566183 env[1480]: time="2024-02-13T03:51:42.566169272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 03:51:42.566497 env[1480]: time="2024-02-13T03:51:42.566482437Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 03:51:42.567876 env[1480]: time="2024-02-13T03:51:42.567802120Z" level=info msg="CreateContainer within sandbox \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 03:51:42.572784 env[1480]: time="2024-02-13T03:51:42.572742312Z" level=info msg="CreateContainer within sandbox \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\""
Feb 13 03:51:42.573129 env[1480]: time="2024-02-13T03:51:42.573021712Z" level=info msg="StartContainer for \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\""
Feb 13 03:51:42.573376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240005532.mount: Deactivated successfully.
Feb 13 03:51:42.581418 systemd[1]: Started cri-containerd-307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3.scope.
Feb 13 03:51:42.593814 env[1480]: time="2024-02-13T03:51:42.593779749Z" level=info msg="StartContainer for \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\" returns successfully"
Feb 13 03:51:43.038409 env[1480]: time="2024-02-13T03:51:43.038344478Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 03:51:43.042875 kubelet[2585]: I0213 03:51:43.042825 2585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-5psks" podStartSLOduration=-9.223372025811985e+09 pod.CreationTimestamp="2024-02-13 03:51:32 +0000 UTC" firstStartedPulling="2024-02-13 03:51:33.756118251 +0000 UTC m=+14.851604835" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 03:51:43.042228632 +0000 UTC m=+24.137715223" watchObservedRunningTime="2024-02-13 03:51:43.042791566 +0000 UTC m=+24.138278148"
Feb 13 03:51:43.043932 env[1480]: time="2024-02-13T03:51:43.043886900Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\""
Feb 13 03:51:43.044237 env[1480]: time="2024-02-13T03:51:43.044224650Z" level=info msg="StartContainer for \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\""
Feb 13 03:51:43.064935 systemd[1]: Started cri-containerd-5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578.scope.
Feb 13 03:51:43.076647 env[1480]: time="2024-02-13T03:51:43.076587993Z" level=info msg="StartContainer for \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\" returns successfully"
Feb 13 03:51:43.077508 systemd[1]: cri-containerd-5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578.scope: Deactivated successfully.
Feb 13 03:51:43.101800 env[1480]: time="2024-02-13T03:51:43.101737595Z" level=info msg="shim disconnected" id=5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578
Feb 13 03:51:43.101800 env[1480]: time="2024-02-13T03:51:43.101764002Z" level=warning msg="cleaning up after shim disconnected" id=5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578 namespace=k8s.io
Feb 13 03:51:43.101800 env[1480]: time="2024-02-13T03:51:43.101769936Z" level=info msg="cleaning up dead shim"
Feb 13 03:51:43.105232 env[1480]: time="2024-02-13T03:51:43.105214560Z" level=warning msg="cleanup warnings time=\"2024-02-13T03:51:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3314 runtime=io.containerd.runc.v2\n"
Feb 13 03:51:44.040725 env[1480]: time="2024-02-13T03:51:44.040677247Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 03:51:44.048933 env[1480]: time="2024-02-13T03:51:44.048558802Z" level=info msg="CreateContainer within sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\""
Feb 13 03:51:44.049328 env[1480]: time="2024-02-13T03:51:44.049312340Z" level=info msg="StartContainer for \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\""
Feb 13 03:51:44.058441 systemd[1]: Started cri-containerd-c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789.scope.
Feb 13 03:51:44.070602 env[1480]: time="2024-02-13T03:51:44.070574150Z" level=info msg="StartContainer for \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\" returns successfully"
Feb 13 03:51:44.126442 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 03:51:44.139746 kubelet[2585]: I0213 03:51:44.139732 2585 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 13 03:51:44.152820 kubelet[2585]: I0213 03:51:44.152798 2585 topology_manager.go:210] "Topology Admit Handler"
Feb 13 03:51:44.153563 kubelet[2585]: I0213 03:51:44.153550 2585 topology_manager.go:210] "Topology Admit Handler"
Feb 13 03:51:44.155928 systemd[1]: Created slice kubepods-burstable-pod2087a203_9ee7_4576_9861_90c60181c91e.slice.
Feb 13 03:51:44.158072 systemd[1]: Created slice kubepods-burstable-podba8c3d77_3c54_42fe_8e6a_615d9b68e96d.slice.
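[Annotation] The create/start/deactivate/shim-disconnected cycles between 03:51:39 and 03:51:43 are the Cilium pod's init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each running to completion in turn, not crashes; only cilium-agent stays resident. If crictl is available on the node (an assumption, it is not shown in this log), the same sequence can be inspected after the fact using the IDs from the log:

    crictl pods --name cilium-z7w72          # sandbox: 0d2fa2b92a59...
    crictl ps -a --pod 0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca   # lists the exited init containers plus cilium-agent
    crictl logs c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789          # cilium-agent output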
Feb 13 03:51:44.220711 kubelet[2585]: I0213 03:51:44.220653 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba8c3d77-3c54-42fe-8e6a-615d9b68e96d-config-volume\") pod \"coredns-787d4945fb-jzpgl\" (UID: \"ba8c3d77-3c54-42fe-8e6a-615d9b68e96d\") " pod="kube-system/coredns-787d4945fb-jzpgl"
Feb 13 03:51:44.220711 kubelet[2585]: I0213 03:51:44.220680 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2087a203-9ee7-4576-9861-90c60181c91e-config-volume\") pod \"coredns-787d4945fb-b7hd8\" (UID: \"2087a203-9ee7-4576-9861-90c60181c91e\") " pod="kube-system/coredns-787d4945fb-b7hd8"
Feb 13 03:51:44.220711 kubelet[2585]: I0213 03:51:44.220693 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dxgc\" (UniqueName: \"kubernetes.io/projected/2087a203-9ee7-4576-9861-90c60181c91e-kube-api-access-9dxgc\") pod \"coredns-787d4945fb-b7hd8\" (UID: \"2087a203-9ee7-4576-9861-90c60181c91e\") " pod="kube-system/coredns-787d4945fb-b7hd8"
Feb 13 03:51:44.220711 kubelet[2585]: I0213 03:51:44.220716 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ljjk\" (UniqueName: \"kubernetes.io/projected/ba8c3d77-3c54-42fe-8e6a-615d9b68e96d-kube-api-access-7ljjk\") pod \"coredns-787d4945fb-jzpgl\" (UID: \"ba8c3d77-3c54-42fe-8e6a-615d9b68e96d\") " pod="kube-system/coredns-787d4945fb-jzpgl"
Feb 13 03:51:44.266379 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 03:51:44.459181 env[1480]: time="2024-02-13T03:51:44.459051715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b7hd8,Uid:2087a203-9ee7-4576-9861-90c60181c91e,Namespace:kube-system,Attempt:0,}"
Feb 13 03:51:44.460130 env[1480]: time="2024-02-13T03:51:44.460022156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jzpgl,Uid:ba8c3d77-3c54-42fe-8e6a-615d9b68e96d,Namespace:kube-system,Attempt:0,}"
Feb 13 03:51:45.082084 kubelet[2585]: I0213 03:51:45.082001 2585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-z7w72" podStartSLOduration=-9.223372023772877e+09 pod.CreationTimestamp="2024-02-13 03:51:32 +0000 UTC" firstStartedPulling="2024-02-13 03:51:33.421999813 +0000 UTC m=+14.517486467" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 03:51:45.081193814 +0000 UTC m=+26.176680507" watchObservedRunningTime="2024-02-13 03:51:45.081899896 +0000 UTC m=+26.177386531"
Feb 13 03:51:45.861224 systemd-networkd[1312]: cilium_host: Link UP
Feb 13 03:51:45.861593 systemd-networkd[1312]: cilium_net: Link UP
Feb 13 03:51:45.875745 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 13 03:51:45.875907 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 13 03:51:45.876316 systemd-networkd[1312]: cilium_net: Gained carrier
Feb 13 03:51:45.876556 systemd-networkd[1312]: cilium_host: Gained carrier
Feb 13 03:51:45.920684 systemd-networkd[1312]: cilium_vxlan: Link UP
Feb 13 03:51:45.920688 systemd-networkd[1312]: cilium_vxlan: Gained carrier
Feb 13 03:51:46.051380 kernel: NET: Registered PF_ALG protocol family
Feb 13 03:51:46.485003 systemd-networkd[1312]: lxc_health: Link UP
Feb 13 03:51:46.506392 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 13 03:51:46.506443 systemd-networkd[1312]: lxc_health: Gained carrier
Feb 13 03:51:46.533527 systemd-networkd[1312]: cilium_net: Gained IPv6LL
Feb 13 03:51:46.725509 systemd-networkd[1312]: cilium_host: Gained IPv6LL
Feb 13 03:51:47.014222 systemd-networkd[1312]: lxc208404b3b11f: Link UP
Feb 13 03:51:47.014314 systemd-networkd[1312]: lxc1cbf9fb9c11b: Link UP
Feb 13 03:51:47.046441 kernel: eth0: renamed from tmp6339f
Feb 13 03:51:47.069375 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 13 03:51:47.069476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1cbf9fb9c11b: link becomes ready
Feb 13 03:51:47.077439 kernel: eth0: renamed from tmp6b08f
Feb 13 03:51:47.108062 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 13 03:51:47.108095 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc208404b3b11f: link becomes ready
Feb 13 03:51:47.108298 systemd-networkd[1312]: lxc1cbf9fb9c11b: Gained carrier
Feb 13 03:51:47.108408 systemd-networkd[1312]: lxc208404b3b11f: Gained carrier
Feb 13 03:51:47.621494 systemd-networkd[1312]: cilium_vxlan: Gained IPv6LL
Feb 13 03:51:48.261511 systemd-networkd[1312]: lxc_health: Gained IPv6LL
Feb 13 03:51:48.261666 systemd-networkd[1312]: lxc208404b3b11f: Gained IPv6LL
Feb 13 03:51:48.261778 systemd-networkd[1312]: lxc1cbf9fb9c11b: Gained IPv6LL
Feb 13 03:51:49.390186 env[1480]: time="2024-02-13T03:51:49.390136309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 03:51:49.390186 env[1480]: time="2024-02-13T03:51:49.390172277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 03:51:49.390186 env[1480]: time="2024-02-13T03:51:49.390182427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 03:51:49.390466 env[1480]: time="2024-02-13T03:51:49.390246114Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6339f2d5ed1aefae7337b53e46c18ba0d471ef3829d2c65f99e5b5dc633f9ffd pid=4011 runtime=io.containerd.runc.v2
Feb 13 03:51:49.390466 env[1480]: time="2024-02-13T03:51:49.390319069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 03:51:49.390466 env[1480]: time="2024-02-13T03:51:49.390336884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 03:51:49.390466 env[1480]: time="2024-02-13T03:51:49.390344050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 03:51:49.390466 env[1480]: time="2024-02-13T03:51:49.390405253Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b08f2cfa1230427bad06159a5da77fc57825472d33c1f40582d2d9b6a5e07da pid=4012 runtime=io.containerd.runc.v2
Feb 13 03:51:49.398940 systemd[1]: Started cri-containerd-6339f2d5ed1aefae7337b53e46c18ba0d471ef3829d2c65f99e5b5dc633f9ffd.scope.
Feb 13 03:51:49.399735 systemd[1]: Started cri-containerd-6b08f2cfa1230427bad06159a5da77fc57825472d33c1f40582d2d9b6a5e07da.scope.
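[Annotation] The repeated "Spectre V2 : WARNING: Unprivileged eBPF is enabled" kernel lines above fire each time a BPF program is loaded while kernel.unprivileged_bpf_disabled=0. Cilium loads its programs with root privileges, so unprivileged BPF can usually be switched off without affecting it; a hedged sketch (verify first that no workload relies on unprivileged BPF):

    echo 'kernel.unprivileged_bpf_disabled = 1' >/etc/sysctl.d/90-bpf.conf
    sysctl -p /etc/sysctl.d/90-bpf.conf
    # value 1 disables unprivileged BPF and cannot be reverted without a reboot;
    # value 2 also disables it but allows re-enabling at runtime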
Feb 13 03:51:49.421501 env[1480]: time="2024-02-13T03:51:49.421448423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b7hd8,Uid:2087a203-9ee7-4576-9861-90c60181c91e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b08f2cfa1230427bad06159a5da77fc57825472d33c1f40582d2d9b6a5e07da\""
Feb 13 03:51:49.421880 env[1480]: time="2024-02-13T03:51:49.421860925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jzpgl,Uid:ba8c3d77-3c54-42fe-8e6a-615d9b68e96d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6339f2d5ed1aefae7337b53e46c18ba0d471ef3829d2c65f99e5b5dc633f9ffd\""
Feb 13 03:51:49.422866 env[1480]: time="2024-02-13T03:51:49.422850775Z" level=info msg="CreateContainer within sandbox \"6b08f2cfa1230427bad06159a5da77fc57825472d33c1f40582d2d9b6a5e07da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 03:51:49.422915 env[1480]: time="2024-02-13T03:51:49.422887326Z" level=info msg="CreateContainer within sandbox \"6339f2d5ed1aefae7337b53e46c18ba0d471ef3829d2c65f99e5b5dc633f9ffd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 03:51:49.428135 env[1480]: time="2024-02-13T03:51:49.428089860Z" level=info msg="CreateContainer within sandbox \"6b08f2cfa1230427bad06159a5da77fc57825472d33c1f40582d2d9b6a5e07da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a725086ccda6f786cc32b433a292ebf6b07be7be1c9c26578cae15b1e143a2b5\""
Feb 13 03:51:49.428362 env[1480]: time="2024-02-13T03:51:49.428342265Z" level=info msg="StartContainer for \"a725086ccda6f786cc32b433a292ebf6b07be7be1c9c26578cae15b1e143a2b5\""
Feb 13 03:51:49.429017 env[1480]: time="2024-02-13T03:51:49.428975499Z" level=info msg="CreateContainer within sandbox \"6339f2d5ed1aefae7337b53e46c18ba0d471ef3829d2c65f99e5b5dc633f9ffd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2a15d809055b337db818607ba1c0f45b28f2f50af035e4311413247bd90c1bc\""
Feb 13 03:51:49.429181 env[1480]: time="2024-02-13T03:51:49.429165860Z" level=info msg="StartContainer for \"a2a15d809055b337db818607ba1c0f45b28f2f50af035e4311413247bd90c1bc\""
Feb 13 03:51:49.449401 systemd[1]: Started cri-containerd-a725086ccda6f786cc32b433a292ebf6b07be7be1c9c26578cae15b1e143a2b5.scope.
Feb 13 03:51:49.450870 systemd[1]: Started cri-containerd-a2a15d809055b337db818607ba1c0f45b28f2f50af035e4311413247bd90c1bc.scope.
Feb 13 03:51:49.463116 env[1480]: time="2024-02-13T03:51:49.463086041Z" level=info msg="StartContainer for \"a725086ccda6f786cc32b433a292ebf6b07be7be1c9c26578cae15b1e143a2b5\" returns successfully"
Feb 13 03:51:49.463259 env[1480]: time="2024-02-13T03:51:49.463221359Z" level=info msg="StartContainer for \"a2a15d809055b337db818607ba1c0f45b28f2f50af035e4311413247bd90c1bc\" returns successfully"
Feb 13 03:51:50.079182 kubelet[2585]: I0213 03:51:50.079092 2585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-jzpgl" podStartSLOduration=18.07901301 pod.CreationTimestamp="2024-02-13 03:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 03:51:50.078027353 +0000 UTC m=+31.173514008" watchObservedRunningTime="2024-02-13 03:51:50.07901301 +0000 UTC m=+31.174499659"
Feb 13 03:51:50.092065 kubelet[2585]: I0213 03:51:50.092027 2585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-b7hd8" podStartSLOduration=18.091985382 pod.CreationTimestamp="2024-02-13 03:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 03:51:50.091785893 +0000 UTC m=+31.187272485" watchObservedRunningTime="2024-02-13 03:51:50.091985382 +0000 UTC m=+31.187471965"
Feb 13 03:53:24.092211 systemd[1]: Started sshd@7-139.178.90.101:22-141.98.11.11:16318.service.
Feb 13 03:53:25.239756 sshd[4235]: Invalid user admin from 141.98.11.11 port 16318
Feb 13 03:53:25.488909 sshd[4235]: pam_faillock(sshd:auth): User unknown
Feb 13 03:53:25.490051 sshd[4235]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 03:53:25.490203 sshd[4235]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.11
Feb 13 03:53:25.491315 sshd[4235]: pam_faillock(sshd:auth): User unknown
Feb 13 03:53:27.434623 sshd[4235]: Failed password for invalid user admin from 141.98.11.11 port 16318 ssh2
Feb 13 03:53:27.826922 sshd[4235]: Connection closed by invalid user admin 141.98.11.11 port 16318 [preauth]
Feb 13 03:53:27.829486 systemd[1]: sshd@7-139.178.90.101:22-141.98.11.11:16318.service: Deactivated successfully.
Feb 13 03:56:01.665303 update_engine[1469]: I0213 03:56:01.665181 1469 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 03:56:01.665303 update_engine[1469]: I0213 03:56:01.665258 1469 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 03:56:01.673730 update_engine[1469]: I0213 03:56:01.671710 1469 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 03:56:01.673730 update_engine[1469]: I0213 03:56:01.673266 1469 omaha_request_params.cc:62] Current group set to lts
Feb 13 03:56:01.673730 update_engine[1469]: I0213 03:56:01.673607 1469 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 03:56:01.673730 update_engine[1469]: I0213 03:56:01.673627 1469 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 03:56:01.673730 update_engine[1469]: I0213 03:56:01.673662 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 03:56:01.673730 update_engine[1469]: I0213 03:56:01.673732 1469 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 03:56:01.674296 update_engine[1469]: I0213 03:56:01.673887 1469 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 13 03:56:01.674296 update_engine[1469]: I0213 03:56:01.673903 1469 omaha_request_action.cc:271] Request:
Feb 13 03:56:01.674296 update_engine[1469]:
Feb 13 03:56:01.674296 update_engine[1469]:
Feb 13 03:56:01.674296 update_engine[1469]:
Feb 13 03:56:01.674296 update_engine[1469]:
Feb 13 03:56:01.674296 update_engine[1469]:
Feb 13 03:56:01.674296 update_engine[1469]:
Feb 13 03:56:01.674296 update_engine[1469]:
Feb 13 03:56:01.674296 update_engine[1469]:
Feb 13 03:56:01.674296 update_engine[1469]: I0213 03:56:01.673913 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 03:56:01.675319 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 03:56:01.676985 update_engine[1469]: I0213 03:56:01.676895 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 03:56:01.677190 update_engine[1469]: E0213 03:56:01.677119 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 03:56:01.677314 update_engine[1469]: I0213 03:56:01.677271 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 03:56:11.574777 update_engine[1469]: I0213 03:56:11.574649 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 03:56:11.575715 update_engine[1469]: I0213 03:56:11.575098 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 03:56:11.575715 update_engine[1469]: E0213 03:56:11.575297 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 03:56:11.575715 update_engine[1469]: I0213 03:56:11.575495 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 03:56:21.574740 update_engine[1469]: I0213 03:56:21.574621 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 03:56:21.575669 update_engine[1469]: I0213 03:56:21.575102 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 03:56:21.575669 update_engine[1469]: E0213 03:56:21.575299 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 03:56:21.575669 update_engine[1469]: I0213 03:56:21.575491 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 03:56:31.574986 update_engine[1469]: I0213 03:56:31.574869 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 03:56:31.575953 update_engine[1469]: I0213 03:56:31.575350 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 03:56:31.575953 update_engine[1469]: E0213 03:56:31.575592 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 03:56:31.575953 update_engine[1469]: I0213 03:56:31.575739 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 03:56:31.575953 update_engine[1469]: I0213 03:56:31.575753 1469 omaha_request_action.cc:621] Omaha request response:
Feb 13 03:56:31.575953 update_engine[1469]: E0213 03:56:31.575899 1469 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 13 03:56:31.575953 update_engine[1469]: I0213 03:56:31.575927 1469 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 03:56:31.575953 update_engine[1469]: I0213 03:56:31.575935 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 03:56:31.575953 update_engine[1469]: I0213 03:56:31.575943 1469 update_attempter.cc:306] Processing Done. Feb 13 03:56:31.576785 update_engine[1469]: E0213 03:56:31.575969 1469 update_attempter.cc:619] Update failed. Feb 13 03:56:31.576785 update_engine[1469]: I0213 03:56:31.575979 1469 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 03:56:31.576785 update_engine[1469]: I0213 03:56:31.575989 1469 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 03:56:31.576785 update_engine[1469]: I0213 03:56:31.575997 1469 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 03:56:31.576785 update_engine[1469]: I0213 03:56:31.576150 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 03:56:31.576785 update_engine[1469]: I0213 03:56:31.576210 1469 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 03:56:31.576785 update_engine[1469]: I0213 03:56:31.576218 1469 omaha_request_action.cc:271] Request: Feb 13 03:56:31.576785 update_engine[1469]: Feb 13 03:56:31.576785 update_engine[1469]: Feb 13 03:56:31.576785 update_engine[1469]: Feb 13 03:56:31.576785 update_engine[1469]: Feb 13 03:56:31.576785 update_engine[1469]: Feb 13 03:56:31.576785 update_engine[1469]: Feb 13 03:56:31.576785 update_engine[1469]: I0213 03:56:31.576228 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 03:56:31.576785 update_engine[1469]: I0213 03:56:31.576561 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 03:56:31.576785 update_engine[1469]: E0213 03:56:31.576720 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 03:56:31.578230 update_engine[1469]: I0213 03:56:31.576851 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 03:56:31.578230 update_engine[1469]: I0213 03:56:31.576865 1469 omaha_request_action.cc:621] Omaha request response: Feb 13 03:56:31.578230 update_engine[1469]: I0213 03:56:31.576875 1469 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 03:56:31.578230 update_engine[1469]: I0213 03:56:31.576884 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 03:56:31.578230 update_engine[1469]: I0213 03:56:31.576890 1469 update_attempter.cc:306] Processing Done. Feb 13 03:56:31.578230 update_engine[1469]: I0213 03:56:31.576898 1469 update_attempter.cc:310] Error event sent. Feb 13 03:56:31.578230 update_engine[1469]: I0213 03:56:31.576920 1469 update_check_scheduler.cc:74] Next update check in 42m15s Feb 13 03:56:31.578901 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 03:56:31.578901 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 03:57:04.228527 systemd[1]: Started sshd@8-139.178.90.101:22-185.196.9.45:45442.service. 
Feb 13 03:57:05.284920 sshd[4273]: Invalid user ftp from 185.196.9.45 port 45442 Feb 13 03:57:05.291040 sshd[4273]: pam_faillock(sshd:auth): User unknown Feb 13 03:57:05.292170 sshd[4273]: pam_unix(sshd:auth): check pass; user unknown Feb 13 03:57:05.292260 sshd[4273]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.196.9.45 Feb 13 03:57:05.293186 sshd[4273]: pam_faillock(sshd:auth): User unknown Feb 13 03:57:07.106127 sshd[4273]: Failed password for invalid user ftp from 185.196.9.45 port 45442 ssh2 Feb 13 03:57:11.821463 sshd[4273]: pam_faillock(sshd:auth): User unknown Feb 13 03:57:11.822466 sshd[4273]: pam_unix(sshd:auth): check pass; user unknown Feb 13 03:57:11.823450 sshd[4273]: pam_faillock(sshd:auth): User unknown Feb 13 03:57:13.791984 sshd[4273]: Failed password for invalid user ftp from 185.196.9.45 port 45442 ssh2 Feb 13 03:57:18.360125 sshd[4273]: Failed password for invalid user ftp from 185.196.9.45 port 45442 ssh2 Feb 13 03:57:20.331407 systemd[1]: Started sshd@9-139.178.90.101:22-2.57.122.87:59522.service. Feb 13 03:57:21.072651 sshd[4279]: Invalid user hanzhang from 2.57.122.87 port 59522 Feb 13 03:57:21.259082 sshd[4279]: pam_faillock(sshd:auth): User unknown Feb 13 03:57:21.260178 sshd[4279]: pam_unix(sshd:auth): check pass; user unknown Feb 13 03:57:21.260270 sshd[4279]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=2.57.122.87 Feb 13 03:57:21.261156 sshd[4279]: pam_faillock(sshd:auth): User unknown Feb 13 03:57:21.518662 sshd[4273]: Connection closed by invalid user ftp 185.196.9.45 port 45442 [preauth] Feb 13 03:57:21.519130 sshd[4273]: PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.196.9.45 Feb 13 03:57:21.521205 systemd[1]: sshd@8-139.178.90.101:22-185.196.9.45:45442.service: Deactivated successfully. Feb 13 03:57:23.269639 sshd[4279]: Failed password for invalid user hanzhang from 2.57.122.87 port 59522 ssh2 Feb 13 03:57:25.335556 sshd[4279]: Connection closed by invalid user hanzhang 2.57.122.87 port 59522 [preauth] Feb 13 03:57:25.338076 systemd[1]: sshd@9-139.178.90.101:22-2.57.122.87:59522.service: Deactivated successfully. Feb 13 03:57:30.879786 kubelet[2585]: I0213 03:57:30.879664 2585 log.go:198] http: TLS handshake error from 162.142.125.214:59768: read tcp 139.178.90.101:10250->162.142.125.214:59768: read: connection reset by peer Feb 13 04:04:56.332399 systemd[1]: Starting systemd-tmpfiles-clean.service... Feb 13 04:04:56.344312 systemd-tmpfiles[4346]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 04:04:56.344584 systemd-tmpfiles[4346]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 04:04:56.345210 systemd-tmpfiles[4346]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 04:04:56.355726 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Feb 13 04:04:56.355814 systemd[1]: Finished systemd-tmpfiles-clean.service. Feb 13 04:04:56.356991 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Feb 13 04:07:29.632952 systemd[1]: Started sshd@10-139.178.90.101:22-2.57.122.87:50478.service. 
Feb 13 04:07:30.372947 sshd[4372]: Invalid user hanzhang from 2.57.122.87 port 50478 Feb 13 04:07:30.609482 sshd[4372]: pam_faillock(sshd:auth): User unknown Feb 13 04:07:30.610667 sshd[4372]: pam_unix(sshd:auth): check pass; user unknown Feb 13 04:07:30.610761 sshd[4372]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=2.57.122.87 Feb 13 04:07:30.611880 sshd[4372]: pam_faillock(sshd:auth): User unknown Feb 13 04:07:32.559599 sshd[4372]: Failed password for invalid user hanzhang from 2.57.122.87 port 50478 ssh2 Feb 13 04:07:34.682677 sshd[4372]: Connection closed by invalid user hanzhang 2.57.122.87 port 50478 [preauth] Feb 13 04:07:34.685163 systemd[1]: sshd@10-139.178.90.101:22-2.57.122.87:50478.service: Deactivated successfully. Feb 13 04:11:57.020600 systemd[1]: Started sshd@11-139.178.90.101:22-180.101.88.197:48745.service. Feb 13 04:11:58.119306 sshd[4409]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root Feb 13 04:12:00.193159 sshd[4409]: Failed password for root from 180.101.88.197 port 48745 ssh2 Feb 13 04:12:03.746157 sshd[4409]: Failed password for root from 180.101.88.197 port 48745 ssh2 Feb 13 04:12:07.769413 sshd[4409]: Failed password for root from 180.101.88.197 port 48745 ssh2 Feb 13 04:12:08.910210 systemd[1]: Started sshd@12-139.178.90.101:22-139.178.68.195:34670.service. Feb 13 04:12:08.976676 sshd[4414]: Accepted publickey for core from 139.178.68.195 port 34670 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:08.978095 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:08.982950 systemd-logind[1467]: New session 10 of user core. Feb 13 04:12:08.983951 systemd[1]: Started session-10.scope. Feb 13 04:12:09.120723 sshd[4414]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:09.122105 systemd[1]: sshd@12-139.178.90.101:22-139.178.68.195:34670.service: Deactivated successfully. Feb 13 04:12:09.122545 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 04:12:09.122958 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Feb 13 04:12:09.123340 systemd-logind[1467]: Removed session 10. Feb 13 04:12:09.526699 sshd[4409]: Received disconnect from 180.101.88.197 port 48745:11: [preauth] Feb 13 04:12:09.526699 sshd[4409]: Disconnected from authenticating user root 180.101.88.197 port 48745 [preauth] Feb 13 04:12:09.527238 sshd[4409]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root Feb 13 04:12:09.529238 systemd[1]: sshd@11-139.178.90.101:22-180.101.88.197:48745.service: Deactivated successfully. Feb 13 04:12:09.685655 systemd[1]: Started sshd@13-139.178.90.101:22-180.101.88.197:18257.service. Feb 13 04:12:10.688103 sshd[4443]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root Feb 13 04:12:13.077593 sshd[4443]: Failed password for root from 180.101.88.197 port 18257 ssh2 Feb 13 04:12:14.128341 systemd[1]: Started sshd@14-139.178.90.101:22-139.178.68.195:34678.service. Feb 13 04:12:14.158882 sshd[4447]: Accepted publickey for core from 139.178.68.195 port 34678 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:14.159806 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:14.163135 systemd-logind[1467]: New session 11 of user core. 
Feb 13 04:12:14.163841 systemd[1]: Started session-11.scope. Feb 13 04:12:14.276134 sshd[4447]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:14.277601 systemd[1]: sshd@14-139.178.90.101:22-139.178.68.195:34678.service: Deactivated successfully. Feb 13 04:12:14.278015 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 04:12:14.278333 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. Feb 13 04:12:14.278891 systemd-logind[1467]: Removed session 11. Feb 13 04:12:14.474632 sshd[4443]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 13 04:12:16.744002 sshd[4443]: Failed password for root from 180.101.88.197 port 18257 ssh2 Feb 13 04:12:16.828190 systemd[1]: Started sshd@15-139.178.90.101:22-141.98.11.90:58970.service. Feb 13 04:12:18.231431 sshd[4473]: Invalid user ubnt from 141.98.11.90 port 58970 Feb 13 04:12:18.469303 sshd[4473]: pam_faillock(sshd:auth): User unknown Feb 13 04:12:18.470435 sshd[4473]: pam_unix(sshd:auth): check pass; user unknown Feb 13 04:12:18.470527 sshd[4473]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.90 Feb 13 04:12:18.471494 sshd[4473]: pam_faillock(sshd:auth): User unknown Feb 13 04:12:19.285630 systemd[1]: Started sshd@16-139.178.90.101:22-139.178.68.195:35582.service. Feb 13 04:12:19.314774 sshd[4478]: Accepted publickey for core from 139.178.68.195 port 35582 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:19.315567 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:19.318126 systemd-logind[1467]: New session 12 of user core. Feb 13 04:12:19.318771 systemd[1]: Started session-12.scope. Feb 13 04:12:19.403505 sshd[4478]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:19.405089 systemd[1]: sshd@16-139.178.90.101:22-139.178.68.195:35582.service: Deactivated successfully. Feb 13 04:12:19.405591 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 04:12:19.405956 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. Feb 13 04:12:19.406365 systemd-logind[1467]: Removed session 12. Feb 13 04:12:20.412659 sshd[4443]: Failed password for root from 180.101.88.197 port 18257 ssh2 Feb 13 04:12:20.623149 sshd[4473]: Failed password for invalid user ubnt from 141.98.11.90 port 58970 ssh2 Feb 13 04:12:22.042968 sshd[4443]: Received disconnect from 180.101.88.197 port 18257:11: [preauth] Feb 13 04:12:22.042968 sshd[4443]: Disconnected from authenticating user root 180.101.88.197 port 18257 [preauth] Feb 13 04:12:22.043529 sshd[4443]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root Feb 13 04:12:22.045542 systemd[1]: sshd@13-139.178.90.101:22-180.101.88.197:18257.service: Deactivated successfully. Feb 13 04:12:22.122066 sshd[4473]: Connection closed by invalid user ubnt 141.98.11.90 port 58970 [preauth] Feb 13 04:12:22.125022 systemd[1]: sshd@15-139.178.90.101:22-141.98.11.90:58970.service: Deactivated successfully. Feb 13 04:12:22.192956 systemd[1]: Started sshd@17-139.178.90.101:22-180.101.88.197:36846.service. Feb 13 04:12:23.983832 sshd[4508]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root Feb 13 04:12:24.413342 systemd[1]: Started sshd@18-139.178.90.101:22-139.178.68.195:35592.service. 
Feb 13 04:12:24.442701 sshd[4511]: Accepted publickey for core from 139.178.68.195 port 35592 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:24.443467 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:24.446082 systemd-logind[1467]: New session 13 of user core. Feb 13 04:12:24.446590 systemd[1]: Started session-13.scope. Feb 13 04:12:24.570569 sshd[4511]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:24.572860 systemd[1]: sshd@18-139.178.90.101:22-139.178.68.195:35592.service: Deactivated successfully. Feb 13 04:12:24.573262 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 04:12:24.573669 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Feb 13 04:12:24.574400 systemd[1]: Started sshd@19-139.178.90.101:22-139.178.68.195:35602.service. Feb 13 04:12:24.574859 systemd-logind[1467]: Removed session 13. Feb 13 04:12:24.605775 sshd[4538]: Accepted publickey for core from 139.178.68.195 port 35602 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:24.606560 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:24.609357 systemd-logind[1467]: New session 14 of user core. Feb 13 04:12:24.610029 systemd[1]: Started session-14.scope. Feb 13 04:12:25.125097 sshd[4538]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:25.127531 systemd[1]: sshd@19-139.178.90.101:22-139.178.68.195:35602.service: Deactivated successfully. Feb 13 04:12:25.128013 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 04:12:25.128384 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Feb 13 04:12:25.129123 systemd[1]: Started sshd@20-139.178.90.101:22-139.178.68.195:35610.service. Feb 13 04:12:25.129586 systemd-logind[1467]: Removed session 14. Feb 13 04:12:25.159166 sshd[4561]: Accepted publickey for core from 139.178.68.195 port 35610 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:25.159952 sshd[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:25.162144 systemd-logind[1467]: New session 15 of user core. Feb 13 04:12:25.162683 systemd[1]: Started session-15.scope. Feb 13 04:12:25.296127 sshd[4561]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:25.300125 systemd[1]: sshd@20-139.178.90.101:22-139.178.68.195:35610.service: Deactivated successfully. Feb 13 04:12:25.301388 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 04:12:25.302512 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit. Feb 13 04:12:25.304019 systemd-logind[1467]: Removed session 15. Feb 13 04:12:26.488644 sshd[4508]: Failed password for root from 180.101.88.197 port 36846 ssh2 Feb 13 04:12:29.818636 sshd[4508]: Failed password for root from 180.101.88.197 port 36846 ssh2 Feb 13 04:12:30.304610 systemd[1]: Started sshd@21-139.178.90.101:22-139.178.68.195:57564.service. Feb 13 04:12:30.334084 sshd[4586]: Accepted publickey for core from 139.178.68.195 port 57564 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:30.334925 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:30.337919 systemd-logind[1467]: New session 16 of user core. Feb 13 04:12:30.338510 systemd[1]: Started session-16.scope. 
Feb 13 04:12:30.465426 sshd[4586]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:30.466973 systemd[1]: sshd@21-139.178.90.101:22-139.178.68.195:57564.service: Deactivated successfully. Feb 13 04:12:30.467430 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 04:12:30.467831 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit. Feb 13 04:12:30.468275 systemd-logind[1467]: Removed session 16. Feb 13 04:12:33.485645 sshd[4508]: Failed password for root from 180.101.88.197 port 36846 ssh2 Feb 13 04:12:35.343813 sshd[4508]: Received disconnect from 180.101.88.197 port 36846:11: [preauth] Feb 13 04:12:35.343813 sshd[4508]: Disconnected from authenticating user root 180.101.88.197 port 36846 [preauth] Feb 13 04:12:35.344358 sshd[4508]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.197 user=root Feb 13 04:12:35.346404 systemd[1]: sshd@17-139.178.90.101:22-180.101.88.197:36846.service: Deactivated successfully. Feb 13 04:12:35.474719 systemd[1]: Started sshd@22-139.178.90.101:22-139.178.68.195:57566.service. Feb 13 04:12:35.504365 sshd[4613]: Accepted publickey for core from 139.178.68.195 port 57566 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:35.507656 sshd[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:35.518173 systemd-logind[1467]: New session 17 of user core. Feb 13 04:12:35.520642 systemd[1]: Started session-17.scope. Feb 13 04:12:35.626772 sshd[4613]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:35.628239 systemd[1]: sshd@22-139.178.90.101:22-139.178.68.195:57566.service: Deactivated successfully. Feb 13 04:12:35.628689 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 04:12:35.629097 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit. Feb 13 04:12:35.629730 systemd-logind[1467]: Removed session 17. Feb 13 04:12:40.636874 systemd[1]: Started sshd@23-139.178.90.101:22-139.178.68.195:38388.service. Feb 13 04:12:40.666433 sshd[4640]: Accepted publickey for core from 139.178.68.195 port 38388 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:40.667314 sshd[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:40.670178 systemd-logind[1467]: New session 18 of user core. Feb 13 04:12:40.670769 systemd[1]: Started session-18.scope. Feb 13 04:12:40.766294 sshd[4640]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:40.772250 systemd[1]: sshd@23-139.178.90.101:22-139.178.68.195:38388.service: Deactivated successfully. Feb 13 04:12:40.774110 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 04:12:40.775851 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit. Feb 13 04:12:40.778108 systemd-logind[1467]: Removed session 18. Feb 13 04:12:45.774801 systemd[1]: Started sshd@24-139.178.90.101:22-139.178.68.195:38402.service. Feb 13 04:12:45.803825 sshd[4666]: Accepted publickey for core from 139.178.68.195 port 38402 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:45.804644 systemd[1]: Started sshd@25-139.178.90.101:22-218.92.0.118:62231.service. Feb 13 04:12:45.804650 sshd[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:45.807198 systemd-logind[1467]: New session 19 of user core. Feb 13 04:12:45.807837 systemd[1]: Started session-19.scope. 
Feb 13 04:12:45.892360 sshd[4666]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:45.893879 systemd[1]: sshd@24-139.178.90.101:22-139.178.68.195:38402.service: Deactivated successfully. Feb 13 04:12:45.894304 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 04:12:45.894717 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit. Feb 13 04:12:45.895194 systemd-logind[1467]: Removed session 19. Feb 13 04:12:47.239719 sshd[4669]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 13 04:12:49.372617 sshd[4669]: Failed password for root from 218.92.0.118 port 62231 ssh2 Feb 13 04:12:50.901734 systemd[1]: Started sshd@26-139.178.90.101:22-139.178.68.195:39184.service. Feb 13 04:12:50.932374 sshd[4695]: Accepted publickey for core from 139.178.68.195 port 39184 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:50.933292 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:50.936300 systemd-logind[1467]: New session 20 of user core. Feb 13 04:12:50.936984 systemd[1]: Started session-20.scope. Feb 13 04:12:51.035691 sshd[4695]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:51.041716 systemd[1]: sshd@26-139.178.90.101:22-139.178.68.195:39184.service: Deactivated successfully. Feb 13 04:12:51.043578 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 04:12:51.045274 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. Feb 13 04:12:51.047623 systemd-logind[1467]: Removed session 20. Feb 13 04:12:53.055279 sshd[4669]: Failed password for root from 218.92.0.118 port 62231 ssh2 Feb 13 04:12:56.044302 systemd[1]: Started sshd@27-139.178.90.101:22-139.178.68.195:59542.service. Feb 13 04:12:56.075081 sshd[4721]: Accepted publickey for core from 139.178.68.195 port 59542 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:56.075909 sshd[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:56.078967 systemd-logind[1467]: New session 21 of user core. Feb 13 04:12:56.079614 systemd[1]: Started session-21.scope. Feb 13 04:12:56.162701 sshd[4721]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:56.164149 systemd[1]: sshd@27-139.178.90.101:22-139.178.68.195:59542.service: Deactivated successfully. Feb 13 04:12:56.164581 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 04:12:56.164996 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit. Feb 13 04:12:56.165519 systemd-logind[1467]: Removed session 21. Feb 13 04:12:56.603752 sshd[4669]: Failed password for root from 218.92.0.118 port 62231 ssh2 Feb 13 04:12:56.822124 sshd[4669]: Received disconnect from 218.92.0.118 port 62231:11: [preauth] Feb 13 04:12:56.822124 sshd[4669]: Disconnected from authenticating user root 218.92.0.118 port 62231 [preauth] Feb 13 04:12:56.822703 sshd[4669]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 13 04:12:56.824758 systemd[1]: sshd@25-139.178.90.101:22-218.92.0.118:62231.service: Deactivated successfully. Feb 13 04:12:57.002612 systemd[1]: Started sshd@28-139.178.90.101:22-218.92.0.118:12241.service. 
Feb 13 04:12:58.131125 sshd[4747]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 13 04:13:00.439625 sshd[4747]: Failed password for root from 218.92.0.118 port 12241 ssh2 Feb 13 04:13:01.172472 systemd[1]: Started sshd@29-139.178.90.101:22-139.178.68.195:59550.service. Feb 13 04:13:01.202396 sshd[4750]: Accepted publickey for core from 139.178.68.195 port 59550 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:01.203252 sshd[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:01.206251 systemd-logind[1467]: New session 22 of user core. Feb 13 04:13:01.206852 systemd[1]: Started session-22.scope. Feb 13 04:13:01.306763 sshd[4750]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:01.308216 systemd[1]: sshd@29-139.178.90.101:22-139.178.68.195:59550.service: Deactivated successfully. Feb 13 04:13:01.308676 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 04:13:01.309100 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit. Feb 13 04:13:01.309761 systemd-logind[1467]: Removed session 22. Feb 13 04:13:03.340727 sshd[4747]: Failed password for root from 218.92.0.118 port 12241 ssh2 Feb 13 04:13:05.948221 sshd[4747]: Failed password for root from 218.92.0.118 port 12241 ssh2 Feb 13 04:13:06.316502 systemd[1]: Started sshd@30-139.178.90.101:22-139.178.68.195:43724.service. Feb 13 04:13:06.346844 sshd[4778]: Accepted publickey for core from 139.178.68.195 port 43724 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:06.347668 sshd[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:06.350623 systemd-logind[1467]: New session 23 of user core. Feb 13 04:13:06.351189 systemd[1]: Started session-23.scope. Feb 13 04:13:06.445986 sshd[4778]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:06.448159 systemd[1]: sshd@30-139.178.90.101:22-139.178.68.195:43724.service: Deactivated successfully. Feb 13 04:13:06.448825 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 04:13:06.449316 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit. Feb 13 04:13:06.450109 systemd-logind[1467]: Removed session 23. Feb 13 04:13:07.766198 sshd[4747]: Received disconnect from 218.92.0.118 port 12241:11: [preauth] Feb 13 04:13:07.766198 sshd[4747]: Disconnected from authenticating user root 218.92.0.118 port 12241 [preauth] Feb 13 04:13:07.766760 sshd[4747]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 13 04:13:07.768791 systemd[1]: sshd@28-139.178.90.101:22-218.92.0.118:12241.service: Deactivated successfully. Feb 13 04:13:07.948088 systemd[1]: Started sshd@31-139.178.90.101:22-218.92.0.118:32924.service. Feb 13 04:13:09.308967 sshd[4803]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 13 04:13:10.394222 systemd[1]: Started sshd@32-139.178.90.101:22-61.177.172.179:63889.service. Feb 13 04:13:11.455318 systemd[1]: Started sshd@33-139.178.90.101:22-139.178.68.195:43732.service. 
Feb 13 04:13:11.462519 sshd[4803]: Failed password for root from 218.92.0.118 port 32924 ssh2 Feb 13 04:13:11.485479 sshd[4809]: Accepted publickey for core from 139.178.68.195 port 43732 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:11.486510 sshd[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:11.490034 systemd-logind[1467]: New session 24 of user core. Feb 13 04:13:11.490782 systemd[1]: Started session-24.scope. Feb 13 04:13:11.524253 sshd[4806]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 13 04:13:11.576485 sshd[4809]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:11.578049 systemd[1]: sshd@33-139.178.90.101:22-139.178.68.195:43732.service: Deactivated successfully. Feb 13 04:13:11.578514 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 04:13:11.578926 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit. Feb 13 04:13:11.579357 systemd-logind[1467]: Removed session 24. Feb 13 04:13:13.617646 sshd[4806]: Failed password for root from 61.177.172.179 port 63889 ssh2 Feb 13 04:13:14.478553 sshd[4803]: Failed password for root from 218.92.0.118 port 32924 ssh2 Feb 13 04:13:16.588179 systemd[1]: Started sshd@34-139.178.90.101:22-139.178.68.195:45928.service. Feb 13 04:13:16.621263 sshd[4831]: Accepted publickey for core from 139.178.68.195 port 45928 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:16.621962 sshd[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:16.624403 systemd-logind[1467]: New session 25 of user core. Feb 13 04:13:16.624981 systemd[1]: Started session-25.scope. Feb 13 04:13:16.713055 sshd[4831]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:16.714501 systemd[1]: sshd@34-139.178.90.101:22-139.178.68.195:45928.service: Deactivated successfully. Feb 13 04:13:16.714947 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 04:13:16.715289 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit. Feb 13 04:13:16.715800 systemd-logind[1467]: Removed session 25. Feb 13 04:13:17.749021 sshd[4803]: Failed password for root from 218.92.0.118 port 32924 ssh2 Feb 13 04:13:17.972050 sshd[4806]: Failed password for root from 61.177.172.179 port 63889 ssh2 Feb 13 04:13:18.902847 sshd[4803]: Received disconnect from 218.92.0.118 port 32924:11: [preauth] Feb 13 04:13:18.902847 sshd[4803]: Disconnected from authenticating user root 218.92.0.118 port 32924 [preauth] Feb 13 04:13:18.903406 sshd[4803]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Feb 13 04:13:18.905466 systemd[1]: sshd@31-139.178.90.101:22-218.92.0.118:32924.service: Deactivated successfully. Feb 13 04:13:21.324463 sshd[4806]: Failed password for root from 61.177.172.179 port 63889 ssh2 Feb 13 04:13:21.722747 systemd[1]: Started sshd@35-139.178.90.101:22-139.178.68.195:45938.service. Feb 13 04:13:21.752491 sshd[4859]: Accepted publickey for core from 139.178.68.195 port 45938 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:21.753293 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:21.756202 systemd-logind[1467]: New session 26 of user core. Feb 13 04:13:21.756790 systemd[1]: Started session-26.scope. 
Feb 13 04:13:21.855551 sshd[4859]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:21.861513 systemd[1]: sshd@35-139.178.90.101:22-139.178.68.195:45938.service: Deactivated successfully. Feb 13 04:13:21.863388 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 04:13:21.865249 systemd-logind[1467]: Session 26 logged out. Waiting for processes to exit. Feb 13 04:13:21.867675 systemd-logind[1467]: Removed session 26. Feb 13 04:13:22.946736 sshd[4806]: Received disconnect from 61.177.172.179 port 63889:11: [preauth] Feb 13 04:13:22.946736 sshd[4806]: Disconnected from authenticating user root 61.177.172.179 port 63889 [preauth] Feb 13 04:13:22.947293 sshd[4806]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 13 04:13:22.950000 systemd[1]: sshd@32-139.178.90.101:22-61.177.172.179:63889.service: Deactivated successfully. Feb 13 04:13:23.128653 systemd[1]: Started sshd@36-139.178.90.101:22-61.177.172.179:40845.service. Feb 13 04:13:24.312115 sshd[4885]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 13 04:13:25.858906 sshd[4885]: Failed password for root from 61.177.172.179 port 40845 ssh2 Feb 13 04:13:26.863520 systemd[1]: Started sshd@37-139.178.90.101:22-139.178.68.195:53596.service. Feb 13 04:13:26.892775 sshd[4888]: Accepted publickey for core from 139.178.68.195 port 53596 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:26.893553 sshd[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:26.896329 systemd-logind[1467]: New session 27 of user core. Feb 13 04:13:26.896887 systemd[1]: Started session-27.scope. Feb 13 04:13:26.983943 sshd[4888]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:26.985423 systemd[1]: sshd@37-139.178.90.101:22-139.178.68.195:53596.service: Deactivated successfully. Feb 13 04:13:26.985911 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 04:13:26.986265 systemd-logind[1467]: Session 27 logged out. Waiting for processes to exit. Feb 13 04:13:26.986835 systemd-logind[1467]: Removed session 27. Feb 13 04:13:28.135113 sshd[4885]: Failed password for root from 61.177.172.179 port 40845 ssh2 Feb 13 04:13:31.827628 sshd[4885]: Failed password for root from 61.177.172.179 port 40845 ssh2 Feb 13 04:13:31.993675 systemd[1]: Started sshd@38-139.178.90.101:22-139.178.68.195:53612.service. Feb 13 04:13:32.022776 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 53612 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:32.023618 sshd[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:32.026450 systemd-logind[1467]: New session 28 of user core. Feb 13 04:13:32.027059 systemd[1]: Started session-28.scope. Feb 13 04:13:32.115499 sshd[4913]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:32.117431 systemd[1]: sshd@38-139.178.90.101:22-139.178.68.195:53612.service: Deactivated successfully. Feb 13 04:13:32.118021 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 04:13:32.118593 systemd-logind[1467]: Session 28 logged out. Waiting for processes to exit. Feb 13 04:13:32.119329 systemd-logind[1467]: Removed session 28. 
Feb 13 04:13:32.531239 sshd[4885]: Received disconnect from 61.177.172.179 port 40845:11: [preauth] Feb 13 04:13:32.531239 sshd[4885]: Disconnected from authenticating user root 61.177.172.179 port 40845 [preauth] Feb 13 04:13:32.531681 sshd[4885]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 13 04:13:32.534065 systemd[1]: sshd@36-139.178.90.101:22-61.177.172.179:40845.service: Deactivated successfully. Feb 13 04:13:32.729301 systemd[1]: Started sshd@39-139.178.90.101:22-61.177.172.179:50812.service. Feb 13 04:13:33.904069 sshd[4940]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 13 04:13:36.018156 sshd[4940]: Failed password for root from 61.177.172.179 port 50812 ssh2 Feb 13 04:13:37.124861 systemd[1]: Started sshd@40-139.178.90.101:22-139.178.68.195:57660.service. Feb 13 04:13:37.155126 sshd[4945]: Accepted publickey for core from 139.178.68.195 port 57660 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:37.156023 sshd[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:37.159175 systemd-logind[1467]: New session 29 of user core. Feb 13 04:13:37.159818 systemd[1]: Started session-29.scope. Feb 13 04:13:37.244543 sshd[4945]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:37.246120 systemd[1]: sshd@40-139.178.90.101:22-139.178.68.195:57660.service: Deactivated successfully. Feb 13 04:13:37.246598 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 04:13:37.247039 systemd-logind[1467]: Session 29 logged out. Waiting for processes to exit. Feb 13 04:13:37.247604 systemd-logind[1467]: Removed session 29. Feb 13 04:13:39.056419 sshd[4940]: Failed password for root from 61.177.172.179 port 50812 ssh2 Feb 13 04:13:41.997091 sshd[4940]: Failed password for root from 61.177.172.179 port 50812 ssh2 Feb 13 04:13:42.255967 systemd[1]: Started sshd@41-139.178.90.101:22-139.178.68.195:57664.service. Feb 13 04:13:42.288656 sshd[4970]: Accepted publickey for core from 139.178.68.195 port 57664 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:42.289395 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:42.291818 systemd-logind[1467]: New session 30 of user core. Feb 13 04:13:42.292291 systemd[1]: Started session-30.scope. Feb 13 04:13:42.380502 sshd[4970]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:42.381978 systemd[1]: sshd@41-139.178.90.101:22-139.178.68.195:57664.service: Deactivated successfully. Feb 13 04:13:42.382411 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 04:13:42.382851 systemd-logind[1467]: Session 30 logged out. Waiting for processes to exit. Feb 13 04:13:42.383349 systemd-logind[1467]: Removed session 30. Feb 13 04:13:43.537745 sshd[4940]: Received disconnect from 61.177.172.179 port 50812:11: [preauth] Feb 13 04:13:43.537745 sshd[4940]: Disconnected from authenticating user root 61.177.172.179 port 50812 [preauth] Feb 13 04:13:43.538312 sshd[4940]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.179 user=root Feb 13 04:13:43.540313 systemd[1]: sshd@39-139.178.90.101:22-61.177.172.179:50812.service: Deactivated successfully. Feb 13 04:13:47.384025 systemd[1]: Started sshd@42-139.178.90.101:22-139.178.68.195:41092.service. 
Feb 13 04:13:47.414803 sshd[4996]: Accepted publickey for core from 139.178.68.195 port 41092 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:47.415607 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:47.418105 systemd-logind[1467]: New session 31 of user core. Feb 13 04:13:47.418750 systemd[1]: Started session-31.scope. Feb 13 04:13:47.513099 sshd[4996]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:47.519017 systemd[1]: sshd@42-139.178.90.101:22-139.178.68.195:41092.service: Deactivated successfully. Feb 13 04:13:47.521105 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 04:13:47.522983 systemd-logind[1467]: Session 31 logged out. Waiting for processes to exit. Feb 13 04:13:47.525191 systemd-logind[1467]: Removed session 31. Feb 13 04:13:52.522113 systemd[1]: Started sshd@43-139.178.90.101:22-139.178.68.195:41104.service. Feb 13 04:13:52.551765 sshd[5022]: Accepted publickey for core from 139.178.68.195 port 41104 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:52.552563 sshd[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:52.555390 systemd-logind[1467]: New session 32 of user core. Feb 13 04:13:52.556026 systemd[1]: Started session-32.scope. Feb 13 04:13:52.645383 sshd[5022]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:52.646952 systemd[1]: sshd@43-139.178.90.101:22-139.178.68.195:41104.service: Deactivated successfully. Feb 13 04:13:52.647446 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 04:13:52.647898 systemd-logind[1467]: Session 32 logged out. Waiting for processes to exit. Feb 13 04:13:52.648309 systemd-logind[1467]: Removed session 32. Feb 13 04:13:57.654344 systemd[1]: Started sshd@44-139.178.90.101:22-139.178.68.195:52802.service. Feb 13 04:13:57.683645 sshd[5048]: Accepted publickey for core from 139.178.68.195 port 52802 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:13:57.684495 sshd[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:13:57.687421 systemd-logind[1467]: New session 33 of user core. Feb 13 04:13:57.688036 systemd[1]: Started session-33.scope. Feb 13 04:13:57.773575 sshd[5048]: pam_unix(sshd:session): session closed for user core Feb 13 04:13:57.775144 systemd[1]: sshd@44-139.178.90.101:22-139.178.68.195:52802.service: Deactivated successfully. Feb 13 04:13:57.775637 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 04:13:57.776001 systemd-logind[1467]: Session 33 logged out. Waiting for processes to exit. Feb 13 04:13:57.776458 systemd-logind[1467]: Removed session 33. Feb 13 04:14:02.784812 systemd[1]: Started sshd@45-139.178.90.101:22-139.178.68.195:52804.service. Feb 13 04:14:02.817300 sshd[5072]: Accepted publickey for core from 139.178.68.195 port 52804 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:02.818075 sshd[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:02.820858 systemd-logind[1467]: New session 34 of user core. Feb 13 04:14:02.821384 systemd[1]: Started session-34.scope. Feb 13 04:14:02.911813 sshd[5072]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:02.913206 systemd[1]: sshd@45-139.178.90.101:22-139.178.68.195:52804.service: Deactivated successfully. Feb 13 04:14:02.913656 systemd[1]: session-34.scope: Deactivated successfully. 
Feb 13 04:14:02.914084 systemd-logind[1467]: Session 34 logged out. Waiting for processes to exit. Feb 13 04:14:02.914598 systemd-logind[1467]: Removed session 34. Feb 13 04:14:07.921338 systemd[1]: Started sshd@46-139.178.90.101:22-139.178.68.195:55050.service. Feb 13 04:14:07.951128 sshd[5100]: Accepted publickey for core from 139.178.68.195 port 55050 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:07.951997 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:07.955005 systemd-logind[1467]: New session 35 of user core. Feb 13 04:14:07.955720 systemd[1]: Started session-35.scope. Feb 13 04:14:08.040709 sshd[5100]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:08.042286 systemd[1]: sshd@46-139.178.90.101:22-139.178.68.195:55050.service: Deactivated successfully. Feb 13 04:14:08.042782 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 04:14:08.043250 systemd-logind[1467]: Session 35 logged out. Waiting for processes to exit. Feb 13 04:14:08.044017 systemd-logind[1467]: Removed session 35. Feb 13 04:14:13.050050 systemd[1]: Started sshd@47-139.178.90.101:22-139.178.68.195:55054.service. Feb 13 04:14:13.079774 sshd[5125]: Accepted publickey for core from 139.178.68.195 port 55054 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:13.080623 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:13.083433 systemd-logind[1467]: New session 36 of user core. Feb 13 04:14:13.084203 systemd[1]: Started session-36.scope. Feb 13 04:14:13.167164 sshd[5125]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:13.168844 systemd[1]: sshd@47-139.178.90.101:22-139.178.68.195:55054.service: Deactivated successfully. Feb 13 04:14:13.169353 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 04:14:13.169861 systemd-logind[1467]: Session 36 logged out. Waiting for processes to exit. Feb 13 04:14:13.170372 systemd-logind[1467]: Removed session 36. Feb 13 04:14:18.171261 systemd[1]: Started sshd@48-139.178.90.101:22-139.178.68.195:58598.service. Feb 13 04:14:18.202675 sshd[5150]: Accepted publickey for core from 139.178.68.195 port 58598 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:18.203415 sshd[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:18.205934 systemd-logind[1467]: New session 37 of user core. Feb 13 04:14:18.206530 systemd[1]: Started session-37.scope. Feb 13 04:14:18.294794 sshd[5150]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:18.296156 systemd[1]: sshd@48-139.178.90.101:22-139.178.68.195:58598.service: Deactivated successfully. Feb 13 04:14:18.296601 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 04:14:18.296975 systemd-logind[1467]: Session 37 logged out. Waiting for processes to exit. Feb 13 04:14:18.297391 systemd-logind[1467]: Removed session 37. Feb 13 04:14:23.304880 systemd[1]: Started sshd@49-139.178.90.101:22-139.178.68.195:58608.service. Feb 13 04:14:23.334616 sshd[5178]: Accepted publickey for core from 139.178.68.195 port 58608 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:23.335458 sshd[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:23.338201 systemd-logind[1467]: New session 38 of user core. Feb 13 04:14:23.338952 systemd[1]: Started session-38.scope. 
Feb 13 04:14:23.425600 sshd[5178]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:23.427215 systemd[1]: sshd@49-139.178.90.101:22-139.178.68.195:58608.service: Deactivated successfully. Feb 13 04:14:23.427728 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 04:14:23.428180 systemd-logind[1467]: Session 38 logged out. Waiting for processes to exit. Feb 13 04:14:23.428769 systemd-logind[1467]: Removed session 38. Feb 13 04:14:28.435492 systemd[1]: Started sshd@50-139.178.90.101:22-139.178.68.195:35050.service. Feb 13 04:14:28.464593 sshd[5204]: Accepted publickey for core from 139.178.68.195 port 35050 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:28.465431 sshd[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:28.468217 systemd-logind[1467]: New session 39 of user core. Feb 13 04:14:28.468983 systemd[1]: Started session-39.scope. Feb 13 04:14:28.556634 sshd[5204]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:28.558286 systemd[1]: sshd@50-139.178.90.101:22-139.178.68.195:35050.service: Deactivated successfully. Feb 13 04:14:28.558806 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 04:14:28.559218 systemd-logind[1467]: Session 39 logged out. Waiting for processes to exit. Feb 13 04:14:28.559883 systemd-logind[1467]: Removed session 39. Feb 13 04:14:33.566077 systemd[1]: Started sshd@51-139.178.90.101:22-139.178.68.195:35066.service. Feb 13 04:14:33.596040 sshd[5230]: Accepted publickey for core from 139.178.68.195 port 35066 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:33.596920 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:33.599948 systemd-logind[1467]: New session 40 of user core. Feb 13 04:14:33.600608 systemd[1]: Started session-40.scope. Feb 13 04:14:33.688089 sshd[5230]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:33.689552 systemd[1]: sshd@51-139.178.90.101:22-139.178.68.195:35066.service: Deactivated successfully. Feb 13 04:14:33.689999 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 04:14:33.690301 systemd-logind[1467]: Session 40 logged out. Waiting for processes to exit. Feb 13 04:14:33.690844 systemd-logind[1467]: Removed session 40. Feb 13 04:14:38.697042 systemd[1]: Started sshd@52-139.178.90.101:22-139.178.68.195:52110.service. Feb 13 04:14:38.726703 sshd[5259]: Accepted publickey for core from 139.178.68.195 port 52110 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:38.727548 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:38.730673 systemd-logind[1467]: New session 41 of user core. Feb 13 04:14:38.731439 systemd[1]: Started session-41.scope. Feb 13 04:14:38.829647 sshd[5259]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:38.835652 systemd[1]: sshd@52-139.178.90.101:22-139.178.68.195:52110.service: Deactivated successfully. Feb 13 04:14:38.837482 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 04:14:38.839257 systemd-logind[1467]: Session 41 logged out. Waiting for processes to exit. Feb 13 04:14:38.841522 systemd-logind[1467]: Removed session 41. Feb 13 04:14:43.838785 systemd[1]: Started sshd@53-139.178.90.101:22-139.178.68.195:52126.service. 
Feb 13 04:14:43.869180 sshd[5284]: Accepted publickey for core from 139.178.68.195 port 52126 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:43.870029 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:43.873063 systemd-logind[1467]: New session 42 of user core. Feb 13 04:14:43.873715 systemd[1]: Started session-42.scope. Feb 13 04:14:43.962979 sshd[5284]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:43.964420 systemd[1]: sshd@53-139.178.90.101:22-139.178.68.195:52126.service: Deactivated successfully. Feb 13 04:14:43.964879 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 04:14:43.965244 systemd-logind[1467]: Session 42 logged out. Waiting for processes to exit. Feb 13 04:14:43.965757 systemd-logind[1467]: Removed session 42. Feb 13 04:14:48.972820 systemd[1]: Started sshd@54-139.178.90.101:22-139.178.68.195:46344.service. Feb 13 04:14:49.003596 sshd[5308]: Accepted publickey for core from 139.178.68.195 port 46344 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:49.004413 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:49.007165 systemd-logind[1467]: New session 43 of user core. Feb 13 04:14:49.007929 systemd[1]: Started session-43.scope. Feb 13 04:14:49.097449 sshd[5308]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:49.098948 systemd[1]: sshd@54-139.178.90.101:22-139.178.68.195:46344.service: Deactivated successfully. Feb 13 04:14:49.099403 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 04:14:49.099817 systemd-logind[1467]: Session 43 logged out. Waiting for processes to exit. Feb 13 04:14:49.100296 systemd-logind[1467]: Removed session 43. Feb 13 04:14:54.107343 systemd[1]: Started sshd@55-139.178.90.101:22-139.178.68.195:46350.service. Feb 13 04:14:54.137990 sshd[5333]: Accepted publickey for core from 139.178.68.195 port 46350 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:54.138851 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:54.141854 systemd-logind[1467]: New session 44 of user core. Feb 13 04:14:54.142607 systemd[1]: Started session-44.scope. Feb 13 04:14:54.229998 sshd[5333]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:54.231424 systemd[1]: sshd@55-139.178.90.101:22-139.178.68.195:46350.service: Deactivated successfully. Feb 13 04:14:54.231844 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 04:14:54.232212 systemd-logind[1467]: Session 44 logged out. Waiting for processes to exit. Feb 13 04:14:54.232888 systemd-logind[1467]: Removed session 44. Feb 13 04:14:54.237475 systemd[1]: Started sshd@56-139.178.90.101:22-218.92.0.34:12693.service. Feb 13 04:14:55.270702 sshd[5357]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 13 04:14:56.977771 sshd[5357]: Failed password for root from 218.92.0.34 port 12693 ssh2 Feb 13 04:14:58.893153 sshd[5357]: Failed password for root from 218.92.0.34 port 12693 ssh2 Feb 13 04:14:59.238808 systemd[1]: Started sshd@57-139.178.90.101:22-139.178.68.195:45916.service. 
Feb 13 04:14:59.269706 sshd[5360]: Accepted publickey for core from 139.178.68.195 port 45916 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:14:59.270656 sshd[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:14:59.273854 systemd-logind[1467]: New session 45 of user core. Feb 13 04:14:59.274529 systemd[1]: Started session-45.scope. Feb 13 04:14:59.364561 sshd[5360]: pam_unix(sshd:session): session closed for user core Feb 13 04:14:59.366045 systemd[1]: sshd@57-139.178.90.101:22-139.178.68.195:45916.service: Deactivated successfully. Feb 13 04:14:59.366545 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 04:14:59.366963 systemd-logind[1467]: Session 45 logged out. Waiting for processes to exit. Feb 13 04:14:59.367359 systemd-logind[1467]: Removed session 45. Feb 13 04:15:01.474645 sshd[5357]: Failed password for root from 218.92.0.34 port 12693 ssh2 Feb 13 04:15:03.779876 sshd[5357]: Received disconnect from 218.92.0.34 port 12693:11: [preauth] Feb 13 04:15:03.779876 sshd[5357]: Disconnected from authenticating user root 218.92.0.34 port 12693 [preauth] Feb 13 04:15:03.780454 sshd[5357]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 13 04:15:03.782491 systemd[1]: sshd@56-139.178.90.101:22-218.92.0.34:12693.service: Deactivated successfully. Feb 13 04:15:04.374565 systemd[1]: Started sshd@58-139.178.90.101:22-139.178.68.195:45920.service. Feb 13 04:15:04.404105 sshd[5388]: Accepted publickey for core from 139.178.68.195 port 45920 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:15:04.404956 sshd[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:15:04.408091 systemd-logind[1467]: New session 46 of user core. Feb 13 04:15:04.408718 systemd[1]: Started session-46.scope. Feb 13 04:15:04.496994 sshd[5388]: pam_unix(sshd:session): session closed for user core Feb 13 04:15:04.498476 systemd[1]: sshd@58-139.178.90.101:22-139.178.68.195:45920.service: Deactivated successfully. Feb 13 04:15:04.498898 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 04:15:04.499264 systemd-logind[1467]: Session 46 logged out. Waiting for processes to exit. Feb 13 04:15:04.499922 systemd-logind[1467]: Removed session 46. Feb 13 04:15:04.927650 systemd[1]: Started sshd@59-139.178.90.101:22-218.92.0.34:27518.service. Feb 13 04:15:07.002908 sshd[5412]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 13 04:15:08.690090 sshd[5412]: Failed password for root from 218.92.0.34 port 27518 ssh2 Feb 13 04:15:09.506623 systemd[1]: Started sshd@60-139.178.90.101:22-139.178.68.195:38506.service. Feb 13 04:15:09.535604 sshd[5415]: Accepted publickey for core from 139.178.68.195 port 38506 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:15:09.536452 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:15:09.539226 systemd-logind[1467]: New session 47 of user core. Feb 13 04:15:09.539866 systemd[1]: Started session-47.scope. Feb 13 04:15:09.625906 sshd[5415]: pam_unix(sshd:session): session closed for user core Feb 13 04:15:09.627415 systemd[1]: sshd@60-139.178.90.101:22-139.178.68.195:38506.service: Deactivated successfully. Feb 13 04:15:09.627924 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 04:15:09.628282 systemd-logind[1467]: Session 47 logged out. 
Waiting for processes to exit. Feb 13 04:15:09.629004 systemd-logind[1467]: Removed session 47. Feb 13 04:15:10.793686 sshd[5412]: Failed password for root from 218.92.0.34 port 27518 ssh2 Feb 13 04:15:14.635661 systemd[1]: Started sshd@61-139.178.90.101:22-139.178.68.195:38516.service. Feb 13 04:15:14.665234 sshd[5438]: Accepted publickey for core from 139.178.68.195 port 38516 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:15:14.666087 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:15:14.669080 systemd-logind[1467]: New session 48 of user core. Feb 13 04:15:14.669685 systemd[1]: Started session-48.scope. Feb 13 04:15:14.759234 sshd[5438]: pam_unix(sshd:session): session closed for user core Feb 13 04:15:14.760832 systemd[1]: sshd@61-139.178.90.101:22-139.178.68.195:38516.service: Deactivated successfully. Feb 13 04:15:14.761263 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 04:15:14.761726 systemd-logind[1467]: Session 48 logged out. Waiting for processes to exit. Feb 13 04:15:14.762279 systemd-logind[1467]: Removed session 48. Feb 13 04:15:14.793606 sshd[5412]: Failed password for root from 218.92.0.34 port 27518 ssh2 Feb 13 04:15:16.540465 sshd[5412]: Received disconnect from 218.92.0.34 port 27518:11: [preauth] Feb 13 04:15:16.540465 sshd[5412]: Disconnected from authenticating user root 218.92.0.34 port 27518 [preauth] Feb 13 04:15:16.541057 sshd[5412]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 13 04:15:16.543142 systemd[1]: sshd@59-139.178.90.101:22-218.92.0.34:27518.service: Deactivated successfully. Feb 13 04:15:16.707270 systemd[1]: Started sshd@62-139.178.90.101:22-218.92.0.34:54693.service. Feb 13 04:15:17.730650 sshd[5464]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 13 04:15:19.768755 systemd[1]: Started sshd@63-139.178.90.101:22-139.178.68.195:53420.service. Feb 13 04:15:19.798783 sshd[5469]: Accepted publickey for core from 139.178.68.195 port 53420 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:15:19.799631 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:15:19.802596 systemd-logind[1467]: New session 49 of user core. Feb 13 04:15:19.803209 systemd[1]: Started session-49.scope. Feb 13 04:15:19.891501 sshd[5469]: pam_unix(sshd:session): session closed for user core Feb 13 04:15:19.893181 systemd[1]: sshd@63-139.178.90.101:22-139.178.68.195:53420.service: Deactivated successfully. Feb 13 04:15:19.893707 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 04:15:19.894171 systemd-logind[1467]: Session 49 logged out. Waiting for processes to exit. Feb 13 04:15:19.894843 systemd-logind[1467]: Removed session 49. Feb 13 04:15:20.125182 sshd[5464]: Failed password for root from 218.92.0.34 port 54693 ssh2 Feb 13 04:15:23.127516 sshd[5464]: Failed password for root from 218.92.0.34 port 54693 ssh2 Feb 13 04:15:24.901471 systemd[1]: Started sshd@64-139.178.90.101:22-139.178.68.195:53422.service. Feb 13 04:15:24.931042 sshd[5492]: Accepted publickey for core from 139.178.68.195 port 53422 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:15:24.931826 sshd[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:15:24.934579 systemd-logind[1467]: New session 50 of user core. Feb 13 04:15:24.935173 systemd[1]: Started session-50.scope. 
Feb 13 04:15:25.022579 sshd[5492]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:25.024636 systemd[1]: sshd@64-139.178.90.101:22-139.178.68.195:53422.service: Deactivated successfully.
Feb 13 04:15:25.025007 systemd[1]: session-50.scope: Deactivated successfully.
Feb 13 04:15:25.025320 systemd-logind[1467]: Session 50 logged out. Waiting for processes to exit.
Feb 13 04:15:25.025956 systemd[1]: Started sshd@65-139.178.90.101:22-139.178.68.195:53428.service.
Feb 13 04:15:25.026329 systemd-logind[1467]: Removed session 50.
Feb 13 04:15:25.056228 sshd[5517]: Accepted publickey for core from 139.178.68.195 port 53428 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:25.056964 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:25.059449 systemd-logind[1467]: New session 51 of user core.
Feb 13 04:15:25.059987 systemd[1]: Started session-51.scope.
Feb 13 04:15:25.375162 sshd[5464]: Failed password for root from 218.92.0.34 port 54693 ssh2
Feb 13 04:15:26.181449 sshd[5517]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:26.183297 systemd[1]: sshd@65-139.178.90.101:22-139.178.68.195:53428.service: Deactivated successfully.
Feb 13 04:15:26.183658 systemd[1]: session-51.scope: Deactivated successfully.
Feb 13 04:15:26.184026 systemd-logind[1467]: Session 51 logged out. Waiting for processes to exit.
Feb 13 04:15:26.184630 systemd[1]: Started sshd@66-139.178.90.101:22-139.178.68.195:36454.service.
Feb 13 04:15:26.185084 systemd-logind[1467]: Removed session 51.
Feb 13 04:15:26.214953 sshd[5539]: Accepted publickey for core from 139.178.68.195 port 36454 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:26.215880 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:26.219033 systemd-logind[1467]: New session 52 of user core.
Feb 13 04:15:26.219694 systemd[1]: Started session-52.scope.
Feb 13 04:15:27.062283 sshd[5539]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:27.066413 systemd[1]: sshd@66-139.178.90.101:22-139.178.68.195:36454.service: Deactivated successfully.
Feb 13 04:15:27.067387 systemd[1]: session-52.scope: Deactivated successfully.
Feb 13 04:15:27.068275 systemd-logind[1467]: Session 52 logged out. Waiting for processes to exit.
Feb 13 04:15:27.069629 systemd[1]: Started sshd@67-139.178.90.101:22-139.178.68.195:36468.service.
Feb 13 04:15:27.070448 systemd-logind[1467]: Removed session 52.
Feb 13 04:15:27.111640 sshd[5586]: Accepted publickey for core from 139.178.68.195 port 36468 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:27.112785 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:27.116218 systemd-logind[1467]: New session 53 of user core.
Feb 13 04:15:27.117024 systemd[1]: Started session-53.scope.
Feb 13 04:15:27.281804 sshd[5464]: Received disconnect from 218.92.0.34 port 54693:11: [preauth]
Feb 13 04:15:27.281804 sshd[5464]: Disconnected from authenticating user root 218.92.0.34 port 54693 [preauth]
Feb 13 04:15:27.281977 sshd[5464]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root
Feb 13 04:15:27.282517 systemd[1]: sshd@62-139.178.90.101:22-218.92.0.34:54693.service: Deactivated successfully.
Feb 13 04:15:27.308377 sshd[5586]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:27.310314 systemd[1]: sshd@67-139.178.90.101:22-139.178.68.195:36468.service: Deactivated successfully.
Feb 13 04:15:27.310737 systemd[1]: session-53.scope: Deactivated successfully.
Feb 13 04:15:27.311135 systemd-logind[1467]: Session 53 logged out. Waiting for processes to exit.
Feb 13 04:15:27.311907 systemd[1]: Started sshd@68-139.178.90.101:22-139.178.68.195:36476.service.
Feb 13 04:15:27.312344 systemd-logind[1467]: Removed session 53.
Feb 13 04:15:27.342570 sshd[5647]: Accepted publickey for core from 139.178.68.195 port 36476 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:27.343420 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:27.346152 systemd-logind[1467]: New session 54 of user core.
Feb 13 04:15:27.346822 systemd[1]: Started session-54.scope.
Feb 13 04:15:27.483886 sshd[5647]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:27.486192 systemd[1]: sshd@68-139.178.90.101:22-139.178.68.195:36476.service: Deactivated successfully.
Feb 13 04:15:27.486901 systemd[1]: session-54.scope: Deactivated successfully.
Feb 13 04:15:27.487477 systemd-logind[1467]: Session 54 logged out. Waiting for processes to exit.
Feb 13 04:15:27.488366 systemd-logind[1467]: Removed session 54.
Feb 13 04:15:32.492766 systemd[1]: Started sshd@69-139.178.90.101:22-139.178.68.195:36492.service.
Feb 13 04:15:32.523615 sshd[5672]: Accepted publickey for core from 139.178.68.195 port 36492 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:32.524490 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:32.527456 systemd-logind[1467]: New session 55 of user core.
Feb 13 04:15:32.528173 systemd[1]: Started session-55.scope.
Feb 13 04:15:32.616500 sshd[5672]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:32.617913 systemd[1]: sshd@69-139.178.90.101:22-139.178.68.195:36492.service: Deactivated successfully.
Feb 13 04:15:32.618337 systemd[1]: session-55.scope: Deactivated successfully.
Feb 13 04:15:32.618751 systemd-logind[1467]: Session 55 logged out. Waiting for processes to exit.
Feb 13 04:15:32.619251 systemd-logind[1467]: Removed session 55.
Feb 13 04:15:37.626302 systemd[1]: Started sshd@70-139.178.90.101:22-139.178.68.195:33002.service.
Feb 13 04:15:37.655658 sshd[5699]: Accepted publickey for core from 139.178.68.195 port 33002 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:37.656504 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:37.659385 systemd-logind[1467]: New session 56 of user core.
Feb 13 04:15:37.660025 systemd[1]: Started session-56.scope.
Feb 13 04:15:37.750695 sshd[5699]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:37.752967 systemd[1]: sshd@70-139.178.90.101:22-139.178.68.195:33002.service: Deactivated successfully.
Feb 13 04:15:37.753768 systemd[1]: session-56.scope: Deactivated successfully.
Feb 13 04:15:37.754482 systemd-logind[1467]: Session 56 logged out. Waiting for processes to exit.
Feb 13 04:15:37.755471 systemd-logind[1467]: Removed session 56.
Feb 13 04:15:42.760201 systemd[1]: Started sshd@71-139.178.90.101:22-139.178.68.195:33018.service.
Feb 13 04:15:42.789352 sshd[5725]: Accepted publickey for core from 139.178.68.195 port 33018 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:42.790216 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:42.793022 systemd-logind[1467]: New session 57 of user core.
Feb 13 04:15:42.793722 systemd[1]: Started session-57.scope.
Feb 13 04:15:42.879257 sshd[5725]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:42.880751 systemd[1]: sshd@71-139.178.90.101:22-139.178.68.195:33018.service: Deactivated successfully.
Feb 13 04:15:42.881203 systemd[1]: session-57.scope: Deactivated successfully.
Feb 13 04:15:42.881558 systemd-logind[1467]: Session 57 logged out. Waiting for processes to exit.
Feb 13 04:15:42.882086 systemd-logind[1467]: Removed session 57.
Feb 13 04:15:47.888454 systemd[1]: Started sshd@72-139.178.90.101:22-139.178.68.195:48004.service.
Feb 13 04:15:47.918072 sshd[5748]: Accepted publickey for core from 139.178.68.195 port 48004 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:47.918927 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:47.921604 systemd-logind[1467]: New session 58 of user core.
Feb 13 04:15:47.922237 systemd[1]: Started session-58.scope.
Feb 13 04:15:48.007591 sshd[5748]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:48.009153 systemd[1]: sshd@72-139.178.90.101:22-139.178.68.195:48004.service: Deactivated successfully.
Feb 13 04:15:48.009646 systemd[1]: session-58.scope: Deactivated successfully.
Feb 13 04:15:48.010118 systemd-logind[1467]: Session 58 logged out. Waiting for processes to exit.
Feb 13 04:15:48.010755 systemd-logind[1467]: Removed session 58.
Feb 13 04:15:53.017445 systemd[1]: Started sshd@73-139.178.90.101:22-139.178.68.195:48010.service.
Feb 13 04:15:53.046659 sshd[5773]: Accepted publickey for core from 139.178.68.195 port 48010 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:53.047664 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:53.050439 systemd-logind[1467]: New session 59 of user core.
Feb 13 04:15:53.051029 systemd[1]: Started session-59.scope.
Feb 13 04:15:53.165435 sshd[5773]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:53.170857 systemd[1]: sshd@73-139.178.90.101:22-139.178.68.195:48010.service: Deactivated successfully.
Feb 13 04:15:53.172766 systemd[1]: session-59.scope: Deactivated successfully.
Feb 13 04:15:53.174411 systemd-logind[1467]: Session 59 logged out. Waiting for processes to exit.
Feb 13 04:15:53.176426 systemd-logind[1467]: Removed session 59.
Feb 13 04:15:58.174216 systemd[1]: Started sshd@74-139.178.90.101:22-139.178.68.195:55508.service.
Feb 13 04:15:58.203456 sshd[5798]: Accepted publickey for core from 139.178.68.195 port 55508 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:15:58.204257 sshd[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:15:58.206927 systemd-logind[1467]: New session 60 of user core.
Feb 13 04:15:58.207493 systemd[1]: Started session-60.scope.
Feb 13 04:15:58.296659 sshd[5798]: pam_unix(sshd:session): session closed for user core
Feb 13 04:15:58.298726 systemd[1]: sshd@74-139.178.90.101:22-139.178.68.195:55508.service: Deactivated successfully.
Feb 13 04:15:58.299504 systemd[1]: session-60.scope: Deactivated successfully.
Feb 13 04:15:58.300127 systemd-logind[1467]: Session 60 logged out. Waiting for processes to exit.
Feb 13 04:15:58.300887 systemd-logind[1467]: Removed session 60.
Feb 13 04:16:03.308942 systemd[1]: Started sshd@75-139.178.90.101:22-139.178.68.195:55524.service.
Feb 13 04:16:03.363805 sshd[5823]: Accepted publickey for core from 139.178.68.195 port 55524 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:03.364463 sshd[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:03.366709 systemd-logind[1467]: New session 61 of user core.
Feb 13 04:16:03.367192 systemd[1]: Started session-61.scope.
Feb 13 04:16:03.449903 sshd[5823]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:03.451329 systemd[1]: sshd@75-139.178.90.101:22-139.178.68.195:55524.service: Deactivated successfully.
Feb 13 04:16:03.451763 systemd[1]: session-61.scope: Deactivated successfully.
Feb 13 04:16:03.452153 systemd-logind[1467]: Session 61 logged out. Waiting for processes to exit.
Feb 13 04:16:03.452628 systemd-logind[1467]: Removed session 61.
Feb 13 04:16:08.460920 systemd[1]: Started sshd@76-139.178.90.101:22-139.178.68.195:59258.service.
Feb 13 04:16:08.494604 sshd[5850]: Accepted publickey for core from 139.178.68.195 port 59258 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:08.495438 sshd[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:08.497945 systemd-logind[1467]: New session 62 of user core.
Feb 13 04:16:08.498525 systemd[1]: Started session-62.scope.
Feb 13 04:16:08.583676 sshd[5850]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:08.585029 systemd[1]: sshd@76-139.178.90.101:22-139.178.68.195:59258.service: Deactivated successfully.
Feb 13 04:16:08.585456 systemd[1]: session-62.scope: Deactivated successfully.
Feb 13 04:16:08.585787 systemd-logind[1467]: Session 62 logged out. Waiting for processes to exit.
Feb 13 04:16:08.586246 systemd-logind[1467]: Removed session 62.
Feb 13 04:16:13.593168 systemd[1]: Started sshd@77-139.178.90.101:22-139.178.68.195:59260.service.
Feb 13 04:16:13.623593 sshd[5876]: Accepted publickey for core from 139.178.68.195 port 59260 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:13.624572 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:13.627181 systemd-logind[1467]: New session 63 of user core.
Feb 13 04:16:13.627855 systemd[1]: Started session-63.scope.
Feb 13 04:16:13.713194 sshd[5876]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:13.714525 systemd[1]: sshd@77-139.178.90.101:22-139.178.68.195:59260.service: Deactivated successfully.
Feb 13 04:16:13.714947 systemd[1]: session-63.scope: Deactivated successfully.
Feb 13 04:16:13.715288 systemd-logind[1467]: Session 63 logged out. Waiting for processes to exit.
Feb 13 04:16:13.715810 systemd-logind[1467]: Removed session 63.
Feb 13 04:16:18.723148 systemd[1]: Started sshd@78-139.178.90.101:22-139.178.68.195:42800.service.
Feb 13 04:16:18.753256 sshd[5901]: Accepted publickey for core from 139.178.68.195 port 42800 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:18.754132 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:18.757084 systemd-logind[1467]: New session 64 of user core.
Feb 13 04:16:18.757705 systemd[1]: Started session-64.scope.
Feb 13 04:16:18.848486 sshd[5901]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:18.849978 systemd[1]: sshd@78-139.178.90.101:22-139.178.68.195:42800.service: Deactivated successfully.
Feb 13 04:16:18.850466 systemd[1]: session-64.scope: Deactivated successfully.
Feb 13 04:16:18.850816 systemd-logind[1467]: Session 64 logged out. Waiting for processes to exit.
Feb 13 04:16:18.851256 systemd-logind[1467]: Removed session 64.
Feb 13 04:16:23.858897 systemd[1]: Started sshd@79-139.178.90.101:22-139.178.68.195:42812.service.
Feb 13 04:16:23.888106 sshd[5928]: Accepted publickey for core from 139.178.68.195 port 42812 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:23.889028 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:23.891702 systemd-logind[1467]: New session 65 of user core.
Feb 13 04:16:23.892391 systemd[1]: Started session-65.scope.
Feb 13 04:16:23.977400 sshd[5928]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:23.978880 systemd[1]: sshd@79-139.178.90.101:22-139.178.68.195:42812.service: Deactivated successfully.
Feb 13 04:16:23.979336 systemd[1]: session-65.scope: Deactivated successfully.
Feb 13 04:16:23.979782 systemd-logind[1467]: Session 65 logged out. Waiting for processes to exit.
Feb 13 04:16:23.980236 systemd-logind[1467]: Removed session 65.
Feb 13 04:16:28.987070 systemd[1]: Started sshd@80-139.178.90.101:22-139.178.68.195:46758.service.
Feb 13 04:16:29.017002 sshd[5953]: Accepted publickey for core from 139.178.68.195 port 46758 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:29.017797 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:29.020773 systemd-logind[1467]: New session 66 of user core.
Feb 13 04:16:29.021401 systemd[1]: Started session-66.scope.
Feb 13 04:16:29.109354 sshd[5953]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:29.110897 systemd[1]: sshd@80-139.178.90.101:22-139.178.68.195:46758.service: Deactivated successfully.
Feb 13 04:16:29.111352 systemd[1]: session-66.scope: Deactivated successfully.
Feb 13 04:16:29.111792 systemd-logind[1467]: Session 66 logged out. Waiting for processes to exit.
Feb 13 04:16:29.112276 systemd-logind[1467]: Removed session 66.
Feb 13 04:16:34.118208 systemd[1]: Started sshd@81-139.178.90.101:22-139.178.68.195:46774.service.
Feb 13 04:16:34.147687 sshd[5980]: Accepted publickey for core from 139.178.68.195 port 46774 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:34.148491 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:34.151159 systemd-logind[1467]: New session 67 of user core.
Feb 13 04:16:34.151905 systemd[1]: Started session-67.scope.
Feb 13 04:16:34.235176 sshd[5980]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:34.236734 systemd[1]: sshd@81-139.178.90.101:22-139.178.68.195:46774.service: Deactivated successfully.
Feb 13 04:16:34.237210 systemd[1]: session-67.scope: Deactivated successfully.
Feb 13 04:16:34.237630 systemd-logind[1467]: Session 67 logged out. Waiting for processes to exit.
Feb 13 04:16:34.238239 systemd-logind[1467]: Removed session 67.
Feb 13 04:16:39.245112 systemd[1]: Started sshd@82-139.178.90.101:22-139.178.68.195:36492.service.
Feb 13 04:16:39.274464 sshd[6005]: Accepted publickey for core from 139.178.68.195 port 36492 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:39.275289 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:39.277954 systemd-logind[1467]: New session 68 of user core.
Feb 13 04:16:39.278739 systemd[1]: Started session-68.scope.
Feb 13 04:16:39.366355 sshd[6005]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:39.367846 systemd[1]: sshd@82-139.178.90.101:22-139.178.68.195:36492.service: Deactivated successfully.
Feb 13 04:16:39.368282 systemd[1]: session-68.scope: Deactivated successfully.
Feb 13 04:16:39.368718 systemd-logind[1467]: Session 68 logged out. Waiting for processes to exit.
Feb 13 04:16:39.369195 systemd-logind[1467]: Removed session 68.
Feb 13 04:16:44.378644 systemd[1]: Started sshd@83-139.178.90.101:22-139.178.68.195:36502.service.
Feb 13 04:16:44.411185 sshd[6027]: Accepted publickey for core from 139.178.68.195 port 36502 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:44.411954 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:44.414744 systemd-logind[1467]: New session 69 of user core.
Feb 13 04:16:44.415426 systemd[1]: Started session-69.scope.
Feb 13 04:16:44.502433 sshd[6027]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:44.503910 systemd[1]: sshd@83-139.178.90.101:22-139.178.68.195:36502.service: Deactivated successfully.
Feb 13 04:16:44.504336 systemd[1]: session-69.scope: Deactivated successfully.
Feb 13 04:16:44.504737 systemd-logind[1467]: Session 69 logged out. Waiting for processes to exit.
Feb 13 04:16:44.505240 systemd-logind[1467]: Removed session 69.
Feb 13 04:16:49.512423 systemd[1]: Started sshd@84-139.178.90.101:22-139.178.68.195:41336.service.
Feb 13 04:16:49.541860 sshd[6047]: Accepted publickey for core from 139.178.68.195 port 41336 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:49.542702 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:49.545670 systemd-logind[1467]: New session 70 of user core.
Feb 13 04:16:49.546544 systemd[1]: Started session-70.scope.
Feb 13 04:16:49.633201 sshd[6047]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:49.634683 systemd[1]: sshd@84-139.178.90.101:22-139.178.68.195:41336.service: Deactivated successfully.
Feb 13 04:16:49.635130 systemd[1]: session-70.scope: Deactivated successfully.
Feb 13 04:16:49.635540 systemd-logind[1467]: Session 70 logged out. Waiting for processes to exit.
Feb 13 04:16:49.636148 systemd-logind[1467]: Removed session 70.
Feb 13 04:16:54.642785 systemd[1]: Started sshd@85-139.178.90.101:22-139.178.68.195:41346.service.
Feb 13 04:16:54.672186 sshd[6070]: Accepted publickey for core from 139.178.68.195 port 41346 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:54.673071 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:54.676070 systemd-logind[1467]: New session 71 of user core.
Feb 13 04:16:54.676870 systemd[1]: Started session-71.scope.
Feb 13 04:16:54.764719 sshd[6070]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:54.766149 systemd[1]: sshd@85-139.178.90.101:22-139.178.68.195:41346.service: Deactivated successfully.
Feb 13 04:16:54.766596 systemd[1]: session-71.scope: Deactivated successfully.
Feb 13 04:16:54.766985 systemd-logind[1467]: Session 71 logged out. Waiting for processes to exit.
Feb 13 04:16:54.767378 systemd-logind[1467]: Removed session 71.
Feb 13 04:16:59.774569 systemd[1]: Started sshd@86-139.178.90.101:22-139.178.68.195:34682.service.
Feb 13 04:16:59.803969 sshd[6094]: Accepted publickey for core from 139.178.68.195 port 34682 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:16:59.804824 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:16:59.807817 systemd-logind[1467]: New session 72 of user core.
Feb 13 04:16:59.808601 systemd[1]: Started session-72.scope.
Feb 13 04:16:59.896704 sshd[6094]: pam_unix(sshd:session): session closed for user core
Feb 13 04:16:59.898180 systemd[1]: sshd@86-139.178.90.101:22-139.178.68.195:34682.service: Deactivated successfully.
Feb 13 04:16:59.898642 systemd[1]: session-72.scope: Deactivated successfully.
Feb 13 04:16:59.899074 systemd-logind[1467]: Session 72 logged out. Waiting for processes to exit.
Feb 13 04:16:59.899603 systemd-logind[1467]: Removed session 72.
Feb 13 04:17:04.905272 systemd[1]: Started sshd@87-139.178.90.101:22-139.178.68.195:34690.service.
Feb 13 04:17:04.935115 sshd[6120]: Accepted publickey for core from 139.178.68.195 port 34690 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:04.935990 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:04.939150 systemd-logind[1467]: New session 73 of user core.
Feb 13 04:17:04.939793 systemd[1]: Started session-73.scope.
Feb 13 04:17:05.028496 sshd[6120]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:05.030010 systemd[1]: sshd@87-139.178.90.101:22-139.178.68.195:34690.service: Deactivated successfully.
Feb 13 04:17:05.030451 systemd[1]: session-73.scope: Deactivated successfully.
Feb 13 04:17:05.030853 systemd-logind[1467]: Session 73 logged out. Waiting for processes to exit.
Feb 13 04:17:05.031303 systemd-logind[1467]: Removed session 73.
Feb 13 04:17:10.038829 systemd[1]: Started sshd@88-139.178.90.101:22-139.178.68.195:38434.service.
Feb 13 04:17:10.068250 sshd[6147]: Accepted publickey for core from 139.178.68.195 port 38434 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:10.069098 sshd[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:10.072120 systemd-logind[1467]: New session 74 of user core.
Feb 13 04:17:10.072867 systemd[1]: Started session-74.scope.
Feb 13 04:17:10.157195 sshd[6147]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:10.158622 systemd[1]: sshd@88-139.178.90.101:22-139.178.68.195:38434.service: Deactivated successfully.
Feb 13 04:17:10.159034 systemd[1]: session-74.scope: Deactivated successfully.
Feb 13 04:17:10.159340 systemd-logind[1467]: Session 74 logged out. Waiting for processes to exit.
Feb 13 04:17:10.159839 systemd-logind[1467]: Removed session 74.
Feb 13 04:17:15.161437 systemd[1]: Started sshd@89-139.178.90.101:22-139.178.68.195:38440.service.
Feb 13 04:17:15.192126 sshd[6171]: Accepted publickey for core from 139.178.68.195 port 38440 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:15.192950 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:15.195908 systemd-logind[1467]: New session 75 of user core.
Feb 13 04:17:15.196465 systemd[1]: Started session-75.scope.
Feb 13 04:17:15.328591 sshd[6171]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:15.330029 systemd[1]: sshd@89-139.178.90.101:22-139.178.68.195:38440.service: Deactivated successfully.
Feb 13 04:17:15.330485 systemd[1]: session-75.scope: Deactivated successfully.
Feb 13 04:17:15.330924 systemd-logind[1467]: Session 75 logged out. Waiting for processes to exit.
Feb 13 04:17:15.331349 systemd-logind[1467]: Removed session 75.
Feb 13 04:17:20.338096 systemd[1]: Started sshd@90-139.178.90.101:22-139.178.68.195:54194.service.
Feb 13 04:17:20.368164 sshd[6198]: Accepted publickey for core from 139.178.68.195 port 54194 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:20.369072 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:20.372100 systemd-logind[1467]: New session 76 of user core.
Feb 13 04:17:20.372930 systemd[1]: Started session-76.scope.
Feb 13 04:17:20.471655 sshd[6198]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:20.477948 systemd[1]: sshd@90-139.178.90.101:22-139.178.68.195:54194.service: Deactivated successfully.
Feb 13 04:17:20.479993 systemd[1]: session-76.scope: Deactivated successfully.
Feb 13 04:17:20.481678 systemd-logind[1467]: Session 76 logged out. Waiting for processes to exit.
Feb 13 04:17:20.483965 systemd-logind[1467]: Removed session 76.
Feb 13 04:17:25.480333 systemd[1]: Started sshd@91-139.178.90.101:22-139.178.68.195:54204.service.
Feb 13 04:17:25.509784 sshd[6223]: Accepted publickey for core from 139.178.68.195 port 54204 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:25.510651 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:25.513508 systemd-logind[1467]: New session 77 of user core.
Feb 13 04:17:25.514149 systemd[1]: Started session-77.scope.
Feb 13 04:17:25.599553 sshd[6223]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:25.601084 systemd[1]: sshd@91-139.178.90.101:22-139.178.68.195:54204.service: Deactivated successfully.
Feb 13 04:17:25.601538 systemd[1]: session-77.scope: Deactivated successfully.
Feb 13 04:17:25.601947 systemd-logind[1467]: Session 77 logged out. Waiting for processes to exit.
Feb 13 04:17:25.602341 systemd-logind[1467]: Removed session 77.
Feb 13 04:17:30.608919 systemd[1]: Started sshd@92-139.178.90.101:22-139.178.68.195:54210.service.
Feb 13 04:17:30.638904 sshd[6247]: Accepted publickey for core from 139.178.68.195 port 54210 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:30.639746 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:30.642743 systemd-logind[1467]: New session 78 of user core.
Feb 13 04:17:30.643414 systemd[1]: Started session-78.scope.
Feb 13 04:17:30.730277 sshd[6247]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:30.731933 systemd[1]: sshd@92-139.178.90.101:22-139.178.68.195:54210.service: Deactivated successfully.
Feb 13 04:17:30.732424 systemd[1]: session-78.scope: Deactivated successfully.
Feb 13 04:17:30.732891 systemd-logind[1467]: Session 78 logged out. Waiting for processes to exit.
Feb 13 04:17:30.733382 systemd-logind[1467]: Removed session 78.
Feb 13 04:17:32.747899 systemd[1]: Started sshd@93-139.178.90.101:22-2.57.122.87:54410.service.
Feb 13 04:17:33.540297 sshd[6271]: Invalid user hanzhang from 2.57.122.87 port 54410
Feb 13 04:17:33.742987 sshd[6271]: pam_faillock(sshd:auth): User unknown
Feb 13 04:17:33.744099 sshd[6271]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 04:17:33.744193 sshd[6271]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=2.57.122.87
Feb 13 04:17:33.745135 sshd[6271]: pam_faillock(sshd:auth): User unknown
Feb 13 04:17:35.739772 systemd[1]: Started sshd@94-139.178.90.101:22-139.178.68.195:54220.service.
Feb 13 04:17:35.769283 sshd[6276]: Accepted publickey for core from 139.178.68.195 port 54220 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:35.770140 sshd[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:35.773044 systemd-logind[1467]: New session 79 of user core.
Feb 13 04:17:35.773650 systemd[1]: Started session-79.scope.
Feb 13 04:17:35.808524 sshd[6271]: Failed password for invalid user hanzhang from 2.57.122.87 port 54410 ssh2
Feb 13 04:17:35.864125 sshd[6276]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:35.866031 systemd[1]: sshd@94-139.178.90.101:22-139.178.68.195:54220.service: Deactivated successfully.
Feb 13 04:17:35.866630 systemd[1]: session-79.scope: Deactivated successfully.
Feb 13 04:17:35.867164 systemd-logind[1467]: Session 79 logged out. Waiting for processes to exit.
Feb 13 04:17:35.867971 systemd-logind[1467]: Removed session 79.
Feb 13 04:17:37.837050 sshd[6271]: Connection closed by invalid user hanzhang 2.57.122.87 port 54410 [preauth]
Feb 13 04:17:37.839687 systemd[1]: sshd@93-139.178.90.101:22-2.57.122.87:54410.service: Deactivated successfully.
Feb 13 04:17:40.873302 systemd[1]: Started sshd@95-139.178.90.101:22-139.178.68.195:37756.service.
Feb 13 04:17:40.903171 sshd[6301]: Accepted publickey for core from 139.178.68.195 port 37756 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:40.904099 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:40.906811 systemd-logind[1467]: New session 80 of user core.
Feb 13 04:17:40.907570 systemd[1]: Started session-80.scope.
Feb 13 04:17:40.995185 sshd[6301]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:40.996680 systemd[1]: sshd@95-139.178.90.101:22-139.178.68.195:37756.service: Deactivated successfully.
Feb 13 04:17:40.997130 systemd[1]: session-80.scope: Deactivated successfully.
Feb 13 04:17:40.997492 systemd-logind[1467]: Session 80 logged out. Waiting for processes to exit.
Feb 13 04:17:40.998105 systemd-logind[1467]: Removed session 80.
Feb 13 04:17:46.006983 systemd[1]: Started sshd@96-139.178.90.101:22-139.178.68.195:37758.service.
Feb 13 04:17:46.039694 sshd[6327]: Accepted publickey for core from 139.178.68.195 port 37758 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:46.040441 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:46.043060 systemd-logind[1467]: New session 81 of user core.
Feb 13 04:17:46.043706 systemd[1]: Started session-81.scope.
Feb 13 04:17:46.126327 sshd[6327]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:46.127889 systemd[1]: sshd@96-139.178.90.101:22-139.178.68.195:37758.service: Deactivated successfully.
Feb 13 04:17:46.128355 systemd[1]: session-81.scope: Deactivated successfully.
Feb 13 04:17:46.128815 systemd-logind[1467]: Session 81 logged out. Waiting for processes to exit.
Feb 13 04:17:46.129325 systemd-logind[1467]: Removed session 81.
Feb 13 04:17:51.136225 systemd[1]: Started sshd@97-139.178.90.101:22-139.178.68.195:53966.service.
Feb 13 04:17:51.165773 sshd[6352]: Accepted publickey for core from 139.178.68.195 port 53966 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:51.166599 sshd[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:51.169254 systemd-logind[1467]: New session 82 of user core.
Feb 13 04:17:51.169976 systemd[1]: Started session-82.scope.
Feb 13 04:17:51.257388 sshd[6352]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:51.258954 systemd[1]: sshd@97-139.178.90.101:22-139.178.68.195:53966.service: Deactivated successfully.
Feb 13 04:17:51.259417 systemd[1]: session-82.scope: Deactivated successfully.
Feb 13 04:17:51.259830 systemd-logind[1467]: Session 82 logged out. Waiting for processes to exit.
Feb 13 04:17:51.260306 systemd-logind[1467]: Removed session 82.
Feb 13 04:17:56.267339 systemd[1]: Started sshd@98-139.178.90.101:22-139.178.68.195:40872.service.
Feb 13 04:17:56.296414 sshd[6377]: Accepted publickey for core from 139.178.68.195 port 40872 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:17:56.297252 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:17:56.300168 systemd-logind[1467]: New session 83 of user core.
Feb 13 04:17:56.300965 systemd[1]: Started session-83.scope.
Feb 13 04:17:56.387576 sshd[6377]: pam_unix(sshd:session): session closed for user core
Feb 13 04:17:56.389068 systemd[1]: sshd@98-139.178.90.101:22-139.178.68.195:40872.service: Deactivated successfully.
Feb 13 04:17:56.389513 systemd[1]: session-83.scope: Deactivated successfully.
Feb 13 04:17:56.389934 systemd-logind[1467]: Session 83 logged out. Waiting for processes to exit.
Feb 13 04:17:56.390345 systemd-logind[1467]: Removed session 83.
Feb 13 04:17:59.207604 systemd[1]: Started sshd@99-139.178.90.101:22-198.235.24.123:59976.service.
Feb 13 04:18:01.399455 systemd[1]: Started sshd@100-139.178.90.101:22-139.178.68.195:40878.service.
Feb 13 04:18:01.469230 sshd[6405]: Accepted publickey for core from 139.178.68.195 port 40878 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:18:01.470636 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:18:01.475047 systemd-logind[1467]: New session 84 of user core.
Feb 13 04:18:01.476389 systemd[1]: Started session-84.scope.
Feb 13 04:18:01.564959 sshd[6405]: pam_unix(sshd:session): session closed for user core
Feb 13 04:18:01.566519 systemd[1]: sshd@100-139.178.90.101:22-139.178.68.195:40878.service: Deactivated successfully.
Feb 13 04:18:01.566982 systemd[1]: session-84.scope: Deactivated successfully.
Feb 13 04:18:01.567310 systemd-logind[1467]: Session 84 logged out. Waiting for processes to exit.
Feb 13 04:18:01.567972 systemd-logind[1467]: Removed session 84.
Feb 13 04:18:04.064816 sshd[6402]: Connection reset by 198.235.24.123 port 59976 [preauth]
Feb 13 04:18:04.066992 systemd[1]: sshd@99-139.178.90.101:22-198.235.24.123:59976.service: Deactivated successfully.
Feb 13 04:18:06.574585 systemd[1]: Started sshd@101-139.178.90.101:22-139.178.68.195:35948.service.
Feb 13 04:18:06.603608 sshd[6433]: Accepted publickey for core from 139.178.68.195 port 35948 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:18:06.604497 sshd[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:18:06.607635 systemd-logind[1467]: New session 85 of user core.
Feb 13 04:18:06.608333 systemd[1]: Started session-85.scope.
Feb 13 04:18:06.695500 sshd[6433]: pam_unix(sshd:session): session closed for user core
Feb 13 04:18:06.697130 systemd[1]: sshd@101-139.178.90.101:22-139.178.68.195:35948.service: Deactivated successfully.
Feb 13 04:18:06.697641 systemd[1]: session-85.scope: Deactivated successfully.
Feb 13 04:18:06.698138 systemd-logind[1467]: Session 85 logged out. Waiting for processes to exit.
Feb 13 04:18:06.698876 systemd-logind[1467]: Removed session 85.
Feb 13 04:18:11.704899 systemd[1]: Started sshd@102-139.178.90.101:22-139.178.68.195:35952.service.
Feb 13 04:18:11.735084 sshd[6458]: Accepted publickey for core from 139.178.68.195 port 35952 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:18:11.738441 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:18:11.749125 systemd-logind[1467]: New session 86 of user core.
Feb 13 04:18:11.751755 systemd[1]: Started session-86.scope.
Feb 13 04:18:11.857848 sshd[6458]: pam_unix(sshd:session): session closed for user core
Feb 13 04:18:11.859375 systemd[1]: sshd@102-139.178.90.101:22-139.178.68.195:35952.service: Deactivated successfully.
Feb 13 04:18:11.859845 systemd[1]: session-86.scope: Deactivated successfully.
Feb 13 04:18:11.860217 systemd-logind[1467]: Session 86 logged out. Waiting for processes to exit.
Feb 13 04:18:11.860840 systemd-logind[1467]: Removed session 86.
Feb 13 04:18:16.867726 systemd[1]: Started sshd@103-139.178.90.101:22-139.178.68.195:42224.service.
Feb 13 04:18:16.933841 sshd[6484]: Accepted publickey for core from 139.178.68.195 port 42224 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:18:16.935229 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:18:16.939847 systemd-logind[1467]: New session 87 of user core.
Feb 13 04:18:16.940891 systemd[1]: Started session-87.scope.
Feb 13 04:18:17.030666 sshd[6484]: pam_unix(sshd:session): session closed for user core
Feb 13 04:18:17.032137 systemd[1]: sshd@103-139.178.90.101:22-139.178.68.195:42224.service: Deactivated successfully.
Feb 13 04:18:17.032611 systemd[1]: session-87.scope: Deactivated successfully.
Feb 13 04:18:17.033042 systemd-logind[1467]: Session 87 logged out. Waiting for processes to exit.
Feb 13 04:18:17.033601 systemd-logind[1467]: Removed session 87.
Feb 13 04:18:22.040325 systemd[1]: Started sshd@104-139.178.90.101:22-139.178.68.195:42234.service.
Feb 13 04:18:22.069193 sshd[6511]: Accepted publickey for core from 139.178.68.195 port 42234 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:18:22.069998 sshd[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:18:22.072951 systemd-logind[1467]: New session 88 of user core.
Feb 13 04:18:22.073520 systemd[1]: Started session-88.scope.
Feb 13 04:18:22.158109 sshd[6511]: pam_unix(sshd:session): session closed for user core
Feb 13 04:18:22.159729 systemd[1]: sshd@104-139.178.90.101:22-139.178.68.195:42234.service: Deactivated successfully.
Feb 13 04:18:22.160205 systemd[1]: session-88.scope: Deactivated successfully.
Feb 13 04:18:22.160668 systemd-logind[1467]: Session 88 logged out. Waiting for processes to exit.
Feb 13 04:18:22.161234 systemd-logind[1467]: Removed session 88.
Feb 13 04:18:27.161562 systemd[1]: Started sshd@105-139.178.90.101:22-139.178.68.195:34530.service.
Feb 13 04:18:27.192398 sshd[6535]: Accepted publickey for core from 139.178.68.195 port 34530 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:18:27.193195 sshd[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:18:27.195945 systemd-logind[1467]: New session 89 of user core.
Feb 13 04:18:27.196523 systemd[1]: Started session-89.scope.
Feb 13 04:18:27.283424 sshd[6535]: pam_unix(sshd:session): session closed for user core
Feb 13 04:18:27.285174 systemd[1]: sshd@105-139.178.90.101:22-139.178.68.195:34530.service: Deactivated successfully.
Feb 13 04:18:27.285514 systemd[1]: session-89.scope: Deactivated successfully.
Feb 13 04:18:27.285838 systemd-logind[1467]: Session 89 logged out. Waiting for processes to exit.
Feb 13 04:18:27.286440 systemd[1]: Started sshd@106-139.178.90.101:22-139.178.68.195:34534.service.
Feb 13 04:18:27.286961 systemd-logind[1467]: Removed session 89.
Feb 13 04:18:27.315780 sshd[6559]: Accepted publickey for core from 139.178.68.195 port 34534 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc
Feb 13 04:18:27.316465 sshd[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 04:18:27.318563 systemd-logind[1467]: New session 90 of user core.
Feb 13 04:18:27.319062 systemd[1]: Started session-90.scope.
Feb 13 04:18:28.644599 env[1480]: time="2024-02-13T04:18:28.644570505Z" level=info msg="StopContainer for \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\" with timeout 30 (s)"
Feb 13 04:18:28.644896 env[1480]: time="2024-02-13T04:18:28.644830963Z" level=info msg="Stop container \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\" with signal terminated"
Feb 13 04:18:28.650325 systemd[1]: cri-containerd-307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3.scope: Deactivated successfully.
Feb 13 04:18:28.650564 systemd[1]: cri-containerd-307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3.scope: Consumed 3.006s CPU time.
Feb 13 04:18:28.656213 env[1480]: time="2024-02-13T04:18:28.656176868Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 04:18:28.659234 env[1480]: time="2024-02-13T04:18:28.659212058Z" level=info msg="StopContainer for \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\" with timeout 1 (s)"
Feb 13 04:18:28.659345 env[1480]: time="2024-02-13T04:18:28.659333407Z" level=info msg="Stop container \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\" with signal terminated"
Feb 13 04:18:28.659450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3-rootfs.mount: Deactivated successfully.
Feb 13 04:18:28.662413 env[1480]: time="2024-02-13T04:18:28.662372417Z" level=info msg="shim disconnected" id=307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3
Feb 13 04:18:28.662413 env[1480]: time="2024-02-13T04:18:28.662413398Z" level=warning msg="cleaning up after shim disconnected" id=307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3 namespace=k8s.io
Feb 13 04:18:28.662403 systemd-networkd[1312]: lxc_health: Link DOWN
Feb 13 04:18:28.662697 env[1480]: time="2024-02-13T04:18:28.662421300Z" level=info msg="cleaning up dead shim"
Feb 13 04:18:28.662407 systemd-networkd[1312]: lxc_health: Lost carrier
Feb 13 04:18:28.666048 env[1480]: time="2024-02-13T04:18:28.665991140Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6624 runtime=io.containerd.runc.v2\n"
Feb 13 04:18:28.666715 env[1480]: time="2024-02-13T04:18:28.666668058Z" level=info msg="StopContainer for \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\" returns successfully"
Feb 13 04:18:28.667029 env[1480]: time="2024-02-13T04:18:28.666989041Z" level=info msg="StopPodSandbox for \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\""
Feb 13 04:18:28.667029 env[1480]: time="2024-02-13T04:18:28.667022189Z" level=info msg="Container to stop \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 04:18:28.668064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130-shm.mount: Deactivated successfully.
Feb 13 04:18:28.702787 systemd[1]: cri-containerd-eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130.scope: Deactivated successfully.
Feb 13 04:18:28.733509 systemd[1]: cri-containerd-c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789.scope: Deactivated successfully.
Feb 13 04:18:28.734100 systemd[1]: cri-containerd-c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789.scope: Consumed 13.471s CPU time.
Feb 13 04:18:28.747312 env[1480]: time="2024-02-13T04:18:28.747192413Z" level=info msg="shim disconnected" id=eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130
Feb 13 04:18:28.747744 env[1480]: time="2024-02-13T04:18:28.747315045Z" level=warning msg="cleaning up after shim disconnected" id=eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130 namespace=k8s.io
Feb 13 04:18:28.747744 env[1480]: time="2024-02-13T04:18:28.747347324Z" level=info msg="cleaning up dead shim"
Feb 13 04:18:28.751508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130-rootfs.mount: Deactivated successfully.
Feb 13 04:18:28.764540 env[1480]: time="2024-02-13T04:18:28.764352090Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6664 runtime=io.containerd.runc.v2\n"
Feb 13 04:18:28.765338 env[1480]: time="2024-02-13T04:18:28.765230141Z" level=info msg="TearDown network for sandbox \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\" successfully"
Feb 13 04:18:28.765338 env[1480]: time="2024-02-13T04:18:28.765308011Z" level=info msg="StopPodSandbox for \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\" returns successfully"
Feb 13 04:18:28.777082 env[1480]: time="2024-02-13T04:18:28.776984307Z" level=info msg="shim disconnected" id=c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789
Feb 13 04:18:28.777505 env[1480]: time="2024-02-13T04:18:28.777090652Z" level=warning msg="cleaning up after shim disconnected" id=c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789 namespace=k8s.io
Feb 13 04:18:28.777505 env[1480]: time="2024-02-13T04:18:28.777140792Z" level=info msg="cleaning up dead shim"
Feb 13 04:18:28.778501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789-rootfs.mount: Deactivated successfully.
Feb 13 04:18:28.794929 env[1480]: time="2024-02-13T04:18:28.794802713Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6683 runtime=io.containerd.runc.v2\n"
Feb 13 04:18:28.796885 env[1480]: time="2024-02-13T04:18:28.796795543Z" level=info msg="StopContainer for \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\" returns successfully"
Feb 13 04:18:28.797884 env[1480]: time="2024-02-13T04:18:28.797776671Z" level=info msg="StopPodSandbox for \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\""
Feb 13 04:18:28.798138 env[1480]: time="2024-02-13T04:18:28.797944884Z" level=info msg="Container to stop \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 04:18:28.798138 env[1480]: time="2024-02-13T04:18:28.797997704Z" level=info msg="Container to stop \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 04:18:28.798138 env[1480]: time="2024-02-13T04:18:28.798046689Z" level=info msg="Container to stop \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 04:18:28.798138 env[1480]: time="2024-02-13T04:18:28.798080086Z" level=info msg="Container to stop \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 04:18:28.798138 env[1480]: time="2024-02-13T04:18:28.798124163Z" level=info msg="Container to stop \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 04:18:28.812296 kubelet[2585]: I0213 04:18:28.812211 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd663c6-00a6-4e48-9a65-42cb478b7569-cilium-config-path\") pod \"acd663c6-00a6-4e48-9a65-42cb478b7569\" (UID: \"acd663c6-00a6-4e48-9a65-42cb478b7569\") "
Feb 13 04:18:28.812296 kubelet[2585]: I0213 04:18:28.812313 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7xnj\" (UniqueName: \"kubernetes.io/projected/acd663c6-00a6-4e48-9a65-42cb478b7569-kube-api-access-h7xnj\") pod \"acd663c6-00a6-4e48-9a65-42cb478b7569\" (UID: \"acd663c6-00a6-4e48-9a65-42cb478b7569\") "
Feb 13 04:18:28.813363 kubelet[2585]: W0213 04:18:28.812786 2585 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/acd663c6-00a6-4e48-9a65-42cb478b7569/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 13 04:18:28.812695 systemd[1]: cri-containerd-0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca.scope: Deactivated successfully.
Feb 13 04:18:28.817914 kubelet[2585]: I0213 04:18:28.817802 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd663c6-00a6-4e48-9a65-42cb478b7569-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acd663c6-00a6-4e48-9a65-42cb478b7569" (UID: "acd663c6-00a6-4e48-9a65-42cb478b7569"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 04:18:28.819762 kubelet[2585]: I0213 04:18:28.819646 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd663c6-00a6-4e48-9a65-42cb478b7569-kube-api-access-h7xnj" (OuterVolumeSpecName: "kube-api-access-h7xnj") pod "acd663c6-00a6-4e48-9a65-42cb478b7569" (UID: "acd663c6-00a6-4e48-9a65-42cb478b7569"). InnerVolumeSpecName "kube-api-access-h7xnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 04:18:28.850068 env[1480]: time="2024-02-13T04:18:28.849984567Z" level=info msg="shim disconnected" id=0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca
Feb 13 04:18:28.850284 env[1480]: time="2024-02-13T04:18:28.850063209Z" level=warning msg="cleaning up after shim disconnected" id=0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca namespace=k8s.io
Feb 13 04:18:28.850284 env[1480]: time="2024-02-13T04:18:28.850088066Z" level=info msg="cleaning up dead shim"
Feb 13 04:18:28.858883 env[1480]: time="2024-02-13T04:18:28.858811919Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6714 runtime=io.containerd.runc.v2\n"
Feb 13 04:18:28.859224 env[1480]: time="2024-02-13T04:18:28.859165510Z" level=info msg="TearDown network for sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" successfully"
Feb 13 04:18:28.859224 env[1480]: time="2024-02-13T04:18:28.859201511Z" level=info msg="StopPodSandbox for \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" returns successfully"
Feb 13 04:18:28.913890 kubelet[2585]: I0213 04:18:28.913661 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-host-proc-sys-net\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.913890 kubelet[2585]: I0213 04:18:28.913773 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-etc-cni-netd\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.913890 kubelet[2585]: I0213 04:18:28.913794 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.914461 kubelet[2585]: I0213 04:18:28.913842 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-host-proc-sys-kernel\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.914461 kubelet[2585]: I0213 04:18:28.913893 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.914461 kubelet[2585]: I0213 04:18:28.913899 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.914461 kubelet[2585]: I0213 04:18:28.914021 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-xtables-lock\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.914461 kubelet[2585]: I0213 04:18:28.914082 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.915058 kubelet[2585]: I0213 04:18:28.914125 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-run\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.915058 kubelet[2585]: I0213 04:18:28.914221 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.915058 kubelet[2585]: I0213 04:18:28.914242 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clmpv\" (UniqueName: \"kubernetes.io/projected/aae4fc55-28a9-499f-a1e6-1b669e3cc369-kube-api-access-clmpv\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.915058 kubelet[2585]: I0213 04:18:28.914358 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-lib-modules\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.915058 kubelet[2585]: I0213 04:18:28.914459 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.915058 kubelet[2585]: I0213 04:18:28.914500 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-cgroup\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.915744 kubelet[2585]: I0213 04:18:28.914591 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aae4fc55-28a9-499f-a1e6-1b669e3cc369-clustermesh-secrets\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.915744 kubelet[2585]: I0213 04:18:28.914591 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.915744 kubelet[2585]: I0213 04:18:28.914659 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-bpf-maps\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.915744 kubelet[2585]: I0213 04:18:28.914725 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-hostproc\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.915744 kubelet[2585]: I0213 04:18:28.914768 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.915744 kubelet[2585]: I0213 04:18:28.914807 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-config-path\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.916423 kubelet[2585]: I0213 04:18:28.914851 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-hostproc" (OuterVolumeSpecName: "hostproc") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.916423 kubelet[2585]: I0213 04:18:28.914967 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cni-path\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.916423 kubelet[2585]: I0213 04:18:28.915020 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cni-path" (OuterVolumeSpecName: "cni-path") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 04:18:28.916423 kubelet[2585]: I0213 04:18:28.915099 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aae4fc55-28a9-499f-a1e6-1b669e3cc369-hubble-tls\") pod \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\" (UID: \"aae4fc55-28a9-499f-a1e6-1b669e3cc369\") "
Feb 13 04:18:28.916423 kubelet[2585]: I0213 04:18:28.915250 2585 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-h7xnj\" (UniqueName: \"kubernetes.io/projected/acd663c6-00a6-4e48-9a65-42cb478b7569-kube-api-access-h7xnj\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.916423 kubelet[2585]: W0213 04:18:28.915261 2585 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/aae4fc55-28a9-499f-a1e6-1b669e3cc369/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 13 04:18:28.917073 kubelet[2585]: I0213 04:18:28.915317 2585 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-lib-modules\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917073 kubelet[2585]: I0213 04:18:28.915401 2585 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-cgroup\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917073 kubelet[2585]: I0213 04:18:28.915446 2585 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-bpf-maps\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917073 kubelet[2585]: I0213 04:18:28.915477 2585 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-hostproc\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917073 kubelet[2585]: I0213 04:18:28.915506 2585 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cni-path\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917073 kubelet[2585]: I0213 04:18:28.915537 2585 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-host-proc-sys-net\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917073 kubelet[2585]: I0213 04:18:28.915581 2585 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-etc-cni-netd\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917073 kubelet[2585]: I0213 04:18:28.915639 2585 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917934 kubelet[2585]: I0213 04:18:28.915679 2585 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-xtables-lock\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917934 kubelet[2585]: I0213 04:18:28.915711 2585 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd663c6-00a6-4e48-9a65-42cb478b7569-cilium-config-path\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.917934 kubelet[2585]: I0213 04:18:28.915742 2585 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-run\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\""
Feb 13 04:18:28.920509 kubelet[2585]: I0213 04:18:28.920407 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 04:18:28.921198 kubelet[2585]: I0213 04:18:28.921127 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae4fc55-28a9-499f-a1e6-1b669e3cc369-kube-api-access-clmpv" (OuterVolumeSpecName: "kube-api-access-clmpv") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "kube-api-access-clmpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 04:18:28.921898 kubelet[2585]: I0213 04:18:28.921778 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae4fc55-28a9-499f-a1e6-1b669e3cc369-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 04:18:28.922118 kubelet[2585]: I0213 04:18:28.921975 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae4fc55-28a9-499f-a1e6-1b669e3cc369-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aae4fc55-28a9-499f-a1e6-1b669e3cc369" (UID: "aae4fc55-28a9-499f-a1e6-1b669e3cc369"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 04:18:28.993240 systemd[1]: Removed slice kubepods-besteffort-podacd663c6_00a6_4e48_9a65_42cb478b7569.slice. Feb 13 04:18:28.993542 systemd[1]: kubepods-besteffort-podacd663c6_00a6_4e48_9a65_42cb478b7569.slice: Consumed 3.045s CPU time. Feb 13 04:18:28.996297 systemd[1]: Removed slice kubepods-burstable-podaae4fc55_28a9_499f_a1e6_1b669e3cc369.slice. Feb 13 04:18:28.996610 systemd[1]: kubepods-burstable-podaae4fc55_28a9_499f_a1e6_1b669e3cc369.slice: Consumed 13.549s CPU time. Feb 13 04:18:29.016913 kubelet[2585]: I0213 04:18:29.016818 2585 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aae4fc55-28a9-499f-a1e6-1b669e3cc369-hubble-tls\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:29.016913 kubelet[2585]: I0213 04:18:29.016896 2585 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aae4fc55-28a9-499f-a1e6-1b669e3cc369-cilium-config-path\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:29.016913 kubelet[2585]: I0213 04:18:29.016934 2585 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-clmpv\" (UniqueName: \"kubernetes.io/projected/aae4fc55-28a9-499f-a1e6-1b669e3cc369-kube-api-access-clmpv\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:29.017492 kubelet[2585]: I0213 04:18:29.016969 2585 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aae4fc55-28a9-499f-a1e6-1b669e3cc369-clustermesh-secrets\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:29.450556 kubelet[2585]: E0213 04:18:29.450486 2585 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 04:18:29.609214 kubelet[2585]: I0213 04:18:29.609147 2585 scope.go:115] "RemoveContainer" containerID="c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789" Feb 13 04:18:29.611774 env[1480]: time="2024-02-13T04:18:29.611690726Z" level=info msg="RemoveContainer for \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\"" Feb 13 04:18:29.617252 env[1480]: time="2024-02-13T04:18:29.617128707Z" level=info msg="RemoveContainer for \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\" returns successfully" Feb 13 04:18:29.617722 kubelet[2585]: I0213 04:18:29.617635 2585 scope.go:115] "RemoveContainer" containerID="5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578" Feb 13 04:18:29.622851 env[1480]: time="2024-02-13T04:18:29.621995168Z" level=info msg="RemoveContainer for \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\"" Feb 13 04:18:29.627183 env[1480]: time="2024-02-13T04:18:29.627062396Z" level=info msg="RemoveContainer for \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\" returns successfully" Feb 13 04:18:29.627514 kubelet[2585]: I0213 
04:18:29.627462 2585 scope.go:115] "RemoveContainer" containerID="53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07" Feb 13 04:18:29.630156 env[1480]: time="2024-02-13T04:18:29.630079927Z" level=info msg="RemoveContainer for \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\"" Feb 13 04:18:29.634235 env[1480]: time="2024-02-13T04:18:29.634117147Z" level=info msg="RemoveContainer for \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\" returns successfully" Feb 13 04:18:29.634582 kubelet[2585]: I0213 04:18:29.634532 2585 scope.go:115] "RemoveContainer" containerID="2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770" Feb 13 04:18:29.637106 env[1480]: time="2024-02-13T04:18:29.637000229Z" level=info msg="RemoveContainer for \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\"" Feb 13 04:18:29.641184 env[1480]: time="2024-02-13T04:18:29.641091475Z" level=info msg="RemoveContainer for \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\" returns successfully" Feb 13 04:18:29.641496 kubelet[2585]: I0213 04:18:29.641454 2585 scope.go:115] "RemoveContainer" containerID="bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba" Feb 13 04:18:29.643449 env[1480]: time="2024-02-13T04:18:29.643391398Z" level=info msg="RemoveContainer for \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\"" Feb 13 04:18:29.645956 env[1480]: time="2024-02-13T04:18:29.645884529Z" level=info msg="RemoveContainer for \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\" returns successfully" Feb 13 04:18:29.646717 env[1480]: time="2024-02-13T04:18:29.646339228Z" level=error msg="ContainerStatus for \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\": not found" Feb 13 04:18:29.646843 kubelet[2585]: I0213 04:18:29.646103 2585 scope.go:115] "RemoveContainer" containerID="c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789" Feb 13 04:18:29.646843 kubelet[2585]: E0213 04:18:29.646600 2585 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\": not found" containerID="c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789" Feb 13 04:18:29.646843 kubelet[2585]: I0213 04:18:29.646651 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789} err="failed to get container status \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3ef0d24b1be12618db194d5d741be58682535076d3a0754b212d3947a2e6789\": not found" Feb 13 04:18:29.646843 kubelet[2585]: I0213 04:18:29.646668 2585 scope.go:115] "RemoveContainer" containerID="5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578" Feb 13 04:18:29.647174 env[1480]: time="2024-02-13T04:18:29.646950625Z" level=error msg="ContainerStatus for \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\": not found" Feb 13 
04:18:29.647341 kubelet[2585]: E0213 04:18:29.647294 2585 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\": not found" containerID="5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578" Feb 13 04:18:29.647550 kubelet[2585]: I0213 04:18:29.647381 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578} err="failed to get container status \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\": rpc error: code = NotFound desc = an error occurred when try to find container \"5670091f8539bf090ed7bf70f15c50f1d9619acdb1059c0f7d1c478b5b7a1578\": not found" Feb 13 04:18:29.647550 kubelet[2585]: I0213 04:18:29.647417 2585 scope.go:115] "RemoveContainer" containerID="53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07" Feb 13 04:18:29.647786 env[1480]: time="2024-02-13T04:18:29.647698152Z" level=error msg="ContainerStatus for \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\": not found" Feb 13 04:18:29.647974 kubelet[2585]: E0213 04:18:29.647945 2585 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\": not found" containerID="53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07" Feb 13 04:18:29.648101 kubelet[2585]: I0213 04:18:29.648006 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07} err="failed to get container status \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\": rpc error: code = NotFound desc = an error occurred when try to find container \"53fe33c26318084ecbeb0a92fefa716f8be38d6541b5e6e154bef39992568e07\": not found" Feb 13 04:18:29.648101 kubelet[2585]: I0213 04:18:29.648033 2585 scope.go:115] "RemoveContainer" containerID="2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770" Feb 13 04:18:29.648327 env[1480]: time="2024-02-13T04:18:29.648262276Z" level=error msg="ContainerStatus for \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\": not found" Feb 13 04:18:29.648535 kubelet[2585]: E0213 04:18:29.648481 2585 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\": not found" containerID="2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770" Feb 13 04:18:29.648535 kubelet[2585]: I0213 04:18:29.648535 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770} err="failed to get container status \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"2e75c44a366b7bca79be0d86f0b3c0084f5c869c155e0a5bff66a95ee1882770\": not found" Feb 13 04:18:29.648717 kubelet[2585]: I0213 04:18:29.648559 2585 scope.go:115] "RemoveContainer" containerID="bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba" Feb 13 04:18:29.648919 env[1480]: time="2024-02-13T04:18:29.648807146Z" level=error msg="ContainerStatus for \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\": not found" Feb 13 04:18:29.649054 kubelet[2585]: E0213 04:18:29.649026 2585 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\": not found" containerID="bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba" Feb 13 04:18:29.649147 kubelet[2585]: I0213 04:18:29.649064 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba} err="failed to get container status \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf70c53f8bfe7aaa4dff885694e14c4d32924313400fb711a42724fbf9b7e9ba\": not found" Feb 13 04:18:29.649147 kubelet[2585]: I0213 04:18:29.649086 2585 scope.go:115] "RemoveContainer" containerID="307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3" Feb 13 04:18:29.649873 systemd[1]: var-lib-kubelet-pods-acd663c6\x2d00a6\x2d4e48\x2d9a65\x2d42cb478b7569-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh7xnj.mount: Deactivated successfully. Feb 13 04:18:29.650025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca-rootfs.mount: Deactivated successfully. Feb 13 04:18:29.650116 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca-shm.mount: Deactivated successfully. Feb 13 04:18:29.650216 systemd[1]: var-lib-kubelet-pods-aae4fc55\x2d28a9\x2d499f\x2da1e6\x2d1b669e3cc369-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dclmpv.mount: Deactivated successfully. Feb 13 04:18:29.650321 systemd[1]: var-lib-kubelet-pods-aae4fc55\x2d28a9\x2d499f\x2da1e6\x2d1b669e3cc369-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 04:18:29.650407 env[1480]: time="2024-02-13T04:18:29.650342880Z" level=info msg="RemoveContainer for \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\"" Feb 13 04:18:29.650424 systemd[1]: var-lib-kubelet-pods-aae4fc55\x2d28a9\x2d499f\x2da1e6\x2d1b669e3cc369-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 04:18:29.652357 env[1480]: time="2024-02-13T04:18:29.652300700Z" level=info msg="RemoveContainer for \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\" returns successfully" Feb 13 04:18:29.652474 kubelet[2585]: I0213 04:18:29.652456 2585 scope.go:115] "RemoveContainer" containerID="307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3" Feb 13 04:18:29.652735 env[1480]: time="2024-02-13T04:18:29.652649791Z" level=error msg="ContainerStatus for \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\": not found" Feb 13 04:18:29.652831 kubelet[2585]: E0213 04:18:29.652802 2585 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\": not found" containerID="307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3" Feb 13 04:18:29.652897 kubelet[2585]: I0213 04:18:29.652834 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3} err="failed to get container status \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\": rpc error: code = NotFound desc = an error occurred when try to find container \"307a6e86aa8fff4485cac9b4fc11de5f1ea60fc41c93a3360f7b09285b303fb3\": not found" Feb 13 04:18:30.596156 sshd[6559]: pam_unix(sshd:session): session closed for user core Feb 13 04:18:30.598058 systemd[1]: sshd@106-139.178.90.101:22-139.178.68.195:34534.service: Deactivated successfully. Feb 13 04:18:30.598487 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 04:18:30.598869 systemd-logind[1467]: Session 90 logged out. Waiting for processes to exit. Feb 13 04:18:30.599562 systemd[1]: Started sshd@107-139.178.90.101:22-139.178.68.195:34538.service. Feb 13 04:18:30.599998 systemd-logind[1467]: Removed session 90. Feb 13 04:18:30.628570 sshd[6731]: Accepted publickey for core from 139.178.68.195 port 34538 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:18:30.629338 sshd[6731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:18:30.631825 systemd-logind[1467]: New session 91 of user core. Feb 13 04:18:30.632397 systemd[1]: Started session-91.scope. Feb 13 04:18:30.981123 kubelet[2585]: I0213 04:18:30.980957 2585 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=aae4fc55-28a9-499f-a1e6-1b669e3cc369 path="/var/lib/kubelet/pods/aae4fc55-28a9-499f-a1e6-1b669e3cc369/volumes" Feb 13 04:18:30.983060 kubelet[2585]: I0213 04:18:30.982981 2585 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=acd663c6-00a6-4e48-9a65-42cb478b7569 path="/var/lib/kubelet/pods/acd663c6-00a6-4e48-9a65-42cb478b7569/volumes" Feb 13 04:18:31.026563 sshd[6731]: pam_unix(sshd:session): session closed for user core Feb 13 04:18:31.036289 systemd[1]: sshd@107-139.178.90.101:22-139.178.68.195:34538.service: Deactivated successfully. Feb 13 04:18:31.038822 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 04:18:31.041121 systemd-logind[1467]: Session 91 logged out. Waiting for processes to exit. Feb 13 04:18:31.045852 systemd[1]: Started sshd@108-139.178.90.101:22-139.178.68.195:34544.service. 
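Between the sshd records, kubelet_volumes.go reports the orphaned-pod sweep: the "Cleaned up orphaned pod volumes dir" lines are only emitted once every per-volume subdirectory under /var/lib/kubelet/pods/<uid>/volumes has been unmounted and removed. A small illustrative sketch of inspecting what would block that sweep, with the pod UID copied from the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // kubelet removes this directory (and logs the cleanup) only after
        // every per-volume subdirectory beneath it has been torn down.
        dir := "/var/lib/kubelet/pods/aae4fc55-28a9-499f-a1e6-1b669e3cc369/volumes"

        entries, err := os.ReadDir(dir)
        if os.IsNotExist(err) {
            fmt.Println("volumes dir already removed; cleanup finished")
            return
        }
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            // Anything still listed (e.g. kubernetes.io~secret/...) blocks cleanup.
            fmt.Println("still present:", filepath.Join(dir, e.Name()))
        }
    }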
Feb 13 04:18:31.046206 kubelet[2585]: I0213 04:18:31.046034 2585 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:18:31.046206 kubelet[2585]: E0213 04:18:31.046160 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aae4fc55-28a9-499f-a1e6-1b669e3cc369" containerName="clean-cilium-state" Feb 13 04:18:31.046206 kubelet[2585]: E0213 04:18:31.046205 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aae4fc55-28a9-499f-a1e6-1b669e3cc369" containerName="cilium-agent" Feb 13 04:18:31.046588 kubelet[2585]: E0213 04:18:31.046237 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="acd663c6-00a6-4e48-9a65-42cb478b7569" containerName="cilium-operator" Feb 13 04:18:31.046588 kubelet[2585]: E0213 04:18:31.046265 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aae4fc55-28a9-499f-a1e6-1b669e3cc369" containerName="mount-cgroup" Feb 13 04:18:31.046588 kubelet[2585]: E0213 04:18:31.046291 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aae4fc55-28a9-499f-a1e6-1b669e3cc369" containerName="apply-sysctl-overwrites" Feb 13 04:18:31.046588 kubelet[2585]: E0213 04:18:31.046316 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aae4fc55-28a9-499f-a1e6-1b669e3cc369" containerName="mount-bpf-fs" Feb 13 04:18:31.046588 kubelet[2585]: I0213 04:18:31.046364 2585 memory_manager.go:346] "RemoveStaleState removing state" podUID="aae4fc55-28a9-499f-a1e6-1b669e3cc369" containerName="cilium-agent" Feb 13 04:18:31.046588 kubelet[2585]: I0213 04:18:31.046396 2585 memory_manager.go:346] "RemoveStaleState removing state" podUID="acd663c6-00a6-4e48-9a65-42cb478b7569" containerName="cilium-operator" Feb 13 04:18:31.049324 systemd-logind[1467]: Removed session 91. Feb 13 04:18:31.058219 systemd[1]: Created slice kubepods-burstable-podc3071985_b3f7_4369_9b51_29c47a0eaf00.slice. Feb 13 04:18:31.091430 sshd[6754]: Accepted publickey for core from 139.178.68.195 port 34544 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:18:31.092208 sshd[6754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:18:31.094599 systemd-logind[1467]: New session 92 of user core. Feb 13 04:18:31.095093 systemd[1]: Started session-92.scope. 
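The "Created slice kubepods-burstable-podc3071985_b3f7_4369_9b51_29c47a0eaf00.slice" record above shows the systemd cgroup driver's naming rule: dashes in the pod UID are swapped for underscores (a dash is systemd's slice-hierarchy separator) and the result is nested under the QoS-class slice. A short sketch of the convention, using the UID from this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod mirrors the kubelet systemd cgroup driver's naming: the pod
    // UID has its dashes replaced with underscores (dashes separate slice
    // hierarchy levels in systemd) and is nested under the QoS-class slice.
    func sliceForPod(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // Prints kubepods-burstable-podc3071985_b3f7_4369_9b51_29c47a0eaf00.slice,
        // matching the "Created slice" record above.
        fmt.Println(sliceForPod("burstable", "c3071985-b3f7-4369-9b51-29c47a0eaf00"))
    }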
Feb 13 04:18:31.132291 kubelet[2585]: I0213 04:18:31.132169 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcg7c\" (UniqueName: \"kubernetes.io/projected/c3071985-b3f7-4369-9b51-29c47a0eaf00-kube-api-access-jcg7c\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.132672 kubelet[2585]: I0213 04:18:31.132584 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c3071985-b3f7-4369-9b51-29c47a0eaf00-hubble-tls\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.133029 kubelet[2585]: I0213 04:18:31.132941 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-etc-cni-netd\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.133311 kubelet[2585]: I0213 04:18:31.133164 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cni-path\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.133542 kubelet[2585]: I0213 04:18:31.133330 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c3071985-b3f7-4369-9b51-29c47a0eaf00-clustermesh-secrets\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.133542 kubelet[2585]: I0213 04:18:31.133505 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-ipsec-secrets\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.133899 kubelet[2585]: I0213 04:18:31.133655 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-run\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.133899 kubelet[2585]: I0213 04:18:31.133794 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-bpf-maps\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.134262 kubelet[2585]: I0213 04:18:31.133937 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-hostproc\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.134262 kubelet[2585]: I0213 04:18:31.134099 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-cgroup\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.134638 kubelet[2585]: I0213 04:18:31.134272 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-host-proc-sys-net\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.134638 kubelet[2585]: I0213 04:18:31.134448 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-xtables-lock\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.134638 kubelet[2585]: I0213 04:18:31.134596 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-host-proc-sys-kernel\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.135143 kubelet[2585]: I0213 04:18:31.134729 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-lib-modules\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.135143 kubelet[2585]: I0213 04:18:31.134867 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-config-path\") pod \"cilium-mh5fq\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " pod="kube-system/cilium-mh5fq" Feb 13 04:18:31.240178 sshd[6754]: pam_unix(sshd:session): session closed for user core Feb 13 04:18:31.245872 systemd[1]: sshd@108-139.178.90.101:22-139.178.68.195:34544.service: Deactivated successfully. Feb 13 04:18:31.246632 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 04:18:31.249447 systemd-logind[1467]: Session 92 logged out. Waiting for processes to exit. Feb 13 04:18:31.250243 systemd[1]: Started sshd@109-139.178.90.101:22-139.178.68.195:34550.service. Feb 13 04:18:31.250875 systemd-logind[1467]: Removed session 92. Feb 13 04:18:31.307813 sshd[6784]: Accepted publickey for core from 139.178.68.195 port 34550 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:18:31.309211 sshd[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:18:31.313143 systemd-logind[1467]: New session 93 of user core. Feb 13 04:18:31.314488 systemd[1]: Started session-93.scope. Feb 13 04:18:31.363006 env[1480]: time="2024-02-13T04:18:31.362874096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mh5fq,Uid:c3071985-b3f7-4369-9b51-29c47a0eaf00,Namespace:kube-system,Attempt:0,}" Feb 13 04:18:31.384671 env[1480]: time="2024-02-13T04:18:31.384513184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 04:18:31.384671 env[1480]: time="2024-02-13T04:18:31.384621820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 04:18:31.385003 env[1480]: time="2024-02-13T04:18:31.384661727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 04:18:31.385221 env[1480]: time="2024-02-13T04:18:31.385049251Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a pid=6799 runtime=io.containerd.runc.v2 Feb 13 04:18:31.409328 systemd[1]: Started cri-containerd-615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a.scope. Feb 13 04:18:31.432920 env[1480]: time="2024-02-13T04:18:31.432764028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mh5fq,Uid:c3071985-b3f7-4369-9b51-29c47a0eaf00,Namespace:kube-system,Attempt:0,} returns sandbox id \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\"" Feb 13 04:18:31.435178 env[1480]: time="2024-02-13T04:18:31.435149433Z" level=info msg="CreateContainer within sandbox \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 04:18:31.441505 env[1480]: time="2024-02-13T04:18:31.441453380Z" level=info msg="CreateContainer within sandbox \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2\"" Feb 13 04:18:31.441801 env[1480]: time="2024-02-13T04:18:31.441776470Z" level=info msg="StartContainer for \"1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2\"" Feb 13 04:18:31.452953 systemd[1]: Started cri-containerd-1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2.scope. Feb 13 04:18:31.460115 systemd[1]: cri-containerd-1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2.scope: Deactivated successfully. Feb 13 04:18:31.460356 systemd[1]: Stopped cri-containerd-1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2.scope. 
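The "starting signal loop namespace=k8s.io pid=6799" records are the runc v2 shim coming up for the new sandbox; the Stopped/Deactivated scope records immediately afterwards show its task exiting almost at once. CRI-managed containers can be inspected out-of-band with the containerd Go client in the same k8s.io namespace; a sketch using the sandbox ID from this log (it only works while the container record still exists and is not yet garbage-collected):

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" namespace, matching the
        // "starting signal loop namespace=k8s.io" shim record above.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        c, err := client.LoadContainer(ctx,
            "615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a")
        if err != nil {
            panic(err) // record already garbage-collected
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            panic(err) // no live task: the shim has exited, as in the log
        }
        st, err := task.Status(ctx)
        if err != nil {
            panic(err)
        }
        fmt.Printf("pid=%d status=%s\n", task.Pid(), st.Status)
    }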
Feb 13 04:18:31.468854 env[1480]: time="2024-02-13T04:18:31.468821951Z" level=info msg="shim disconnected" id=1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2 Feb 13 04:18:31.468951 env[1480]: time="2024-02-13T04:18:31.468855075Z" level=warning msg="cleaning up after shim disconnected" id=1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2 namespace=k8s.io Feb 13 04:18:31.468951 env[1480]: time="2024-02-13T04:18:31.468862806Z" level=info msg="cleaning up dead shim" Feb 13 04:18:31.473056 env[1480]: time="2024-02-13T04:18:31.473032842Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6868 runtime=io.containerd.runc.v2\ntime=\"2024-02-13T04:18:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 13 04:18:31.473284 env[1480]: time="2024-02-13T04:18:31.473186536Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Feb 13 04:18:31.473364 env[1480]: time="2024-02-13T04:18:31.473337825Z" level=error msg="Failed to pipe stdout of container \"1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2\"" error="reading from a closed fifo" Feb 13 04:18:31.473412 env[1480]: time="2024-02-13T04:18:31.473366593Z" level=error msg="Failed to pipe stderr of container \"1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2\"" error="reading from a closed fifo" Feb 13 04:18:31.474108 env[1480]: time="2024-02-13T04:18:31.474058986Z" level=error msg="StartContainer for \"1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 13 04:18:31.474199 kubelet[2585]: E0213 04:18:31.474185 2585 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2" Feb 13 04:18:31.474309 kubelet[2585]: E0213 04:18:31.474268 2585 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 13 04:18:31.474309 kubelet[2585]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 13 04:18:31.474309 kubelet[2585]: rm /hostbin/cilium-mount Feb 13 04:18:31.474309 kubelet[2585]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jcg7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-mh5fq_kube-system(c3071985-b3f7-4369-9b51-29c47a0eaf00): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 13 04:18:31.474494 kubelet[2585]: E0213 04:18:31.474300 2585 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mh5fq" podUID=c3071985-b3f7-4369-9b51-29c47a0eaf00 Feb 13 04:18:31.575140 kubelet[2585]: I0213 04:18:31.575073 2585 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-fff065a016" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-13 04:18:31.574951197 +0000 UTC m=+1632.670437837 LastTransitionTime:2024-02-13 04:18:31.574951197 +0000 UTC m=+1632.670437837 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 13 04:18:31.620098 env[1480]: time="2024-02-13T04:18:31.620005361Z" level=info msg="StopPodSandbox for \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\"" Feb 13 04:18:31.620494 env[1480]: time="2024-02-13T04:18:31.620162688Z" level=info msg="Container to stop \"1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 04:18:31.634349 systemd[1]: cri-containerd-615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a.scope: Deactivated successfully. 
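The root cause of the failure above is the runc line "write /proc/self/attr/keycreate: invalid argument": the mount-cgroup init container's SecurityContext carries SELinuxOptions (Type:spc_t, Level:s0), so runc tries to set the process key-creation label during container init, and the kernel rejects the write with EINVAL when SELinux is disabled or the requested context is unknown to the loaded policy. A hedged, stand-alone repro of just that write follows; the label string mirrors the spec dumped above, and whether you actually get EINVAL depends on the host's SELinux state:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        // runc performs an equivalent write while initializing a container whose
        // spec carries SELinuxOptions. Treat this as a diagnostic, not a fix.
        f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        _, err = f.Write([]byte("system_u:system_r:spc_t:s0"))
        switch {
        case errors.Is(err, syscall.EINVAL):
            // The same EINVAL runc reports as "invalid argument" above:
            // SELinux is off, or the context is not defined in the policy.
            fmt.Println("EINVAL: kernel rejected the SELinux context")
        case err != nil:
            fmt.Println("write failed:", err)
        default:
            fmt.Println("context accepted; SELinux is active and knows this label")
        }
    }

This dependence on host SELinux state is why the same pod spec can start cleanly on one distribution and CrashLoop with this exact error on another.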
Feb 13 04:18:31.669538 env[1480]: time="2024-02-13T04:18:31.669450655Z" level=info msg="shim disconnected" id=615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a Feb 13 04:18:31.669538 env[1480]: time="2024-02-13T04:18:31.669531797Z" level=warning msg="cleaning up after shim disconnected" id=615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a namespace=k8s.io Feb 13 04:18:31.669931 env[1480]: time="2024-02-13T04:18:31.669556157Z" level=info msg="cleaning up dead shim" Feb 13 04:18:31.681608 env[1480]: time="2024-02-13T04:18:31.681495692Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6898 runtime=io.containerd.runc.v2\n" Feb 13 04:18:31.682107 env[1480]: time="2024-02-13T04:18:31.682014440Z" level=info msg="TearDown network for sandbox \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\" successfully" Feb 13 04:18:31.682107 env[1480]: time="2024-02-13T04:18:31.682057537Z" level=info msg="StopPodSandbox for \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\" returns successfully" Feb 13 04:18:31.738668 kubelet[2585]: I0213 04:18:31.738551 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcg7c\" (UniqueName: \"kubernetes.io/projected/c3071985-b3f7-4369-9b51-29c47a0eaf00-kube-api-access-jcg7c\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.738668 kubelet[2585]: I0213 04:18:31.738660 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-ipsec-secrets\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.739408 kubelet[2585]: I0213 04:18:31.738723 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-bpf-maps\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.739408 kubelet[2585]: I0213 04:18:31.738786 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-lib-modules\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.739408 kubelet[2585]: I0213 04:18:31.738848 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cni-path\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.739408 kubelet[2585]: I0213 04:18:31.738889 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.739408 kubelet[2585]: I0213 04:18:31.738955 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c3071985-b3f7-4369-9b51-29c47a0eaf00-hubble-tls\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.739408 kubelet[2585]: I0213 04:18:31.738942 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.740104 kubelet[2585]: I0213 04:18:31.739006 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cni-path" (OuterVolumeSpecName: "cni-path") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.740104 kubelet[2585]: I0213 04:18:31.739067 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c3071985-b3f7-4369-9b51-29c47a0eaf00-clustermesh-secrets\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.740104 kubelet[2585]: I0213 04:18:31.739176 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-cgroup\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.740104 kubelet[2585]: I0213 04:18:31.739283 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-host-proc-sys-net\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.740104 kubelet[2585]: I0213 04:18:31.739250 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.740654 kubelet[2585]: I0213 04:18:31.739429 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-config-path\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.740654 kubelet[2585]: I0213 04:18:31.739407 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.740654 kubelet[2585]: I0213 04:18:31.739549 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-host-proc-sys-kernel\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.740654 kubelet[2585]: I0213 04:18:31.739651 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-etc-cni-netd\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.740654 kubelet[2585]: I0213 04:18:31.739626 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.740654 kubelet[2585]: W0213 04:18:31.739707 2585 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c3071985-b3f7-4369-9b51-29c47a0eaf00/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 04:18:31.741267 kubelet[2585]: I0213 04:18:31.739749 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.741267 kubelet[2585]: I0213 04:18:31.739754 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-run\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.741267 kubelet[2585]: I0213 04:18:31.739807 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.741267 kubelet[2585]: I0213 04:18:31.739880 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-hostproc\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.741267 kubelet[2585]: I0213 04:18:31.739945 2585 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-xtables-lock\") pod \"c3071985-b3f7-4369-9b51-29c47a0eaf00\" (UID: \"c3071985-b3f7-4369-9b51-29c47a0eaf00\") " Feb 13 04:18:31.741790 kubelet[2585]: I0213 04:18:31.739980 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-hostproc" (OuterVolumeSpecName: "hostproc") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.741790 kubelet[2585]: I0213 04:18:31.740040 2585 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-bpf-maps\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.741790 kubelet[2585]: I0213 04:18:31.740041 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.741790 kubelet[2585]: I0213 04:18:31.740079 2585 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-lib-modules\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.741790 kubelet[2585]: I0213 04:18:31.740113 2585 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cni-path\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.741790 kubelet[2585]: I0213 04:18:31.740145 2585 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-cgroup\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.741790 kubelet[2585]: I0213 04:18:31.740178 2585 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-host-proc-sys-net\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.742571 kubelet[2585]: I0213 04:18:31.740210 2585 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.742571 kubelet[2585]: I0213 04:18:31.740241 2585 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-etc-cni-netd\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.742571 kubelet[2585]: I0213 04:18:31.740272 2585 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-run\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.745160 kubelet[2585]: I0213 04:18:31.745059 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 04:18:31.745646 kubelet[2585]: I0213 04:18:31.745552 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 04:18:31.745858 kubelet[2585]: I0213 04:18:31.745695 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3071985-b3f7-4369-9b51-29c47a0eaf00-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 04:18:31.746111 kubelet[2585]: I0213 04:18:31.745997 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3071985-b3f7-4369-9b51-29c47a0eaf00-kube-api-access-jcg7c" (OuterVolumeSpecName: "kube-api-access-jcg7c") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "kube-api-access-jcg7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 04:18:31.746401 kubelet[2585]: I0213 04:18:31.746325 2585 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3071985-b3f7-4369-9b51-29c47a0eaf00-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c3071985-b3f7-4369-9b51-29c47a0eaf00" (UID: "c3071985-b3f7-4369-9b51-29c47a0eaf00"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 04:18:31.841474 kubelet[2585]: I0213 04:18:31.841232 2585 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-config-path\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.841474 kubelet[2585]: I0213 04:18:31.841313 2585 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-hostproc\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.841474 kubelet[2585]: I0213 04:18:31.841349 2585 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3071985-b3f7-4369-9b51-29c47a0eaf00-xtables-lock\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.841474 kubelet[2585]: I0213 04:18:31.841401 2585 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c3071985-b3f7-4369-9b51-29c47a0eaf00-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.841474 kubelet[2585]: I0213 04:18:31.841436 2585 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-jcg7c\" (UniqueName: \"kubernetes.io/projected/c3071985-b3f7-4369-9b51-29c47a0eaf00-kube-api-access-jcg7c\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.841474 kubelet[2585]: I0213 04:18:31.841469 2585 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c3071985-b3f7-4369-9b51-29c47a0eaf00-clustermesh-secrets\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:31.842346 kubelet[2585]: I0213 04:18:31.841501 2585 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c3071985-b3f7-4369-9b51-29c47a0eaf00-hubble-tls\") on node \"ci-3510.3.2-a-fff065a016\" DevicePath \"\"" Feb 13 04:18:32.241268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a-rootfs.mount: Deactivated successfully. Feb 13 04:18:32.241319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a-shm.mount: Deactivated successfully. 
Feb 13 04:18:32.241399 systemd[1]: var-lib-kubelet-pods-c3071985\x2db3f7\x2d4369\x2d9b51\x2d29c47a0eaf00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djcg7c.mount: Deactivated successfully.
Feb 13 04:18:32.241450 systemd[1]: var-lib-kubelet-pods-c3071985\x2db3f7\x2d4369\x2d9b51\x2d29c47a0eaf00-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 04:18:32.241481 systemd[1]: var-lib-kubelet-pods-c3071985\x2db3f7\x2d4369\x2d9b51\x2d29c47a0eaf00-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 04:18:32.241513 systemd[1]: var-lib-kubelet-pods-c3071985\x2db3f7\x2d4369\x2d9b51\x2d29c47a0eaf00-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 13 04:18:32.624936 kubelet[2585]: I0213 04:18:32.624841 2585 scope.go:115] "RemoveContainer" containerID="1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2"
Feb 13 04:18:32.627544 env[1480]: time="2024-02-13T04:18:32.627436360Z" level=info msg="RemoveContainer for \"1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2\""
Feb 13 04:18:32.631480 env[1480]: time="2024-02-13T04:18:32.631389226Z" level=info msg="RemoveContainer for \"1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2\" returns successfully"
Feb 13 04:18:32.634219 systemd[1]: Removed slice kubepods-burstable-podc3071985_b3f7_4369_9b51_29c47a0eaf00.slice.
Feb 13 04:18:32.668381 kubelet[2585]: I0213 04:18:32.668355 2585 topology_manager.go:210] "Topology Admit Handler"
Feb 13 04:18:32.668515 kubelet[2585]: E0213 04:18:32.668401 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c3071985-b3f7-4369-9b51-29c47a0eaf00" containerName="mount-cgroup"
Feb 13 04:18:32.668515 kubelet[2585]: I0213 04:18:32.668423 2585 memory_manager.go:346] "RemoveStaleState removing state" podUID="c3071985-b3f7-4369-9b51-29c47a0eaf00" containerName="mount-cgroup"
Feb 13 04:18:32.671270 systemd[1]: Created slice kubepods-burstable-poddd8bcb42_7f4f_4442_8dc0_226e88004fcb.slice.
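The mount-unit names above (var-lib-kubelet-pods-c3071985\x2db3f7\x2d...) are systemd's unit-name escaping of the per-pod volume paths under /var/lib/kubelet/pods: "/" separators become "-", and bytes outside a small safe set are hex-escaped as \xXX, which is why every literal "-" in the pod UID shows up as \x2d and the "~" in kubernetes.io~projected as \x7e. A rough sketch of that encoding follows; it is simplified (the real rules also special-case leading dots, ".", and the root path), and `systemd-escape --path` is the authoritative implementation.

```go
package main

import "fmt"

// escapePath approximates systemd's path escaping for unit names:
// drop the leading "/", replace remaining "/" separators with "-",
// and hex-escape any byte outside [A-Za-z0-9:_.] as \xXX.
func escapePath(p string) string {
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			if i > 0 { // skip the leading slash
				out = append(out, '-')
			}
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out)
}

func main() {
	p := "/var/lib/kubelet/pods/c3071985-b3f7-4369-9b51-29c47a0eaf00/volumes/kubernetes.io~projected/hubble-tls"
	fmt.Println(escapePath(p) + ".mount")
	// Matches the second unit name in the log block above.
}
```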
Feb 13 04:18:32.747765 kubelet[2585]: I0213 04:18:32.747686 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-hubble-tls\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748067 kubelet[2585]: I0213 04:18:32.747893 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-cni-path\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748067 kubelet[2585]: I0213 04:18:32.747988 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-cilium-ipsec-secrets\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748067 kubelet[2585]: I0213 04:18:32.748045 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-etc-cni-netd\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748450 kubelet[2585]: I0213 04:18:32.748095 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-xtables-lock\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748450 kubelet[2585]: I0213 04:18:32.748208 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-cilium-cgroup\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748450 kubelet[2585]: I0213 04:18:32.748350 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-cilium-config-path\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748450 kubelet[2585]: I0213 04:18:32.748424 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-clustermesh-secrets\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748890 kubelet[2585]: I0213 04:18:32.748476 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-host-proc-sys-kernel\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748890 kubelet[2585]: I0213 04:18:32.748529 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-bpf-maps\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748890 kubelet[2585]: I0213 04:18:32.748577 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-hostproc\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748890 kubelet[2585]: I0213 04:18:32.748683 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-lib-modules\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748890 kubelet[2585]: I0213 04:18:32.748747 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-host-proc-sys-net\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.748890 kubelet[2585]: I0213 04:18:32.748800 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5m2v\" (UniqueName: \"kubernetes.io/projected/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-kube-api-access-h5m2v\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.749498 kubelet[2585]: I0213 04:18:32.748902 2585 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd8bcb42-7f4f-4442-8dc0-226e88004fcb-cilium-run\") pod \"cilium-t59w7\" (UID: \"dd8bcb42-7f4f-4442-8dc0-226e88004fcb\") " pod="kube-system/cilium-t59w7"
Feb 13 04:18:32.980790 kubelet[2585]: I0213 04:18:32.980584 2585 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c3071985-b3f7-4369-9b51-29c47a0eaf00 path="/var/lib/kubelet/pods/c3071985-b3f7-4369-9b51-29c47a0eaf00/volumes"
Feb 13 04:18:33.274026 env[1480]: time="2024-02-13T04:18:33.273794585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t59w7,Uid:dd8bcb42-7f4f-4442-8dc0-226e88004fcb,Namespace:kube-system,Attempt:0,}"
Feb 13 04:18:33.287728 env[1480]: time="2024-02-13T04:18:33.287665938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 04:18:33.287728 env[1480]: time="2024-02-13T04:18:33.287687386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 04:18:33.287728 env[1480]: time="2024-02-13T04:18:33.287694015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 04:18:33.287870 env[1480]: time="2024-02-13T04:18:33.287752272Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f pid=6924 runtime=io.containerd.runc.v2
Feb 13 04:18:33.296432 systemd[1]: Started cri-containerd-1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f.scope.
Feb 13 04:18:33.306179 env[1480]: time="2024-02-13T04:18:33.306125325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t59w7,Uid:dd8bcb42-7f4f-4442-8dc0-226e88004fcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\""
Feb 13 04:18:33.307349 env[1480]: time="2024-02-13T04:18:33.307334482Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 04:18:33.312869 env[1480]: time="2024-02-13T04:18:33.312819600Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649\""
Feb 13 04:18:33.313094 env[1480]: time="2024-02-13T04:18:33.313048654Z" level=info msg="StartContainer for \"51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649\""
Feb 13 04:18:33.320728 systemd[1]: Started cri-containerd-51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649.scope.
Feb 13 04:18:33.333768 env[1480]: time="2024-02-13T04:18:33.333714650Z" level=info msg="StartContainer for \"51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649\" returns successfully"
Feb 13 04:18:33.338708 systemd[1]: cri-containerd-51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649.scope: Deactivated successfully.
Feb 13 04:18:33.374944 env[1480]: time="2024-02-13T04:18:33.374869409Z" level=info msg="shim disconnected" id=51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649
Feb 13 04:18:33.374944 env[1480]: time="2024-02-13T04:18:33.374910887Z" level=warning msg="cleaning up after shim disconnected" id=51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649 namespace=k8s.io
Feb 13 04:18:33.374944 env[1480]: time="2024-02-13T04:18:33.374921456Z" level=info msg="cleaning up dead shim"
Feb 13 04:18:33.381078 env[1480]: time="2024-02-13T04:18:33.381042984Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7007 runtime=io.containerd.runc.v2\n"
Feb 13 04:18:33.635859 env[1480]: time="2024-02-13T04:18:33.635747604Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 04:18:33.650058 env[1480]: time="2024-02-13T04:18:33.649962780Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e5192b19c4e8ab7c9a32a590493b2e119004eedf02ac15e746da24dfd338e0ee\""
Feb 13 04:18:33.650906 env[1480]: time="2024-02-13T04:18:33.650825447Z" level=info msg="StartContainer for \"e5192b19c4e8ab7c9a32a590493b2e119004eedf02ac15e746da24dfd338e0ee\""
Feb 13 04:18:33.674588 systemd[1]: Started cri-containerd-e5192b19c4e8ab7c9a32a590493b2e119004eedf02ac15e746da24dfd338e0ee.scope.
Feb 13 04:18:33.703732 env[1480]: time="2024-02-13T04:18:33.703682154Z" level=info msg="StartContainer for \"e5192b19c4e8ab7c9a32a590493b2e119004eedf02ac15e746da24dfd338e0ee\" returns successfully"
Feb 13 04:18:33.713505 systemd[1]: cri-containerd-e5192b19c4e8ab7c9a32a590493b2e119004eedf02ac15e746da24dfd338e0ee.scope: Deactivated successfully.
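Each init container in this sequence follows the same run-to-completion shape: CreateContainer, StartContainer, the process exits, its systemd scope deactivates, and containerd reaps the runc shim ("shim disconnected", "cleaning up dead shim"). That exit can be observed from the containerd Go client roughly as below, assuming the "k8s.io" namespace the CRI plugin uses (visible in the namespace=k8s.io fields above) and the mount-cgroup container id from the log.

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// The CRI plugin keeps its containers in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// mount-cgroup container id from the log. Once cleanup has run, this
	// lookup fails with "not found" -- the same condition behind the
	// kubelet watch-event warnings later in the log.
	const id = "51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649"

	c, err := client.LoadContainer(ctx, id)
	if err != nil {
		panic(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		panic(err)
	}

	// Wait returns a channel that fires when the task's init process
	// exits -- the moment the scope deactivates and the shim disconnects.
	statusC, err := task.Wait(ctx)
	if err != nil {
		panic(err)
	}
	status := <-statusC
	code, exitedAt, _ := status.Result()
	fmt.Printf("container %s exited %d at %s\n", id, code, exitedAt)
}
```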
Feb 13 04:18:33.736121 env[1480]: time="2024-02-13T04:18:33.736050244Z" level=info msg="shim disconnected" id=e5192b19c4e8ab7c9a32a590493b2e119004eedf02ac15e746da24dfd338e0ee
Feb 13 04:18:33.736121 env[1480]: time="2024-02-13T04:18:33.736116953Z" level=warning msg="cleaning up after shim disconnected" id=e5192b19c4e8ab7c9a32a590493b2e119004eedf02ac15e746da24dfd338e0ee namespace=k8s.io
Feb 13 04:18:33.736422 env[1480]: time="2024-02-13T04:18:33.736133006Z" level=info msg="cleaning up dead shim"
Feb 13 04:18:33.744942 env[1480]: time="2024-02-13T04:18:33.744869336Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7069 runtime=io.containerd.runc.v2\n"
Feb 13 04:18:34.452151 kubelet[2585]: E0213 04:18:34.452091 2585 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 04:18:34.575505 kubelet[2585]: W0213 04:18:34.575394 2585 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3071985_b3f7_4369_9b51_29c47a0eaf00.slice/cri-containerd-1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2.scope WatchSource:0}: container "1c51041b87d78ec5438990691a096375d49eaa1ead82bf64c4c6d5879ee6f1e2" in namespace "k8s.io": not found
Feb 13 04:18:34.643902 env[1480]: time="2024-02-13T04:18:34.643744078Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 04:18:34.662221 env[1480]: time="2024-02-13T04:18:34.662097971Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4\""
Feb 13 04:18:34.663056 env[1480]: time="2024-02-13T04:18:34.662985492Z" level=info msg="StartContainer for \"1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4\""
Feb 13 04:18:34.692679 systemd[1]: Started cri-containerd-1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4.scope.
Feb 13 04:18:34.711614 env[1480]: time="2024-02-13T04:18:34.711505133Z" level=info msg="StartContainer for \"1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4\" returns successfully"
Feb 13 04:18:34.714212 systemd[1]: cri-containerd-1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4.scope: Deactivated successfully.
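The mount-bpf-fs step that just ran (container 1dc9436f...) ensures the BPF filesystem is mounted at /sys/fs/bpf on the host, so Cilium's eBPF maps survive agent restarts; the "cni plugin not initialized" error just before it is expected, since networking stays NotReady until the agent comes up and writes its CNI configuration. The single mount operation behind that step, equivalent to `mount -t bpf bpffs /sys/fs/bpf`, looks roughly like this in Go (requires CAP_SYS_ADMIN; a real implementation would first check whether bpffs is already mounted):

```go
package main

import "golang.org/x/sys/unix"

func main() {
	// Mount the BPF pseudo-filesystem: an in-memory filesystem where
	// eBPF maps and programs can be pinned so they outlive the process
	// that created them.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		panic(err) // EBUSY here usually just means it is already mounted
	}
}
```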
Feb 13 04:18:34.742974 env[1480]: time="2024-02-13T04:18:34.742882024Z" level=info msg="shim disconnected" id=1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4
Feb 13 04:18:34.742974 env[1480]: time="2024-02-13T04:18:34.742944524Z" level=warning msg="cleaning up after shim disconnected" id=1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4 namespace=k8s.io
Feb 13 04:18:34.742974 env[1480]: time="2024-02-13T04:18:34.742956468Z" level=info msg="cleaning up dead shim"
Feb 13 04:18:34.749781 env[1480]: time="2024-02-13T04:18:34.749715630Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7127 runtime=io.containerd.runc.v2\n"
Feb 13 04:18:35.285445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4-rootfs.mount: Deactivated successfully.
Feb 13 04:18:35.652056 env[1480]: time="2024-02-13T04:18:35.651908594Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 04:18:35.666006 env[1480]: time="2024-02-13T04:18:35.665826097Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907\""
Feb 13 04:18:35.666943 env[1480]: time="2024-02-13T04:18:35.666843944Z" level=info msg="StartContainer for \"0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907\""
Feb 13 04:18:35.702619 systemd[1]: Started cri-containerd-0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907.scope.
Feb 13 04:18:35.736395 env[1480]: time="2024-02-13T04:18:35.736321699Z" level=info msg="StartContainer for \"0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907\" returns successfully"
Feb 13 04:18:35.738293 systemd[1]: cri-containerd-0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907.scope: Deactivated successfully.
Feb 13 04:18:35.779017 env[1480]: time="2024-02-13T04:18:35.778948921Z" level=info msg="shim disconnected" id=0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907
Feb 13 04:18:35.779282 env[1480]: time="2024-02-13T04:18:35.779012677Z" level=warning msg="cleaning up after shim disconnected" id=0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907 namespace=k8s.io
Feb 13 04:18:35.779282 env[1480]: time="2024-02-13T04:18:35.779036808Z" level=info msg="cleaning up dead shim"
Feb 13 04:18:35.788107 env[1480]: time="2024-02-13T04:18:35.788057545Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7183 runtime=io.containerd.runc.v2\n"
Feb 13 04:18:36.285447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907-rootfs.mount: Deactivated successfully.
Feb 13 04:18:36.661525 env[1480]: time="2024-02-13T04:18:36.661345332Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 04:18:36.675113 env[1480]: time="2024-02-13T04:18:36.675041803Z" level=info msg="CreateContainer within sandbox \"1f1ff4bb6e1e74033f42095f311c3beab4bc64e05876987aa19aea3d6d67484f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e1dc9aeac4f078167f3fac1421f10d993ec3e41f2ed47e9d3c45ae90377eecd3\""
Feb 13 04:18:36.675338 env[1480]: time="2024-02-13T04:18:36.675327737Z" level=info msg="StartContainer for \"e1dc9aeac4f078167f3fac1421f10d993ec3e41f2ed47e9d3c45ae90377eecd3\""
Feb 13 04:18:36.677472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371630504.mount: Deactivated successfully.
Feb 13 04:18:36.684056 systemd[1]: Started cri-containerd-e1dc9aeac4f078167f3fac1421f10d993ec3e41f2ed47e9d3c45ae90377eecd3.scope.
Feb 13 04:18:36.696525 env[1480]: time="2024-02-13T04:18:36.696470108Z" level=info msg="StartContainer for \"e1dc9aeac4f078167f3fac1421f10d993ec3e41f2ed47e9d3c45ae90377eecd3\" returns successfully"
Feb 13 04:18:36.847423 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 04:18:37.687980 kubelet[2585]: W0213 04:18:37.687880 2585 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd8bcb42_7f4f_4442_8dc0_226e88004fcb.slice/cri-containerd-51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649.scope WatchSource:0}: task 51abcc298719cd14b53f694a12b2ff047118f01baaac6ca39d4b8dfe6d408649 not found: not found
Feb 13 04:18:39.671890 systemd-networkd[1312]: lxc_health: Link UP
Feb 13 04:18:39.694389 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 13 04:18:39.695784 systemd-networkd[1312]: lxc_health: Gained carrier
Feb 13 04:18:40.796911 kubelet[2585]: W0213 04:18:40.796845 2585 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd8bcb42_7f4f_4442_8dc0_226e88004fcb.slice/cri-containerd-e5192b19c4e8ab7c9a32a590493b2e119004eedf02ac15e746da24dfd338e0ee.scope WatchSource:0}: task e5192b19c4e8ab7c9a32a590493b2e119004eedf02ac15e746da24dfd338e0ee not found: not found
Feb 13 04:18:40.933574 systemd-networkd[1312]: lxc_health: Gained IPv6LL
Feb 13 04:18:41.283669 kubelet[2585]: I0213 04:18:41.283645 2585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t59w7" podStartSLOduration=9.283614966 pod.CreationTimestamp="2024-02-13 04:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 04:18:37.698600619 +0000 UTC m=+1638.794087291" watchObservedRunningTime="2024-02-13 04:18:41.283614966 +0000 UTC m=+1642.379101548"
Feb 13 04:18:43.904097 kubelet[2585]: W0213 04:18:43.903981 2585 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd8bcb42_7f4f_4442_8dc0_226e88004fcb.slice/cri-containerd-1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4.scope WatchSource:0}: task 1dc9436f9bd55095af91f5818363b5c47f2489b91dd1bfbac0776767d34b39a4 not found: not found
Feb 13 04:18:47.014599 kubelet[2585]: W0213 04:18:47.014519 2585 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd8bcb42_7f4f_4442_8dc0_226e88004fcb.slice/cri-containerd-0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907.scope WatchSource:0}: task 0b5fe5251b1bfe86832dfb78e6386b22d8d335d7fb547fbd978081d613721907 not found: not found
Feb 13 04:19:19.003149 env[1480]: time="2024-02-13T04:19:19.003091494Z" level=info msg="StopPodSandbox for \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\""
Feb 13 04:19:19.003492 env[1480]: time="2024-02-13T04:19:19.003152174Z" level=info msg="TearDown network for sandbox \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\" successfully"
Feb 13 04:19:19.003492 env[1480]: time="2024-02-13T04:19:19.003176238Z" level=info msg="StopPodSandbox for \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\" returns successfully"
Feb 13 04:19:19.003492 env[1480]: time="2024-02-13T04:19:19.003360343Z" level=info msg="RemovePodSandbox for \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\""
Feb 13 04:19:19.003492 env[1480]: time="2024-02-13T04:19:19.003407423Z" level=info msg="Forcibly stopping sandbox \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\""
Feb 13 04:19:19.003492 env[1480]: time="2024-02-13T04:19:19.003473965Z" level=info msg="TearDown network for sandbox \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\" successfully"
Feb 13 04:19:19.005382 env[1480]: time="2024-02-13T04:19:19.005363201Z" level=info msg="RemovePodSandbox \"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130\" returns successfully"
Feb 13 04:19:19.005571 env[1480]: time="2024-02-13T04:19:19.005550078Z" level=info msg="StopPodSandbox for \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\""
Feb 13 04:19:19.005648 env[1480]: time="2024-02-13T04:19:19.005629069Z" level=info msg="TearDown network for sandbox \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\" successfully"
Feb 13 04:19:19.005681 env[1480]: time="2024-02-13T04:19:19.005647572Z" level=info msg="StopPodSandbox for \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\" returns successfully"
Feb 13 04:19:19.005859 env[1480]: time="2024-02-13T04:19:19.005847088Z" level=info msg="RemovePodSandbox for \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\""
Feb 13 04:19:19.005887 env[1480]: time="2024-02-13T04:19:19.005861353Z" level=info msg="Forcibly stopping sandbox \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\""
Feb 13 04:19:19.005908 env[1480]: time="2024-02-13T04:19:19.005893047Z" level=info msg="TearDown network for sandbox \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\" successfully"
Feb 13 04:19:19.007366 env[1480]: time="2024-02-13T04:19:19.007352295Z" level=info msg="RemovePodSandbox \"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a\" returns successfully"
Feb 13 04:19:19.007530 env[1480]: time="2024-02-13T04:19:19.007513151Z" level=info msg="StopPodSandbox for \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\""
Feb 13 04:19:19.007592 env[1480]: time="2024-02-13T04:19:19.007552796Z" level=info msg="TearDown network for sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" successfully"
Feb 13 04:19:19.007592 env[1480]: time="2024-02-13T04:19:19.007575975Z" level=info msg="StopPodSandbox for \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" returns successfully"
Feb 13 04:19:19.007723 env[1480]: time="2024-02-13T04:19:19.007710066Z" level=info msg="RemovePodSandbox for \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\""
Feb 13 04:19:19.007758 env[1480]: time="2024-02-13T04:19:19.007725825Z" level=info msg="Forcibly stopping sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\""
Feb 13 04:19:19.007781 env[1480]: time="2024-02-13T04:19:19.007772432Z" level=info msg="TearDown network for sandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" successfully"
Feb 13 04:19:19.008820 env[1480]: time="2024-02-13T04:19:19.008809478Z" level=info msg="RemovePodSandbox \"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca\" returns successfully"
Feb 13 04:19:33.391759 sshd[6784]: pam_unix(sshd:session): session closed for user core
Feb 13 04:19:33.393291 systemd[1]: sshd@109-139.178.90.101:22-139.178.68.195:34550.service: Deactivated successfully.
Feb 13 04:19:33.393867 systemd[1]: session-93.scope: Deactivated successfully.
Feb 13 04:19:33.394262 systemd-logind[1467]: Session 93 logged out. Waiting for processes to exit.
Feb 13 04:19:33.394887 systemd-logind[1467]: Removed session 93.
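The 04:19:19 block above is kubelet's periodic sandbox garbage collection: for each dead sandbox it issues StopPodSandbox (network teardown, then stop) followed by RemovePodSandbox, and containerd's remove handler re-runs a forcible stop internally (the "Forcibly stopping sandbox" lines), so a second TearDown of an already-torn-down sandbox still returns successfully. A sketch of that stop-then-remove pass over the same assumed CRI client setup as the earlier example:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeSandbox mirrors the StopPodSandbox -> RemovePodSandbox pairs in
// the log. Both calls are idempotent; containerd additionally performs
// its own forcible stop inside RemovePodSandbox.
func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		return err
	}
	_, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
	return err
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandbox ids from the garbage-collection pass above.
	for _, id := range []string{
		"eccd9cbc674e0a028a2bac13509202ac98c19122635442694e20910158b28130",
		"615378b4c104a7c034a8a8e5d596f5cea90a55d995e93c8a05ea23d4877ac64a",
		"0d2fa2b92a594ecb6028cfac5affd46db6302de410a33878ae6e7b1e16acafca",
	} {
		if err := removeSandbox(context.Background(), rt, id); err != nil {
			panic(err)
		}
	}
}
```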